https://en.wikipedia.org/wiki/Fredkin%20gate
The Fredkin gate (also CSWAP gate and conservative logic gate) is a computational circuit suitable for reversible computing, invented by Edward Fredkin. It is universal, which means that any logical or arithmetic operation can be constructed entirely of Fredkin gates. The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1. Definition The basic Fredkin gate is a controlled swap gate that maps three inputs onto three outputs . The C input is mapped directly to the C output. If C = 0, no swap is performed; maps to , and maps to . Otherwise, the two outputs are swapped so that maps to , and maps to . It is easy to see that this circuit is reversible, i.e., "undoes" itself when run backwards. A generalized n×n Fredkin gate passes its first n−2 inputs unchanged to the corresponding outputs, and swaps its last two outputs if and only if the first n−2 inputs are all 1. The Fredkin gate is the reversible three-bit gate that swaps the last two bits if, and only if, the first bit is 1. It has the useful property that the numbers of 0s and 1s are conserved throughout, which in the billiard ball model means the same number of balls are output as input. This corresponds nicely to the conservation of mass in physics, and helps to show that the model is not wasteful. Truth functions with AND, OR, XOR, and NOT The Fredkin gate can be defined using truth functions with AND, OR, XOR, and NOT, as follows: Cout= Cin where Alternatively: Cout= Cin Completeness One way to see that the Fredkin gate is universal is to observe that it can be used to implement AND, NOT and OR: If , then . If , then . If and , then . Example Three-bit full adder (add with carry) using five Fredkin gates. The "g" garbage output bit is if , and if . Inputs on the left, including two constants, go through three gates to quickly determine the parity. The 0 and 1 bits
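To make the controlled-swap behaviour and the universality argument above concrete, here is a minimal Python sketch (illustrative only, not taken from the article) using one common labelling of the three lines as (c, i1, i2) → (c, o1, o2):

```python
def fredkin(c, i1, i2):
    """Controlled swap: pass c through; swap i1 and i2 only when c == 1."""
    if c == 1:
        return c, i2, i1
    return c, i1, i2

# Reversibility: applying the gate twice restores the original inputs.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# Universality via fixed inputs:
#   with i2 = 0, the second swapped output equals c AND i1;
#   with i1 = 1 and i2 = 0, the first swapped output equals NOT c,
#   and the remaining output is a copy of c (fan-out).
for c in (0, 1):
    for x in (0, 1):
        assert fredkin(c, x, 0)[2] == (c & x)   # AND
    assert fredkin(c, 1, 0)[1] == 1 - c         # NOT c
    assert fredkin(c, 1, 0)[2] == c             # fan-out copy of c
```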
https://en.wikipedia.org/wiki/Bernstein%E2%80%93Sato%20polynomial
In mathematics, the Bernstein–Sato polynomial is a polynomial related to differential operators, introduced independently by and , . It is also known as the b-function, the b-polynomial, and the Bernstein polynomial, though it is not related to the Bernstein polynomials used in approximation theory. It has applications to singularity theory, monodromy theory, and quantum field theory. gives an elementary introduction, while and give more advanced accounts. Definition and properties If is a polynomial in several variables, then there is a non-zero polynomial and a differential operator with polynomial coefficients such that The Bernstein–Sato polynomial is the monic polynomial of smallest degree amongst such polynomials . Its existence can be shown using the notion of holonomic D-modules. proved that all roots of the Bernstein–Sato polynomial are negative rational numbers. The Bernstein–Sato polynomial can also be defined for products of powers of several polynomials . In this case it is a product of linear factors with rational coefficients. generalized the Bernstein–Sato polynomial to arbitrary varieties. Note, that the Bernstein–Sato polynomial can be computed algorithmically. However, such computations are hard in general. There are implementations of related algorithms in computer algebra systems RISA/Asir, Macaulay2, and SINGULAR. presented algorithms to compute the Bernstein–Sato polynomial of an affine variety together with an implementation in the computer algebra system SINGULAR. described some of the algorithms for computing Bernstein–Sato polynomials by computer. Examples If then so the Bernstein–Sato polynomial is If then so The Bernstein–Sato polynomial of x2 + y3 is If tij are n2 variables, then the Bernstein–Sato polynomial of det(tij) is given by which follows from where Ω is Cayley's omega process, which in turn follows from the Capelli identity. Applications If is a non-negative polynomial then , initially
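The defining functional equation and the sum-of-squares example referred to above were stripped from the extract; a standard reconstruction (supplied here for readability, not quoted from the article) reads:

```latex
% Defining relation: for a polynomial f there exist a differential operator P(s)
% with polynomial coefficients and a nonzero polynomial b(s) such that
\[
  P(s)\, f^{\,s+1} = b(s)\, f^{\,s};
\]
% the Bernstein–Sato polynomial b_f(s) is the monic such b(s) of least degree.
% Classical example, f = x_1^2 + \cdots + x_n^2:
\[
  \frac{1}{4}\sum_{i=1}^{n}\frac{\partial^2}{\partial x_i^2}\, f^{\,s+1}
    = (s+1)\Bigl(s+\tfrac{n}{2}\Bigr) f^{\,s},
  \qquad\text{so}\qquad b_f(s) = (s+1)\Bigl(s+\tfrac{n}{2}\Bigr).
\]
```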
https://en.wikipedia.org/wiki/Portmap
The port mapper (rpc.portmap or just portmap, or rpcbind) is an Open Network Computing Remote Procedure Call (ONC RPC) service that runs on network nodes that provide other ONC RPC services. Version 2 of the port mapper protocol maps ONC RPC program number/version number pairs to the network port number for that version of that program. When an ONC RPC server is started, it will tell the port mapper, for each particular program number/version number pair it implements for a particular transport protocol (TCP or UDP), what port number it is using for that particular program number/version number pair on that transport protocol. Clients wishing to make an ONC RPC call to a particular version of a particular ONC RPC service must first contact the port mapper on the server machine to determine the actual TCP or UDP port to use. Versions 3 and 4 of the protocol, called the rpcbind protocol, map a program number/version number pair, and an indicator that specifies a transport protocol, to a transport-layer endpoint address for that program number/version number pair on that transport protocol. The port mapper service always uses TCP or UDP port 111; a fixed port is required for it, as a client would not be able to get the port number for the port mapper service from the port mapper itself. The port mapper must be started before any other RPC servers are started. The port mapper service first appeared in SunOS 2.0. Example portmap instance This shows the different programs and their versions, and which ports they use. For example, it shows that NFS is running, both version 2 and 3, and can be reached at TCP port 2049 or UDP port 2049, depending on what transport protocol the client wants to use, and that the mount protocol, both version 1 and 2, is running, and can be reached at UDP port 644 or TCP port 645, depending on what transport protocol the client wants to use.
$ rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    1000
https://en.wikipedia.org/wiki/Cell%20disruption
Cell disruption is a method or process for releasing biological molecules from inside a cell. Methods The production of biologically interesting molecules using cloning and culturing methods allows the study and manufacture of relevant molecules. Except for excreted molecules, cells producing molecules of interest must be disrupted. This page discusses various methods. Another method of disruption is called cell unroofing. Bead method A common laboratory-scale mechanical method for cell disruption uses glass, ceramic, or steel beads, in diameter, mixed with a sample suspended in an aqueous solution. First developed by Tim Hopkins in the late 1970s, the sample and bead mix is subjected to high-level agitation by stirring or shaking. Beads collide with the cellular sample, cracking open the cell to release the intracellular components. Unlike some other methods, mechanical shear is moderate during homogenization, resulting in excellent membrane or subcellular preparations. The method, often called "bead beating", works well for all types of cellular material - from spores to animal and plant tissues. It is the most widely used method of yeast lysis, and can yield breakage of well over 50% (up to 95%). It has the advantage over other mechanical cell disruption methods of being able to disrupt very small sample sizes, of processing many samples at a time with no cross-contamination concerns, and of not releasing potentially harmful aerosols in the process. In the simplest example of the method, an equal volume of beads is added to a cell or tissue suspension in a test tube and the sample is vigorously mixed on a common laboratory vortex mixer. While processing times are slow, taking 3–10 times longer than that in specialty shaking machines, it works well for easily disrupted cells and is inexpensive. Successful bead beating is dependent not only on design features of the shaking machine (which take into consideration shaking oscillations frequency, shaking throw or distan
https://en.wikipedia.org/wiki/Actor%20model
The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization). The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi. History According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network." Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model. Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research. Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems. Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains and Gul Agha's 1985 dissertation which further developed a transition-based s
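As a rough illustration of the behaviours listed above (receive a message, make a local decision, update private state), here is a minimal Python sketch using one thread and a mailbox queue per actor; the class and method names are invented for the example and do not describe any particular actor library.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and one thread that handles
    messages sequentially, so no user-level locking is needed."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._state = 0                       # private state, touched only by this actor's thread
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        """Asynchronous message delivery: the only way to affect the actor."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "stop":
                break
            self._state += message            # local decision in response to a message

    def join(self):
        self._thread.join()
        return self._state

counter = Actor()
for i in range(5):
    counter.send(i)                            # other components interact only through messages
counter.send("stop")
print(counter.join())                          # -> 10
```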
https://en.wikipedia.org/wiki/Branch%20target%20predictor
In computer architecture, a branch target predictor is the part of a processor that predicts the target, i.e. the address of the instruction that is executed next, of a taken conditional branch or an unconditional branch instruction before the target of the branch instruction is computed by the execution unit of the processor. Branch target prediction is not the same as branch prediction which attempts to guess whether a conditional branch will be taken or not-taken (i.e., binary). In more parallel processor designs, as the instruction cache latency grows longer and the fetch width grows wider, branch target extraction becomes a bottleneck. The recurrence is: Instruction cache fetches block of instructions Instructions in block are scanned to identify branches First predicted taken branch is identified Target of that branch is computed Instruction fetch restarts at branch target In machines where this recurrence takes two cycles, the machine loses one full cycle of fetch after every predicted taken branch. As predicted branches happen every 10 instructions or so, this can force a substantial drop in fetch bandwidth. Some machines with longer instruction cache latencies would have an even larger loss. To ameliorate the loss, some machines implement branch target prediction: given the address of a branch, they predict the target of that branch. A refinement of the idea predicts the start of a sequential run of instructions given the address of the start of the previous sequential run of instructions. This predictor reduces the recurrence above to: Hash the address of the first instruction in a run Fetch the prediction for the addresses of the targets of branches in that run of instructions Select the address corresponding to the branch predicted taken As the predictor RAM can be 5–10% of the size of the instruction cache, the fetch happens much faster than the instruction cache fetch, and so this recurrence is much faster. If it were not fast eno
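A branch target buffer is one common way to implement this kind of prediction. The toy Python sketch below (purely illustrative, not a description of any specific processor) caches the last observed target for each branch address and returns it on the next fetch of that address:

```python
class BranchTargetBuffer:
    """Toy direct-mapped branch target buffer: indexed by a hash of the branch
    address, each entry remembers the last target seen for that branch."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.table = [None] * entries          # each slot holds (branch_pc, predicted_target)

    def _index(self, pc):
        return (pc >> 2) % self.entries        # simple hash of the instruction address

    def predict(self, pc):
        slot = self.table[self._index(pc)]
        if slot is not None and slot[0] == pc:
            return slot[1]                     # hit: fetch can redirect immediately
        return None                            # miss: wait for the execution unit to compute the target

    def update(self, pc, actual_target):
        self.table[self._index(pc)] = (pc, actual_target)

btb = BranchTargetBuffer()
btb.update(0x4000, 0x4ABC)                     # a taken branch at 0x4000 jumped to 0x4ABC
assert btb.predict(0x4000) == 0x4ABC
```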
https://en.wikipedia.org/wiki/Flanders%20Mathematics%20Olympiad
The Flanders Mathematics Olympiad (; VWO) is a Flemish mathematics competition for students in grades 9 through 12. Two tiers of this competition exist: one for 9th- and 10th-graders (; JWO), and one for 11th- and 12th-graders. It is a feeder competition for the International Mathematical Olympiad. History The Olympiad was founded in 1985, replacing a system previously used since 1969 in which Flemish students were nominated to the IMO by their teachers. , 20,000 students participate annually. In 2015, the founders of the Olympiad, Paul Igodt of the Katholieke Universiteit Leuven and Frank De Clerck of Ghent University, were given the career award for science communication of the Royal Flemish Academy of Belgium for Science and the Arts for their work. Procedure The competition lasts three rounds. During the first and second rounds, students must answer 30 multiple-choice mathematics problems. The first round occurs in schools, and the second round is organized by province, and is administered at various universities. The first round has a three-hour time limit for completion, the second round has a two-hour time limit. The final round consists of four problems which require a detailed and coherent essay-type response. After the final round, three contestants are selected to compete in the International Mathematical Olympiad, making up half of the team from Belgium; the other half of the team comes from Wallonia. References External links Official site (in Dutch) International Mathematical Olympiad
https://en.wikipedia.org/wiki/List%20of%20personal%20information%20managers
The following is a list of personal information managers (PIMs) and online organizers. Applications Discontinued applications See also Comparisons Comparison of email clients Comparison of file managers Comparison of note-taking software Comparison of reference management software Comparison of text editors Comparison of wiki software Comparison of word processors Lists List of outliners Comparison of project management software List of text editors List of wiki software External links Lists of software
https://en.wikipedia.org/wiki/SHA-2
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA) and first published in 2001. They are built using the Merkle–Damgård construction, from a one-way compression function itself built using the Davies–Meyer structure from a specialized block cipher. SHA-2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256. SHA-256 and SHA-512 are novel hash functions computed with eight 32-bit and 64-bit words, respectively. They use different shift amounts and additive constants, but their structures are otherwise virtually identical, differing only in the number of rounds. SHA-224 and SHA-384 are truncated versions of SHA-256 and SHA-512 respectively, computed with different initial values. SHA-512/224 and SHA-512/256 are also truncated versions of SHA-512, but the initial values are generated using the method described in Federal Information Processing Standards (FIPS) PUB 180-4. SHA-2 was first published by the National Institute of Standards and Technology (NIST) as a U.S. federal standard. The SHA-2 family of algorithms are patented in the U.S.. The United States has released the patent under a royalty-free license. As of 2011, the best public attacks break preimage resistance for 52 out of 64 rounds of SHA-256 or 57 out of 80 rounds of SHA-512, and collision resistance for 46 out of 64 rounds of SHA-256. Hash standard With the publication of FIPS PUB 180-2, NIST added three additional hash functions in the SHA family. The algorithms are collectively known as SHA-2, named after their digest lengths (in bits): SHA-256, SHA-384, and SHA-512. The algorithms were first published in 2001 in the draft FIPS PUB 180-2, at which time public review and comments were accepted. In August 2002, FIPS PUB 180-2 became the new Sec
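The digest lengths listed above can be checked directly with Python's standard hashlib module, which implements the FIPS 180-4 functions; a short sketch:

```python
import hashlib

message = b"abc"
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).hexdigest()
    # digest length in bits = 4 bits per hex character
    print(f"{name}: {len(digest) * 4} bits -> {digest[:16]}...")

# SHA-512/224 and SHA-512/256 are exposed only when the underlying OpenSSL provides them:
if "sha512_256" in hashlib.algorithms_available:
    print(hashlib.new("sha512_256", message).hexdigest())
```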
https://en.wikipedia.org/wiki/Quantity%20surveyor
A quantity surveyor (QS) is a construction industry professional with expert knowledge on construction costs and contracts. Qualified professional quantity surveyors are known as Chartered Surveyors (Members and Fellows of RICS) in the UK and Certified Quantity Surveyors (a designation of AIQS) in Australia and other countries. In some countries such as Canada, South Africa, Kenya and Mauritius, qualified quantity surveyors are known as Professional Quantity Surveyors, a title protected by law. Quantity surveyors are responsible for managing all aspects of the contractual and financial side of construction projects. They help to ensure that the construction project is completed within its projected budget. Quantity surveyors are also hired by contractors to help with the valuation of construction work for the contractor, help with bidding and project budgeting, and the submission of bills to the client. Duties The duties of a quantity surveyor are as follows: Conducting financial feasibility studies for development projects. Cost estimate, cost planning and cost management. Analyzing terms and conditions in the contract. Predicting potential risks in the project and taking precautions to mitigate such. Forecasting the costs of different materials needed for the project. Prepare tender documents, contracts, budgets and other documentation. Take note of changes made and adjusting the budget accordingly. Tender management including preparation of bills of quantities, contract conditions and assembly of tender documents Contract management and contractual advice Valuation of construction work Claims and dispute management Lifecycle costing analysis Reinstatement Cost Assessment for Insurance Purposes. Professional associations RICS – The Royal Institution of Chartered Surveyors AIQS – Australian Institute of Quantity Surveyors IQSSL - Institute of Quantity Surveyors Sri Lanka ASAQS – Association of South African Quantity Surveyors BSIJ – Building
https://en.wikipedia.org/wiki/Je%C5%A1t%C4%9Bd%20Tower
Ještěd Tower () is a television transmitter on the top of Mount Ještěd near Liberec in the Czech Republic. It is high. It is made of reinforced concrete shaped in a hyperboloid form. The tower's architect is Karel Hubáček who was assisted by Zdeněk Patrman, involved in building statics, and by Otakar Binar, who designed the interior furnishing. It took the team three years to finalize the structure design (1963–1966). The construction itself took seven years to finish (1966–1973). The hyperboloid shape was chosen since it naturally extends the silhouette of the hill and, moreover, well resists the extreme climate conditions on the summit of Mount Ještěd. The design combines the operation of a mountain-top hotel and a television transmitter. The hotel and the restaurant are located in the lowest sections of the tower. Before the construction of the current hotel, two huts stood near the mountain summit: one was built in the middle of the 19th century and the other was added in the early 20th century. Both buildings had a wooden structure and both burned to the ground in the 1960s. The tower is one of the dominant features of the North Bohemian landscape. The gallery on the ground floor and the restaurant on the first floor offers views as far as to Poland and Germany. The tower has been on the list of the Czech cultural monuments since 1998, becoming a national cultural monument in 2006. In 2007 it was entered on the Tentative List of UNESCO World Heritage sites. In 1969 Karel Hubáček was awarded the prestigious Perret Prize of the International Union of Architects (UIA). Access The monument is accessible by road and by the Ještěd cable car from the foot of the mountain. However since a crash in 2021, the system has been closed indefinitely. Construction After the existing Ještěd lodge burned down in January 1963, a decision was made by Restaurace Liberec (the company that used to manage the burned-down lodges) and the Prague Radio Communications Administration
https://en.wikipedia.org/wiki/Subquotient
In the mathematical fields of category theory and abstract algebra, a subquotient is a quotient object of a subobject. Subquotients are particularly important in abelian categories, and in group theory, where they are also known as sections, though this conflicts with a different meaning in category theory. In the literature about sporadic groups wordings like " is involved in " can be found with the apparent meaning of " is a subquotient of ." A quotient of a subrepresentation of a representation (of, say, a group) might be called a subquotient representation; e.g., Harish-Chandra's subquotient theorem. Examples Of the 26 sporadic groups, the 20 subquotients of the monster group are referred to as the "Happy Family", whereas the remaining 6 are called "pariah groups." Order relation The relation subquotient of is an order relation. Proof of transitivity for groups Notation For group , subgroup of and normal subgroup of the quotient group is a subquotient of Let be subquotient of , furthermore be subquotient of and be the canonical homomorphism. Then all vertical () maps with suitable are surjective for the respective pairs The preimages and are both subgroups of containing and it is and , because every has a preimage with Moreover, the subgroup is normal in As a consequence, the subquotient of is a subquotient of in the form Relation to cardinal order In constructive set theory, where the law of excluded middle does not necessarily hold, one can consider the relation subquotient of as replacing the usual order relation(s) on cardinals. When one has the law of the excluded middle, then a subquotient of is either the empty set or there is an onto function . This order relation is traditionally denoted If additionally the axiom of choice holds, then has a one-to-one function to and this order relation is the usual on corresponding cardinals. See also Homological algebra Subcountable References Category theory Abstract a
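The group-theoretic definition whose symbols were lost in the extract can be stated compactly as follows (a standard formulation, reconstructed rather than quoted):

```latex
% A subquotient of a group G is a quotient of a subgroup:
\[
  H/N \ \text{is a subquotient of } G \quad\text{whenever}\quad N \trianglelefteq H \le G.
\]
% Transitivity (the proof sketched in the text): if A/B is a subquotient of G and
% C/D is a subquotient of A/B, then C/D is isomorphic to a subquotient of G,
% obtained by pulling C and D back along the canonical homomorphism A \to A/B.
```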
https://en.wikipedia.org/wiki/Lists%20of%20human%20genes
Lists of human genes are as follows: By chromosome Human chromosomes, each of which contains an incomplete list of genes located on that chromosome, are as follows: Chromosome 1 Chromosome 2 Chromosome 3 Chromosome 4 Chromosome 5 Chromosome 6 Chromosome 7 Chromosome 8 Chromosome 9 Chromosome 10 Chromosome 11 Chromosome 12 Chromosome 13 Chromosome 14 Chromosome 15 Chromosome 16 Chromosome 17 Chromosome 18 Chromosome 19 Chromosome 20 Chromosome 21 Chromosome 22 X Chromosome Y Chromosome Protein-coding genes The lists below constitute a complete list of all known human protein-coding genes: List of human protein-coding genes 1, covers genes A1BG–EPC1 List of human protein-coding genes 2, covers genes EPC2–MTMR7 List of human protein-coding genes 3, covers genes MTMR8–SLC20A2 List of human protein-coding genes 4, covers genes SLC22A1–ZZZ3 Transcription factors 1639 genes which encode proteins that are known or expected to function as human transcription factors: List of human transcription factors See also List of enzymes List of proteins List of disabled human pseudogenes External links iHOP-Protein Information Database NextBio-Life Science Search Engine Entrez-Cross Database Query Search System TranscriptomeBrowser Genetics-related lists
https://en.wikipedia.org/wiki/Loose%20coupling
In computing and systems design, a loosely coupled system is one in which components are weakly associated (have breakable relationships) with each other, so that changes in one component have the least possible effect on the existence or performance of another component, and in which each component has, or makes use of, little or no knowledge of the definitions of other separate components. Subareas include the coupling of classes, interfaces, data, and services. Loose coupling is the opposite of tight coupling. Advantages and disadvantages Components in a loosely coupled system can be replaced with alternative implementations that provide the same services. Components in a loosely coupled system are less constrained to the same platform, language, operating system, or build environment. If systems are decoupled in time, it is difficult to also provide transactional integrity; additional coordination protocols are required. Data replication across different systems provides loose coupling (in availability), but creates issues in maintaining consistency (data synchronization). In integration Loose coupling in broader distributed system design is achieved by the use of transactions, queues provided by message-oriented middleware, and interoperability standards. Four types of autonomy, which promote loose coupling, are: reference autonomy, time autonomy, format autonomy, and platform autonomy. Loose coupling is an architectural principle and design goal in service-oriented architectures; eleven forms of loose coupling and their tight coupling counterparts are: physical connections via mediator, asynchronous communication style, simple common types only in data model, weak type system, data-centric and self-contained messages, distributed control of process logic, dynamic binding (of service consumers and providers), platform independence, business-level compensation rather than system-level transactions, deployment at different times, implicit upgrades in ve
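As a small illustration of the idea (not from the article), the sketch below decouples a publisher from its consumers through a simple in-process message bus, so the publisher needs no knowledge of who consumes its events; all names are invented for the example.

```python
class MessageBus:
    """Minimal publish/subscribe mediator: publishers and subscribers only
    know the bus and a topic name, never each other."""
    def __init__(self):
        self._subscribers = {}                 # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

bus = MessageBus()

# The order component publishes events; it holds no reference to billing or shipping.
def place_order(order_id):
    bus.publish("order_placed", {"id": order_id})

# Independent components attach themselves without the publisher changing at all.
bus.subscribe("order_placed", lambda evt: print("billing:", evt["id"]))
bus.subscribe("order_placed", lambda evt: print("shipping:", evt["id"]))

place_order(42)
```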
https://en.wikipedia.org/wiki/Const%20%28computer%20programming%29
In some programming languages, const is a type qualifier (a keyword applied to a data type) that indicates that the data is read-only. While this can be used to declare constants, const in the C family of languages differs from similar constructs in other languages in being part of the type, and thus has complicated behavior when combined with pointers, references, composite data types, and type-checking. In other languages, the data is not in a single memory location, but copied at compile time on each use. Languages which use it include C, C++, D, JavaScript, Julia, and Rust. Introduction When applied in an object declaration, it indicates that the object is a constant: its value may not be changed, unlike a variable. This basic use – to declare constants – has parallels in many other languages. However, unlike in other languages, in the C family of languages the const is part of the type, not part of the object. For example, in C, declares an object x of int const type – the const is part of the type, as if it were parsed "(int const) x" – while in Ada, declares a constant (a kind of object) X of INTEGER type: the constant is part of the object, but not part of the type. This has two subtle results. Firstly, const can be applied to parts of a more complex type – for example, int const * const x; declares a constant pointer to a constant integer, while int const * x; declares a variable pointer to a constant integer, and int * const x; declares a constant pointer to a variable integer. Secondly, because const is part of the type, it must match as part of type-checking. For example, the following code is invalid: void f(int& x); // ... int const i; f(i); because the argument to f must be a variable integer, but i is a constant integer. This matching is a form of program correctness, and is known as const-correctness. This allows a form of programming by contract, where functions specify as part of their type signature whether they modify their arguments or not, an
https://en.wikipedia.org/wiki/Digital%20Devil%20Story%3A%20Megami%20Tensei
Digital Devil Story: Megami Tensei refers to two distinct role-playing video games based on a trilogy of science fantasy novels by Japanese author Aya Nishitani. One version was developed by Atlus and published by Namco in 1987 for the Famicom—Atlus would go on to create further games in the Megami Tensei franchise. A separate version for personal computers was developed and published by Telenet Japan with assistance from Atlus during the same year. The story sees Japanese high school students Akemi Nakajima and Yumiko Shirasagi combat the forces of Lucifer, unleashed by a demon summoning program created by Nakajima. The gameplay features first-person dungeon crawling and turn-based battles or negotiation with demons in the Famicom version, and a journey through a hostile labyrinth as Nakajima featuring real-time combat in the Telenet version. Development on both versions of the video game began as part of a multimedia expansion of Nishitani's book series. Nishitani was deeply involved with the design and scenario. The gameplay mechanics in Atlus' role-playing version of the game were based on the Wizardry series, but with an added demon negotiation system considered revolutionary for the time. Atlus and Telenet Japan worked on their projects simultaneously, playing against genre expectations for their respective platforms. The Famicom version proved the more popular with both critics and players, leading to the development of the 1990 Famicom sequel Digital Devil Story: Megami Tensei II. An enhanced port of both games for the Super Famicom was released in 1995. Gameplay The Famicom version of Digital Devil Story: Megami Tensei is a traditional role-playing video game in which the player takes control of a party composed of two humans and a number of demons. The party explores a large dungeon using a first-person perspective. The human characters use a variety of weapons and items, with the primary weapons being swords and guns. The items, which can range from h
https://en.wikipedia.org/wiki/UltraMon
UltraMon is a commercial application for Microsoft Windows users who use multiple displays. UltraMon is developed by Realtime Soft, a small software development company based in Bern, Switzerland. UltraMon currently contains the following features: Two additional title bar buttons for managing windows among the monitors Customizable button location A taskbar on each additional monitor that displays tasks on that monitor Pre-defined application window placement Display profiles for multiple pre-defined display settings Spannable wallpaper option Different wallpapers for different monitors Advanced multiple-monitor screensaver management Display mirroring (Forces to software rendering) Overcome Windows' limit of 10 displays UltraMon is distributed as trialware, requiring the user to purchase the software after a trial period (30 days). UltraMon 3.3.0 is available with full Windows XP, Vista, 7, and 8 support. See also Multi-monitor References Utilities for Windows Display technology
https://en.wikipedia.org/wiki/WFNA%20%28TV%29
WFNA (channel 55) is a television station licensed to Gulf Shores, Alabama, United States, serving as the CW outlet for southwest Alabama and northwest Florida. It is owned and operated by network majority owner Nexstar Media Group alongside Mobile-licensed CBS affiliate WKRG-TV (channel 5). The two stations share studios with several radio stations owned by iHeartMedia on Broadcast Drive in southwest Mobile; WFNA's transmitter is located in unincorporated Baldwin County near Spanish Fort, Alabama. History Prior to the station's sign-on, WFNA's call letters were originally planned to be WGMP (standing for "Gulf Shores, Mobile, Pensacola"). The station first signed on the air as WBPG on September 2, 2001; it replaced WFGX (channel 35) as the area's WB affiliate after the station reverted to independent status four days earlier on August 31. The station was originally owned by Pegasus Broadcasting. At the time, WFGX's signal was all but unviewable over-the-air on the Alabama side of the market, but WBPG's signal decently covered the entire market. In 2003, Emmis Communications purchased the station, which created a duopoly with Fox affiliate WALA-TV (channel 10); WBPG's operations were subsequently merged with WALA at the latter station's facility on Satchel Paige Drive. LIN TV Corporation acquired WALA-TV on November 30, 2005; instead of acquiring WBPG directly along with it, the company instead began to operate the station under a local marketing agreement. Just over seven months later, on July 7, 2006, LIN purchased WBPG outright. On January 24, 2006, CBS Corporation and Time Warner announced the shutdown of both UPN and The WB effective that fall. In place of these two networks, a new "fifth" network—"The CW Television Network" (its name representing the first initials of parent companies CBS and Warner Bros.), jointly owned by both companies, would launch, with a lineup primarily featuring the most popular programs from both networks. WBPG joined The CW on Se
https://en.wikipedia.org/wiki/Mathematical%20Kangaroo
Mathematical Kangaroo (also known as Kangaroo challenge, or jeu-concours Kangourou in French) is an international mathematics competition in over 77 countries. There are six levels of participation, ranging from grade 1 to grade 12. The competition is held annually on the third Thursday of March. The challenge consists of problems in multiple-choice form that are not standard notebook problems and come from a variety of topics. Besides basic computational skills, they require inspiring ideas, perseverance, creativity and imagination, logical thinking, and other problem-solving strategies. Often there are small stories, intriguing problems, and surprising results, which encourage discussions with friends and family. It had over 6 million participants from 57 countries in 2014. As of 2022, it has 84 participating countries and claims to be the largest competition for school students in the world. History Mathematicians in Australia came up with the idea to organize a competition that underlines the joy of mathematics and encourages mathematical problem-solving. A multiple-choice competition was created, which has been taking place in Australia since 1978. At the same time, both in France and all over the world, a widely supported movement emerged towards the popularization of mathematics. The idea of a multiple-choice competition then sprouted from two French teachers, André Deledicq and Jean Pierre Boudine, who visited their Australian colleagues Peter O’Halloran and Peter Taylor and witnessed their competition. In 1990, they decided to start a challenge in France under the name Kangourou des Mathématiques in order to pay tribute to their Australian colleagues. The particularity of this challenge was the desire for massive distribution of documentation, offering a gift to each participant (books, small games, fun objects, scientific and cultural trips). The first Kangaroo challenge took place on May 15, 1991. Since it was immediately very successful, shortly afterwa
https://en.wikipedia.org/wiki/Continuous%20obsolescence
Continuous obsolescence or perpetual revolution is a phenomenon where industry trends, or other items that do not immediately correspond to technical needs, mandate a continual readaptation of a system. Such work does not increase the usefulness of the system, but is required for the system to continue fulfilling its functions. Unintentional reasons Continuous obsolescence may be unintentional. One type of largely unintentional case of continuous obsolescence occurs when the rising demand for graphics- and experience-intensive video games collides with a long development time for a new title. While a game may promise to be acceptable or even revolutionary if released on schedule, a delay exposes it to the risk of being unable to compete with better games released during the delay (e.g. Daikatana), or of being continually rewritten to take advantage of better technologies as they become available (e.g. Duke Nukem Forever). This last behavior is an example of a software development anti-pattern. Intentional reasons Continuous obsolescence may also be intentional, for example when an application tries to include compatibility for the output of another widely used application. In this case, the software house responsible for the latter may vary its output format repeatedly, forcing the developer of the former to continuously expend resources to keep its compatibility up-to-date, rather than using those resources to expand features or otherwise make the product more competitive. Many accuse Microsoft of doing exactly this with the file formats used by its Office application suite. See also Planned obsolescence References Anti-patterns Obsolescence
https://en.wikipedia.org/wiki/Temperature%20gradient%20gel%20electrophoresis
Temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE) are forms of electrophoresis which use either a temperature or chemical gradient to denature the sample as it moves across an acrylamide gel. TGGE and DGGE can be applied to nucleic acids such as DNA and RNA, and (less commonly) proteins. TGGE relies on temperature dependent changes in structure to separate nucleic acids. DGGE separates genes of the same size based on their different denaturing ability which is determined by their base pair sequence. DGGE was the original technique, and TGGE a refinement of it. History DGGE was invented by Leonard Lerman, while he was a professor at SUNY Albany. The same equipment can be used for analysis of protein, which was first done by Thomas E. Creighton of the MRC Laboratory of Molecular Biology, Cambridge, England. Similar looking patterns are produced by proteins and nucleic acids, but the fundamental principles are quite different. TGGE was first described by Thatcher and Hodson and by Roger Wartell of Georgia Tech. Extensive work was done by the group of Riesner in Germany. Commercial equipment for DGGE is available from Bio-Rad, INGENY and CBS Scientific; a system for TGGE is available from Biometra. Temperature gradient gel electrophoresis DNA has a negative charge and so will move to the positive electrode in an electric field. A gel is a molecular mesh, with holes roughly the same size as the diameter of the DNA string. When an electric field is applied, the DNA will begin to move through the gel, at a speed roughly inversely proportional to the length of the DNA molecule (shorter lengths of DNA travel faster) — this is the basis for size dependent separation in standard electrophoresis. In TGGE there is also a temperature gradient across the gel. At room temperature, the DNA will exist stably in a double-stranded form. As the temperature is increased, the strands begin to separate (melting), and the speed at whic
https://en.wikipedia.org/wiki/Cis-3-Hexenal
cis-3-Hexenal, also known as (Z)-3-hexenal and leaf aldehyde, is an organic compound with the formula CH3CH2CH=CHCH2CHO. It is classified as an unsaturated aldehyde. It is a colorless liquid and an aroma compound with an intense odor of freshly cut grass and leaves. Occurrence It is one of the major volatile compounds in ripe tomatoes, although it tends to isomerize into the conjugated trans-2-hexenal. It is produced in small amounts by most plants and it acts as an attractant to many predatory insects. It is also a pheromone in many insect species. See also cis-3-hexen-1-ol has a similar but weaker odor and is used in flavors and perfumes. 1-Hexanol, another volatile organic compound, also considered responsible for the freshly mowed grass odor External links Hexenal References Flavors Insect pheromones Insect ecology Alkenals
https://en.wikipedia.org/wiki/Freezer%20burn
Freezer burn is a condition that occurs when frozen food has been damaged by dehydration and oxidation due to air reaching the food. It is generally caused by food not being securely wrapped in air-tight packaging. Freezer burn appears as grayish-brown leathery spots on frozen food and occurs when air reaches the food's surface and dries the product. Color changes result from chemical changes in the food's pigment. Freezer burn does not make the food unsafe; it merely causes dry spots in foods. The food remains usable and edible, but removing the freezer burns will improve the flavor. The dehydration of freezer-burned food is caused by water sublimating from the food into the surrounding atmosphere. The lost water may then be deposited elsewhere in the food and packaging as snow-like crystals. See also Freeze drying Ice crystals References Inline citations General references United States Food and Drug Administration Food preservation Frozen food
https://en.wikipedia.org/wiki/Sandvine
Sandvine Incorporated is an application and network intelligence company based in Waterloo, Ontario. Sandvine markets network policy control products that are designed to implement broad network policies, including Internet censorship, congestion management, and security. Sandvine's products target Tier 1 and Tier 2 networks for consumers, including cable, DSL, and mobile. Operation Sandvine classifies application traffic across mobile and local networks by user, device, network type, location and other parameters. The company then applies machine learning-based analytics to real-time data and makes technical policy changes. As of 2021, Sandvine has over 500 customers globally. Company history Sandvine was formed in August, 2001 in Waterloo, Ontario, by a team of approximately 30 people from PixStream, a then-recently closed company acquired by Cisco. An initial round of VC funding launched the company with $20 million CDN. A subsequent round of financing of $19 million (CDN) was completed in May 2005. In March 2006 Sandvine completed an initial public offering on the London AIM exchange under the ticker 'SAND'. In October 2006 Sandvine completed an initial public offering on the Toronto Stock Exchange under the ticker 'SVC'. Initial product sales focused on congestion management and fair usage as service providers struggled with the rapid growth in broadband traffic. As fiber rollouts and 4G networks became more prevalent, the company's application optimization and monetization use cases were adopted by many customers. This allowed service providers to deliver usage and application-based plans, zero-rate applications, reduce fraud, and introduce security and parental controls as a way to generate new revenues. In June 2007 Sandvine acquired CableMatrix Technologies for its PacketCable Multimedia (PCMM)-based PCRF that enable broadband operators to increase subscriber satisfaction while delivering media-rich IP applications and services such as SIP telephony,
https://en.wikipedia.org/wiki/Lamella%20%28materials%29
A lamella () is a small plate or flake, from the Latin, and may also be used to refer to collections of fine sheets of material held adjacent to one another, in a gill-shaped structure, often with fluid in between though sometimes simply a set of 'welded' plates. The term is used in biological contexts to describe thin membranes of plates of tissue. In context of materials science, the microscopic structures in bone and nacre are called lamellae. Moreover, the term lamella is often used as a way to describe crystal structure of some materials. Uses of the term In surface chemistry (especially mineralogy and materials science), lamellar structures are fine layers, alternating between different materials. They can be produced by chemical effects (as in eutectic solidification), biological means, or a deliberate process of lamination, such as pattern welding. Lamellae can also describe the layers of atoms in the crystal lattices of materials such as metals. In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between. In chemical engineering, the term is used for devices such as filters and heat exchangers. In mycology, a lamella (or gill) is a papery hymenophore rib under the cap of some mushroom species, most often agarics. The term has been used to describe the construction of lamellar armour, as well as the layered structures that can be described by a lamellar vector field. In medical professions, especially orthopedic surgery, the term is used to refer to 3D printed titanium technology which is used to create implantable medical devices (in this case, orthopedic implants). In context of water-treatment, lamellar filters may be referred to as plate filters or tube filters. This term is used to describe a certain type of ichthyosis, a congenital skin condition. Lamellar Ichthyosis often presents with a "colloidal" membrane at birth. It is characterized by generalized dark
https://en.wikipedia.org/wiki/Fluorescence-lifetime%20imaging%20microscopy
Fluorescence-lifetime imaging microscopy or FLIM is an imaging technique based on the differences in the exponential decay rate of the photon emission of a fluorophore from a sample. It can be used as an imaging technique in confocal microscopy, two-photon excitation microscopy, and multiphoton tomography. The fluorescence lifetime (FLT) of the fluorophore, rather than its intensity, is used to create the image in FLIM. Fluorescence lifetime depends on the local micro-environment of the fluorophore, thus precluding any erroneous measurements in fluorescence intensity due to change in brightness of the light source, background light intensity or limited photo-bleaching. This technique also has the advantage of minimizing the effect of photon scattering in thick layers of sample. Being dependent on the micro-environment, lifetime measurements have been used as an indicator for pH, viscosity and chemical species concentration. Fluorescence lifetimes A fluorophore which is excited by a photon will drop to the ground state with a certain probability based on the decay rates through a number of different (radiative and/or nonradiative) decay pathways. To observe fluorescence, one of these pathways must be by spontaneous emission of a photon. In the ensemble description, the fluorescence emitted will decay with time according to where . In the above, is time, is the fluorescence lifetime, is the initial fluorescence at , and are the rates for each decay pathway, at least one of which must be the fluorescence decay rate . More importantly, the lifetime, is independent of the initial intensity and of the emitted light. This can be utilized for making non-intensity based measurements in chemical sensing. Measurement Fluorescence-lifetime imaging yields images with the intensity of each pixel determined by , which allows one to view contrast between materials with different fluorescence decay rates (even if those materials fluoresce at exactly the same wavelength)
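The decay law whose symbols were stripped from the extract above is the standard single-exponential ensemble decay; reconstructed, it reads:

```latex
\[
  I(t) = I_0 \, e^{-t/\tau}, \qquad \tau = \frac{1}{\sum_i k_i},
\]
% where t is time, \tau is the fluorescence lifetime, I_0 is the initial
% fluorescence at t = 0, and the k_i are the rates of the individual
% (radiative and/or nonradiative) decay pathways, at least one of which is the
% radiative (fluorescence) decay rate. Note that \tau is independent of I_0,
% which is why lifetime imaging is insensitive to intensity fluctuations.
```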
https://en.wikipedia.org/wiki/Symmetric%20space
In mathematics, a symmetric space is a Riemannian manifold (or more generally, a pseudo-Riemannian manifold) whose group of symmetries contains an inversion symmetry about every point. This can be studied with the tools of Riemannian geometry, leading to consequences in the theory of holonomy; or algebraically through Lie theory, which allowed Cartan to give a complete classification. Symmetric spaces commonly occur in differential geometry, representation theory and harmonic analysis. In geometric terms, a complete, simply connected Riemannian manifold is a symmetric space if and only if its curvature tensor is invariant under parallel transport. More generally, a Riemannian manifold (M, g) is said to be symmetric if and only if, for each point p of M, there exists an isometry of M fixing p and acting on the tangent space as minus the identity (every symmetric space is complete, since any geodesic can be extended indefinitely via symmetries about the endpoints). Both descriptions can also naturally be extended to the setting of pseudo-Riemannian manifolds. From the point of view of Lie theory, a symmetric space is the quotient G/H of a connected Lie group G by a Lie subgroup H which is (a connected component of) the invariant group of an involution of G. This definition includes more than the Riemannian definition, and reduces to it when H is compact. Riemannian symmetric spaces arise in a wide variety of situations in both mathematics and physics. Their central role in the theory of holonomy was discovered by Marcel Berger. They are important objects of study in representation theory and harmonic analysis as well as in differential geometry. Geometric definition Let M be a connected Riemannian manifold and p a point of M. A diffeomorphism f of a neighborhood of p is said to be a geodesic symmetry if it fixes the point p and reverses geodesics through that point, i.e. if γ is a geodesic with then It follows that the derivative of the map f at p is minus the
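The conditions left blank in the geometric definition above can be written out as follows (standard formulation, supplied here for completeness rather than quoted from the article):

```latex
% Geodesic symmetry at p: a map s_p defined on a neighbourhood of p with
\[
  s_p(p) = p, \qquad (d\,s_p)_p = -\,\mathrm{id}_{T_pM},
\]
% i.e. if \gamma is a geodesic with \gamma(0) = p, then s_p(\gamma(t)) = \gamma(-t).
% M is (locally) symmetric when each s_p is a (local) isometry; for a complete,
% simply connected Riemannian manifold this is equivalent to \nabla R = 0, the
% invariance of the curvature tensor under parallel transport mentioned above.
```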
https://en.wikipedia.org/wiki/Salt%20River%20Project
The Salt River Project (SRP) encompasses two separate entities: the Salt River Project Agricultural Improvement and Power District, an agency of the state of Arizona that serves as an electrical utility for the Phoenix metropolitan area, and the Salt River Valley Water Users' Association, a utility cooperative that serves as the primary water provider for much of central Arizona. It is one of the primary public utility companies in Arizona. SRP is not related to the Rio Salado Project (Rio Salado is Spanish for Salt River), a series of improvement projects along the Salt River through the Phoenix Metropolitan Area. Service territory SRP serves nearly all of the Phoenix metropolitan area. A large portion of its electric service territory is shared with Arizona Public Service. Governance Each company of SRP is governed separately. For the Association, landowners elect a president, a vice president, a 10-member board of governors and 30 council members. For the District, landowners elect a president, a vice president, a 14-member board of directors and 30 council members. The officials of each organization are elected on the first Tuesday in April of even-numbered years. The last scheduled Association and District elections were held on April 7, 2020. Both are elected by all landowners in the SRP service area through a "debt-proportionate" system. For instance, a person who owns five acres casts five votes. History The Hohokam, the ancestors of the Salt River Pima-Maricopa Indian and Gila River Indian communities, built canals spanning nearly 500 miles. The SRP canal system follows much of the ancient canal network. Early settlers in Phoenix and nearby areas were forced to rely on the flow of the Salt River to sustain agricultural activities. The river was prone to both floods and droughts and proved to be a less than reliable resource for the settlers. Failed plans to build a dam on the river in 1897, combined with a series of droughts, heightened the need f
https://en.wikipedia.org/wiki/K%C5%8Dsaku%20Yosida
Kōsaku Yosida (1909–1990) was a Japanese mathematician who worked in the field of functional analysis. He is known for the Hille–Yosida theorem concerning C0-semigroups. Yosida studied mathematics at the University of Tokyo, and held posts at Osaka and Nagoya Universities. In 1955, Yosida returned to the University of Tokyo. See also Einar Carl Hille Functional analysis References Kôsaku Yosida: Functional analysis. Grundlehren der mathematischen Wissenschaften 123, Springer-Verlag, 1971 (3rd ed.), 1974 (4th ed.), 1978 (5th ed.), 1980 (6th ed.) External links Photo Kosaku Yosida / School of Mathematics and Statistics University of St Andrews, Scotland 94. Normed Rings and Spectral Theorems, II. By Kôsaku YOSIDA. Mathematical Institute, Nagoya Imperial University. (Comm. by T. TAKAGI, M.I.A. Oct. 12, 1943.) Kosaku Yosida (1909 - 1990) - Biography - MacTutor 1909 births 1990 deaths 20th-century Japanese mathematicians Mathematical analysts Functional analysts Operator theorists Approximation theorists University of Tokyo alumni Academic staff of the University of Tokyo Academic staff of Osaka University Academic staff of Nagoya University Laureates of the Imperial Prize
https://en.wikipedia.org/wiki/%C3%98ystein%20Ore
Øystein Ore (7 October 1899 – 13 August 1968) was a Norwegian mathematician known for his work in ring theory, Galois connections, graph theory, and the history of mathematics. Life Ore graduated from the University of Oslo in 1922, with a Cand.Real.degree in mathematics. In 1924, the University of Oslo awarded him the Ph.D. for a thesis titled Zur Theorie der algebraischen Körper, supervised by Thoralf Skolem. Ore also studied at Göttingen University, where he learned Emmy Noether's new approach to abstract algebra. He was also a fellow at the Mittag-Leffler Institute in Sweden, and spent some time at the University of Paris. In 1925, he was appointed research assistant at the University of Oslo. Yale University’s James Pierpont went to Europe in 1926 to recruit research mathematicians. In 1927, Yale hired Ore as an assistant professor of mathematics, promoted him to associate professor in 1928, then to full professor in 1929. In 1931, he became a Sterling Professor (Yale's highest academic rank), a position he held until he retired in 1968. Ore gave an American Mathematical Society Colloquium lecture in 1941 and was a plenary speaker at the International Congress of Mathematicians in 1936 in Oslo. He was also elected to the American Academy of Arts and Sciences and the Oslo Academy of Science. He was a founder of the Econometric Society. Ore visited Norway nearly every summer. During World War II, he was active in the "American Relief for Norway" and "Free Norway" movements. In gratitude for the services rendered to his native country during the war, he was decorated in 1947 with the Order of St. Olav. In 1930, Ore married Gudrun Lundevall. They had two children. Ore had a passion for painting and sculpture, collected ancient maps, and spoke several languages. Work Ore is known for his work in ring theory, Galois connections, and most of all, graph theory. His early work was on algebraic number fields, how to decompose the ideal generated by a prime number
https://en.wikipedia.org/wiki/Mitogen
A mitogen is a small bioactive protein or peptide that induces a cell to begin cell division, or enhances the rate of division (mitosis). Mitogenesis is the induction (triggering) of mitosis, typically via a mitogen. The mechanism of action of a mitogen is that it triggers signal transduction pathways involving mitogen-activated protein kinase (MAPK), leading to mitosis. The cell cycle Mitogens act primarily by influencing a set of proteins which are involved in the restriction of progression through the cell cycle. The G1 checkpoint is controlled most directly by mitogens: further cell cycle progression does not need mitogens to continue. The point where mitogens are no longer needed to move the cell cycle forward is called the "restriction point" and depends on cyclins to be passed. One of the most important of these is TP53, a gene which produces a family of proteins known as p53. It, combined with the Ras pathway, downregulates cyclin D1, a regulatory partner of the cyclin-dependent kinases, if they are not stimulated by the presence of mitogens. In the presence of mitogens, sufficient cyclin D1 can be produced. This process cascades onwards, producing other cyclins which stimulate the cell sufficiently to allow cell division. While animals produce internal signals that can drive the cell cycle forward, external mitogens can cause it to progress without these signals. Endogenous mitogens Mitogens can be either endogenous or exogenous factors. Endogenous mitogens function to control cell division, which is a normal and necessary part of the life cycle of multicellular organisms. For example, in zebrafish, an endogenous mitogen Nrg1 is produced in response to indications of heart damage. When it is expressed, it causes the outer layers of the heart to respond by increasing division rates and producing new layers of heart muscle cells to replace the damaged ones. This pathway can potentially be deleterious, however: expressing Nrg1 in the absence of heart damage causes uncontrolled growth of
https://en.wikipedia.org/wiki/IEEE%20P1363
IEEE P1363 is an Institute of Electrical and Electronics Engineers (IEEE) standardization project for public-key cryptography. It includes specifications for: Traditional public-key cryptography (IEEE Std 1363-2000 and 1363a-2004) Lattice-based public-key cryptography (IEEE Std 1363.1-2008) Password-based public-key cryptography (IEEE Std 1363.2-2008) Identity-based public-key cryptography using pairings (IEEE Std 1363.3-2013) The chair of the working group as of October 2008 is William Whyte of NTRU Cryptosystems, Inc., who has served since August 2001. Former chairs were Ari Singer, also of NTRU (1999–2001), and Burt Kaliski of RSA Security (1994–1999). The IEEE Standard Association withdrew all of the 1363 standards except 1363.3-2013 on 7 November 2019. Traditional public-key cryptography (IEEE Std 1363-2000 and 1363a-2004) This specification includes key agreement, signature, and encryption schemes using several mathematical approaches: integer factorization, discrete logarithm, and elliptic curve discrete logarithm. Key agreement schemes DL/ECKAS-DH1 and DL/ECKAS-DH2 (Discrete Logarithm/Elliptic Curve Key Agreement Scheme, Diffie–Hellman version): This includes both traditional Diffie–Hellman and elliptic curve Diffie–Hellman. DL/ECKAS-MQV (Discrete Logarithm/Elliptic Curve Key Agreement Scheme, Menezes–Qu–Vanstone version) Signature schemes DL/ECSSA (Discrete Logarithm/Elliptic Curve Signature Scheme with Appendix): Includes four main variants: DSA, ECDSA, Nyberg-Rueppel, and Elliptic Curve Nyberg-Rueppel. IFSSA (Integer Factorization Signature Scheme with Appendix): Includes two variants of RSA, Rabin-Williams, and ESIGN, with several message encoding methods. "RSA1 with EMSA3" is essentially PKCS#1 v1.5 RSA signature; "RSA1 with EMSA4 encoding" is essentially RSA-PSS; "RSA1 with EMSA2 encoding" is essentially ANSI X9.31 RSA signature. DL/ECSSR (Discrete Logarithm/Elliptic Curve Signature Scheme with Recovery) DL/ECSSR-PV (Discrete Logari
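To make the discrete-logarithm key agreement idea concrete, here is a toy Python sketch of unauthenticated Diffie–Hellman over a small prime; the parameters are deliberately tiny and insecure, and this shows only the underlying primitive, not an implementation of the DL/ECKAS-DH1 scheme as actually specified in the standard.

```python
import secrets

# Toy parameters chosen only for illustration; real deployments use
# standardized groups with primes of 2048 bits or more.
p = 4294967291          # a small prime modulus (assumption for the example)
g = 5                   # public base (not necessarily a primitive root; fine for illustration)

# Each party picks a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Both sides derive the same shared secret from the other party's public value.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared
```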
https://en.wikipedia.org/wiki/Refinement%20%28computing%29
Refinement is a generic term of computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification. Program refinement In formal methods, program refinement is the verifiable transformation of an abstract (high-level) formal specification into a concrete (low-level) executable program. Stepwise refinement allows this process to be done in stages. Logically, refinement normally involves implication, but there can be additional complications. The progressive just-in-time preparation of the product backlog (requirements list) in agile software development approaches, such as Scrum, is also commonly described as refinement. Data refinement Data refinement is used to convert an abstract data model (in terms of sets for example) into implementable data structures (such as arrays). Operation refinement converts a specification of an operation on a system into an implementable program (e.g., a procedure). The postcondition can be strengthened and/or the precondition weakened in this process. This reduces any nondeterminism in the specification, typically to a completely deterministic implementation. For example, x′ ∈ {1,2,3} (where x′ is the value of the variable x after an operation) could be refined to x′ ∈ {1,2}, then x′ ∈ {1}, and implemented as x := 1. Implementations of x := 2 and x := 3 would be equally acceptable in this case, using a different route for the refinement. However, we must be careful not to refine to x′ ∈ {} (equivalent to false) since this is unimplementable; it is impossible to select a member from the empty set. The term reification is also sometimes used (coined by Cliff Jones). Retrenchment is an alternative technique when formal refinement is not possible. The opposite of refinement is abstraction. Refinement calculus Refinement calculus is a formal system (inspired from Hoare logic) that promotes program refinement. The FermaT Transformation System i
https://en.wikipedia.org/wiki/C0-semigroup
In mathematics, a C0-semigroup, also known as a strongly continuous one-parameter semigroup, is a generalization of the exponential function. Just as exponential functions provide solutions of scalar linear constant coefficient ordinary differential equations, strongly continuous semigroups provide solutions of linear constant coefficient ordinary differential equations in Banach spaces. Such differential equations in Banach spaces arise from e.g. delay differential equations and partial differential equations. Formally, a strongly continuous semigroup is a representation of the semigroup (R+, +) on some Banach space X that is continuous in the strong operator topology. Thus, strictly speaking, a strongly continuous semigroup is not a semigroup, but rather a continuous representation of a very particular semigroup. Formal definition A strongly continuous semigroup on a Banach space X is a map T : [0, ∞) → L(X) (the bounded operators on X) such that T(0) = I (the identity operator on X), T(s + t) = T(s)T(t) for all s, t ≥ 0, and ‖T(t)x − x‖ → 0 as t → 0+, for every x ∈ X. The first two axioms are algebraic, and state that T is a representation of the semigroup (R+, +); the last is topological, and states that the map t ↦ T(t) is continuous in the strong operator topology. Infinitesimal generator The infinitesimal generator A of a strongly continuous semigroup T is defined by Ax = lim_{t→0+} (T(t)x − x)/t, whenever the limit exists. The domain of A, D(A), is the set of x∈X for which this limit does exist; D(A) is a linear subspace and A is linear on this domain. The operator A is closed, although not necessarily bounded, and the domain is dense in X. The strongly continuous semigroup T with generator A is often denoted by the symbol e^{tA} (or, equivalently, exp(tA)). This notation is compatible with the notation for matrix exponentials, and for functions of an operator defined via functional calculus (for example, via the spectral theorem). Uniformly continuous semigroup A uniformly continuous semigroup is a strongly continuous semigroup T such that lim_{t→0+} ‖T(t) − I‖ = 0 holds. In this case, the infinitesimal generator A of T is
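For a bounded generator given by a matrix A on X = R^n, the semigroup is exactly the matrix exponential T(t) = exp(tA); a brief numerical sketch (assuming NumPy and SciPy are installed) checks the semigroup law and recovers A from the difference quotient.

# Finite-dimensional sketch: for a matrix generator A on X = R^n, the semigroup
# T(t) = exp(tA) solves u'(t) = A u(t) with u(0) = x.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # an arbitrary bounded (hence uniformly continuous) generator
x = np.array([1.0, 0.0])       # initial condition

def T(t):
    return expm(t * A)

# Semigroup property T(s + t) = T(s) T(t) and T(0) = I:
s, t = 0.3, 0.7
assert np.allclose(T(s + t), T(s) @ T(t))
assert np.allclose(T(0.0), np.eye(2))

# Recovering the generator: A x = lim_{h -> 0+} (T(h)x - x) / h
h = 1e-6
print("difference quotient:", (T(h) @ x - x) / h)   # close to A @ x = [0, -2]
print("A @ x              :", A @ x)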
https://en.wikipedia.org/wiki/Statistical%20Methods%20for%20Research%20Workers
Statistical Methods for Research Workers is a classic book on statistics, written by the statistician R. A. Fisher. It is considered by some to be one of the 20th century's most influential books on statistical methods, together with his The Design of Experiments (1935). It was originally published in 1925, by Oliver & Boyd (Edinburgh); the final and posthumous 14th edition was published in 1970. Reviews According to Denis Conniffe: Ronald A. Fisher was "interested in application and in the popularization of statistical methods and his early book Statistical Methods for Research Workers, published in 1925, went through many editions and motivated and influenced the practical use of statistics in many fields of study. His Design of Experiments (1935) [promoted] statistical technique and application. In that book he emphasized examples and how to design experiments systematically from a statistical point of view. The mathematical justification of the methods described was not stressed and, indeed, proofs were often barely sketched or omitted altogether ..., a fact which led H. B. Mann to fill the gaps with a rigorous mathematical treatment in his well-known treatise, ." Chapters Prefaces Introduction Diagrams Distributions Tests of Goodness of Fit, Independence and Homogeneity; with table of χ2 Tests of Significance of Means, Difference of Means, and Regression Coefficients The Correlation Coefficient Intraclass Correlations and the Analysis of Variance Further Applications of the Analysis of Variance SOURCES USED FOR DATA AND METHODS INDEX In the second edition of 1928 a chapter 9 was added: The Principles of Statistical Estimation. See also The Design of Experiments Notes Further reading The March 1951 issue of the Journal of the American Statistical Association contains articles celebrating the 25th anniversary of the publication of the first edition. A.W.F. Edwards (2005) "R. A. Fisher, Statistical Methods for Research Workers, 1925," in I. G
https://en.wikipedia.org/wiki/Compound%20annual%20growth%20rate
Compound annual growth rate (CAGR) is a business and investing specific term for the geometric progression ratio that provides a constant rate of return over the time period. CAGR is not an accounting term, but it is often used to describe some element of the business, for example revenue, units delivered, registered users, etc. CAGR dampens the effect of volatility of periodic returns that can render arithmetic means irrelevant. It is particularly useful to compare growth rates from various data sets of common domain such as revenue growth of companies in the same industry or sector. CAGR is equivalent to the more generic exponential growth rate when the exponential growth interval is one year. Formula CAGR is defined as: CAGR(t0, tn) = (V(tn)/V(t0))^(1/(tn − t0)) − 1, where V(t0) is the initial value, V(tn) is the end value, and tn − t0 is the number of years. Actual or normalized values may be used for calculation as long as they retain the same mathematical proportion. Example In this example, we will compute the CAGR over a three-year period. Assume that the year-end revenues of a business over a three-year period have been: year-end 2004-12-31 revenue 9,000; year-end 2007-12-31 revenue 13,000. Therefore, the CAGR of the revenues over the three-year period spanning the "end" of 2004 to the "end" of 2007 is CAGR(0, 3) = (13,000/9,000)^(1/3) − 1 ≈ 0.1304 = 13.04%. Note that this is a smoothed growth rate per year. This rate of growth would take you to the ending value, from the starting value, in the number of years given, if growth had been at the same rate every year. Verification: Multiply the initial value (2004 year-end revenue) by (1 + CAGR) three times (because we calculated for 3 years). The product will equal the year-end revenue for 2007. This shows the compound growth rate: For n = 3: 9,000 × (1 + 0.1304)^3 ≈ 13,000. For comparison: the Arithmetic Mean Return (AMR) would be the sum of annual revenue changes (compared with the previous year) divided by the number of years. In contrast to CAGR, you cannot obtain by multi
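A short Python sketch of the formula, reproducing the revenue example above:

# Minimal CAGR helper, using the revenue example from the text
# (9,000 at year-end 2004 growing to 13,000 at year-end 2007, i.e. t = 3 years).

def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate as a decimal fraction."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

rate = cagr(9_000, 13_000, 3)
print(f"CAGR = {rate:.2%}")           # about 13.04%

# Verification: compounding the start value at this smoothed rate
# for three years recovers the 2007 year-end revenue.
print(9_000 * (1 + rate) ** 3)        # about 13,000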
https://en.wikipedia.org/wiki/EComStation
eComStation or eCS is an operating system based on OS/2 Warp for the 32-bit x86 architecture. It was originally developed by Serenity Systems and Mensys BV under license from IBM. It includes additional applications, and support for new hardware which were not present in OS/2 Warp. It is intended to allow OS/2 applications to run on modern hardware, and is used by a number of large organizations for this purpose. By 2014, approximately thirty to forty thousand licenses of eComStation had been sold. Financial difficulties at Mensys in 2012 led to the development of eComStation stalling, and ownership being transferred to a sister company named XEU.com (now known as PayGlobal Technologies BV), who continue to sell and support the operating system. The lack of a new release since 2011 was one of the motivations for the creation of the ArcaOS OS/2 distribution. Differences between eComStation and OS/2 Version 1 of eComStation, released in 2001, was based around the integrated OS/2 version 4.5 client Convenience Package for OS/2 Warp version 4, which was released by IBM in 2000. The latter had been made available only to holders of existing OS/2 support contracts; it included the following new features (among others) compared to the final retail version of OS/2 (1996's OS/2 Warp version 4): IBM-supplied updates of software and components that had shipped with the 1999 release of OS/2 Warp Server for e-business, but had not been made available to users of the client version. Key among these were the JFS file system and the logical volume manager. Operating system features and enhancements that had been made available as updates but never offered as an install-time option. These included an updated kernel, a 32-bit TCP/IP stack and associated networking utilities, a firewall, updated drivers and other system components, newer versions of Java, SciTech SNAP Graphics video support, and more. IBM-supplied updates that had previously only been offered to customers with
https://en.wikipedia.org/wiki/Gauge%20%28instrument%29
In science and engineering, a dimensional gauge or simply gauge is a device used to make measurements or to display certain dimensional information. A wide variety of tools exist which serve such functions, ranging from simple pieces of material against which sizes can be measured to complex pieces of machinery. Dimensional properties include thickness, gap in space, and diameter of materials. Basic types All gauges can be divided into four main types, independent of their actual use. Analogue instrument meter with analogue display ("needles"). Until recent decades, this was the most common basic type. Digital instrument meter with analogue display. A screen that shows an "analogue meter", commonly used in modern aircraft cockpits, and some hospital equipment etc. Digital instrument meter with digital display. Only numbers are shown, on a digital display. Analogue instrument meter with digital display. Only numbers are displayed, but through a mechanical or electro-mechanical display (today very rare, but such displays have existed for clocks, certain Doppler meters and informational screens at many stations and airports). The two basic types with an analogue display are usually easier for the human eyes and brain to interpret, especially if many instrument meters must be read simultaneously. An indicator or needle indicates the measurement on the gauge. The other two types display only digits, which are more complex for humans to read and interpret. The ultimate example is cockpit instrumentation in aircraft. The flight instruments cannot display figures only, hence even in the most modern "glass cockpits", where almost all instruments are displayed on screens, few figures are visible. Instead the screens display analogue meters. Types Various types of dimensional gauges include: See also Dimensional instruments Geometric dimensioning and tolerancing References Measuring instruments
https://en.wikipedia.org/wiki/Secure%20by%20design
Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure. Alternate security strategies, tactics and patterns are considered at the beginning of a software design, and the best are selected and enforced by the architecture, and they are used as guiding principles for developers. It is also encouraged to use strategic design patterns that have beneficial effects on security, even though those design patterns were not originally devised with security in mind. Secure by Design is increasingly becoming the mainstream development approach to ensure security and privacy of software systems. In this approach, security is considered and built into the system at every layer and starts with a robust architecture design. Security architectural design decisions are based on well-known security strategies, tactics, and patterns defined as reusable techniques for achieving specific quality concerns. Security tactics/patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation requirements, even when the system is under attack. In order to ensure the security of a software system, not only is it important to design a robust intended security architecture but it is also necessary to map updated security strategies, tactics and patterns to software development in order to maintain security persistence. Expect attacks Malicious attacks on software should be assumed to occur, and care is taken to minimize impact. Security vulnerabilities are anticipated, along with invalid user input. Closely related is the practice of using "good" software design, such as domain-driven design or cloud native, as a way to increase security by reducing risk of vulnerability-opening mistakes—even though the design principles used were not originally conceived for security purposes. Avoid security th
https://en.wikipedia.org/wiki/Clipper%20architecture
The Clipper architecture is a 32-bit RISC-like instruction set architecture designed by Fairchild Semiconductor. The architecture never enjoyed much market success, and the only computer manufacturers to create major product lines using Clipper processors were Intergraph and High Level Hardware, although Opus Systems offered a product based on the Clipper as part of its Personal Mainframe range. The first processors using the Clipper architecture were designed and sold by Fairchild, but the division responsible for them was subsequently sold to Intergraph in 1987; Intergraph continued work on Clipper processors for use in its own systems. The Clipper architecture used a simplified instruction set compared to earlier CISC architectures, but it did incorporate some more complicated instructions than were present in other contemporary RISC processors. These instructions were implemented in a so-called Macro Instruction ROM within the Clipper CPU. This scheme allowed the Clipper to have somewhat higher code density than other RISC CPUs. Versions The initial Clipper microprocessor produced by Fairchild was the C100, which became available in 1986. This was followed by the faster C300 from Intergraph in 1988. The final model of the Clipper was the C400, released in 1990, which was extensively redesigned to be faster and added more floating-point registers. The C400 processor combined two key architectural techniques to achieve a new level of performance — superscalar instruction dispatch and superpipelined operation. While many processors of the time used either superscalar instruction dispatch or superpipelined operation, the Clipper C400 was the first processor to use both. Intergraph started work on a subsequent Clipper processor design known as the C5, but this was never completed or released. Nonetheless, some advanced processor design techniques were devised for the C5, and Intergraph was granted patents on these. These patents, along with the original C
https://en.wikipedia.org/wiki/Neutrophile
A neutrophile is a neutrophilic organism that thrives in a neutral pH environment between 6.5 and 7.5. Environment The pH of the environment can support or hinder the growth of neutrophilic organisms. When the pH is within the microbe's range, it grows, and within that range there is an optimal growth pH. Neutrophiles are adapted to live in an environment where the hydrogen ion concentration is at equilibrium. They are sensitive to the concentration, and when the pH becomes too basic or acidic, the cell's proteins can denature. Depending on the microbe and the pH, the microbe's growth can be slowed or stopped altogether. Manipulation of the pH of the environment that the microbe is in is used by the food industry to control microbial growth in order to increase the shelf life of food. See also Acidophile Acidophobe Alkaliphile Extremophile Mesophile References Microbiology
https://en.wikipedia.org/wiki/Microsoft%20Mail
Microsoft Mail (or MSMail/MSM) was the name given to several early Microsoft e-mail products for local area networks, primarily two architectures: one for Macintosh networks, and one for PC architecture-based LANs. All were eventually replaced by the Exchange and Outlook product lines. Mac Networks The first Microsoft Mail product was introduced in 1988 for AppleTalk Networks. It was based on InterMail, a product that Microsoft purchased and updated. An MS-DOS client was added for PCs on AppleTalk networks. It was later sold off to become Star Nine Mail, then Quarterdeck Mail, and has long since been discontinued. PC Networks The second Microsoft Mail product, Microsoft Mail for PC Networks v2.1, was introduced in 1991. It was based on Network Courier, a LAN email system produced by Consumers Software of Vancouver BC, which Microsoft had purchased. Following the initial 1991 rebranding release, Microsoft issued its first major update as Version 3.0 in 1992. This version included Microsoft's first Global Address Book technology and first networked scheduling application, Microsoft Schedule+. Versions 3.0 through 3.5 included email clients for MS-DOS, OS/2 1.31, Mac OS, Windows (both 16 and 32-bit), a separate Windows for Workgroups Mail client, and a DOS-based Remote Client for use over pre-PPP/pre-SLIP dialup modem connections. A stripped-down version of the PC-based server, Microsoft Mail for PC Networks, was included in Windows 95 and Windows NT 4.0. The last version based on this architecture was 3.5; afterwards, it was replaced by Microsoft Exchange Server, which started with version 4.0. The client software was also named Microsoft Mail, and was included in some older versions of Microsoft Office such as version 4.x. The original "Inbox" (Exchange client or Windows Messaging) of Windows 95 also had the capability to connect to an MS Mail server. Microsoft Mail Server was eventually replaced by Microsoft Exchange; Microsoft Mail Client, Microsoft Exchange C
https://en.wikipedia.org/wiki/Luria%E2%80%93Delbr%C3%BCck%20experiment
The Luria–Delbrück experiment (1943) (also called the Fluctuation Test) demonstrated that in bacteria, genetic mutations arise in the absence of selective pressure rather than being a response to it. Thus, it concluded Darwin's theory of natural selection acting on random mutations applies to bacteria as well as to more complex organisms. Max Delbrück and Salvador Luria won the 1969 Nobel Prize in Physiology or Medicine in part for this work. History By the 1940s the ideas of inheritance and mutation were generally accepted, though the role of DNA as the hereditary material had not yet been established. It was thought that bacteria were somehow different and could develop heritable genetic mutations depending on the circumstances they found themselves: in short, was the mutation in bacteria pre-adaptive (pre-existent) or post-adaptive (directed adaption)? In their experiment, Luria and Delbrück inoculated a small number of bacteria (Escherichia coli) into separate culture tubes. After a period of growth, they plated equal volumes of these separate cultures onto agar containing the T1 phage (virus). If resistance to the virus in bacteria were caused by an induced activation in bacteria i.e. if resistance were not due to heritable genetic components, then each plate should contain roughly the same number of resistant colonies. Assuming a constant rate of mutation, Luria hypothesized that if mutations occurred after and in response to exposure to the selective agent, the number of survivors would be distributed according to a Poisson distribution with the mean equal to the variance. This was not what Delbrück and Luria found: Instead the number of resistant colonies on each plate varied drastically: the variance was considerably greater than the mean. Luria and Delbrück proposed that these results could be explained by the occurrence of a constant rate of random mutations in each generation of bacteria growing in the initial culture tubes. Based on these assumptio
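The reasoning can be illustrated with a simple Monte Carlo sketch; the growth model and parameter values below are deliberately simplified assumptions (synchronous doubling and a fixed per-division mutation probability), not Luria and Delbrück's actual protocol, but they reproduce the key signature: variance far exceeding the mean under spontaneous mutation, versus variance roughly equal to the mean under the induced hypothesis.

# Monte Carlo sketch of the fluctuation test under a simplified growth model.
# Each culture grows from one cell by synchronous doubling; every division can
# produce a resistant mutant with probability MU, and mutants breed true.
import numpy as np

rng = np.random.default_rng(0)
MU = 2e-7          # assumed per-division mutation probability (illustrative)
GENERATIONS = 21   # roughly two million cells per culture at plating
CULTURES = 1000

def spontaneous_resistant_count():
    sensitive, resistant = 1, 0
    for _ in range(GENERATIONS):
        new_mutants = rng.binomial(sensitive, MU)   # mutations arise during growth
        resistant = 2 * resistant + new_mutants     # resistant lineages keep doubling
        sensitive = 2 * sensitive - new_mutants
    return resistant

spontaneous = np.array([spontaneous_resistant_count() for _ in range(CULTURES)])

# Rival "induced" hypothesis: resistance appears only upon exposure to the phage,
# independently in each cell, so colony counts follow a Poisson distribution.
induced = rng.poisson(spontaneous.mean(), CULTURES)

for name, counts in (("spontaneous (pre-adaptive)", spontaneous),
                     ("induced (post-adaptive)", induced)):
    print(f"{name}: mean = {counts.mean():.1f}, variance = {counts.var():.1f}")
# Spontaneous mutation gives rare "jackpot" cultures and variance much larger
# than the mean, as Luria and Delbrück observed; the induced model does not.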
https://en.wikipedia.org/wiki/No-arbitrage%20bounds
In financial mathematics, no-arbitrage bounds are mathematical relationships specifying limits on financial portfolio prices. These price bounds are a specific example of good–deal bounds, and are in fact the greatest extremes for good–deal bounds. The most frequent nontrivial example of no-arbitrage bounds is put–call parity for option prices. In incomplete markets, the bounds are given by the subhedging and superhedging prices. The essence of no-arbitrage in mathematical finance is excluding the possibility of "making money out of nothing" in the financial market. This is necessary because the existence of arbitrage is not only unrealistic, but also contradicts the possibility of an economic equilibrium. All mathematical models of financial markets have to satisfy a no-arbitrage condition to be realistic models. See also Box spread Indifference price References Mathematical finance
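Put–call parity can be checked numerically; the prices and rates in the Python sketch below are illustrative assumptions for European options on a non-dividend-paying asset.

# Sketch of put-call parity, cited in the text as the most frequent nontrivial
# no-arbitrage bound: C - P = S - K * exp(-r * T). All numbers are illustrative.
import math

S = 100.0    # spot price of the underlying
K = 95.0     # strike
r = 0.03     # continuously compounded risk-free rate
T = 0.5      # time to expiry in years

def call_price_from_parity(put_price: float) -> float:
    """Given a put price, the only call price that excludes arbitrage."""
    return put_price + S - K * math.exp(-r * T)

put = 4.20
call = call_price_from_parity(put)
print(f"arbitrage-free call price: {call:.4f}")

# Any quoted call away from this value admits a riskless profit; for example,
# if the market call is cheaper, buy the call, sell the put, short the stock
# and lend K*exp(-rT): the positions cancel at expiry but pay the gap today.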
https://en.wikipedia.org/wiki/Heterogeneous%20network
In computer networking, a heterogeneous network is a network connecting computers and other devices where the operating systems and protocols have significant differences. For example, local area networks (LANs) that connect Microsoft Windows and Linux based personal computers with Apple Macintosh computers are heterogeneous. Heterogeneous network also describes wireless networks using different access technologies. For example, a wireless network that provides a service through a wireless LAN and is able to maintain the service when switching to a cellular network is called a wireless heterogeneous network. HetNet Reference to a HetNet often indicates the use of multiple types of access nodes in a wireless network. A Wide Area Network can use some combination of macrocells, picocells, and femtocells in order to offer wireless coverage in an environment with a wide variety of wireless coverage zones, ranging from an open outdoor environment to office buildings, homes, and underground areas. Mobile experts define a HetNet as a network with complex interoperation between macrocell, small cell, and in some cases WiFi network elements used together to provide a mosaic of coverage, with handoff capability between network elements. A study from ARCchart estimates that HetNets will help drive the mobile infrastructure market to account for nearly US$57 billion in spending globally by 2017. Small Cell Forum defines the HetNet as ‘multi-x environment – multi-technology, multi-domain, multi-spectrum, multi-operator and multi-vendor. It must be able to automate the reconfiguration of its operation to deliver assured service quality across the entire network, and flexible enough to accommodate changing user needs, business goals and subscriber behaviours.’ HetNet architecture From an architectural perspective, the HetNet can be viewed as encompassing conventional macro radio access network (RAN) functions, RAN transport capability, small cells, and Wi-Fi functionality,
https://en.wikipedia.org/wiki/Galactomannan
Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups, more specifically, a (1-4)-linked beta-D-mannopyranose backbone with branchpoints from their 6-positions linked to alpha-D-galactose, (i.e. 1-6-linked alpha-D-galactopyranose). In order of increasing number of mannose-to-galactose ratio: fenugreek gum, mannose:galactose ~1:1 guar gum, mannose:galactose ~2:1 tara gum, mannose:galactose ~3:1 locust bean gum or carob gum, mannose:galactose ~4:1 cassia gum, mannose:galactose ~5:1 Galactomannans are often used in food products to increase the viscosity of the water phase. Guar gum has been used to add viscosity to artificial tears, but is not as stable as carboxymethylcellulose. Food use Galactomannans are used in foods as stabilisers. Guar and locust bean gum (LBG) are commonly used in ice cream to improve texture and reduce ice cream meltdown. LBG is also used extensively in cream cheese, fruit preparations and salad dressings. Tara gum is seeing growing acceptability as a food ingredient but is still used to a much lesser extent than guar or LBG. Guar has the highest usage in foods, largely due to its low and stable price. Clinical use Galactomannan is a component of the cell wall of the mold Aspergillus and is released during growth. Detection of galactomannan in blood is used to diagnose invasive aspergillosis infections in humans. This is performed with monoclonal antibodies in a double-sandwich ELISA; this assay from Bio-Rad Laboratories was approved by the FDA in 2003 and is of moderate accuracy. The assay is most useful in patients who have had hemopoietic cell transplants (stem cell transplants). False positive Aspergillus Galactomannan test have been found in patients on intravenous treatment with some antibiotics or fluids containing gluconate or citric acid such as some transfusion platelets, parenteral nutrition or PlasmaLyte. References Edible thickening agents Polysaccharides Carbohydrates Natural gu
https://en.wikipedia.org/wiki/Torre%20Entel
Torre Entel (Entel Tower) is the name of a high TV and telecommunications tower in Santiago, Chile. Torre Entel has an observation deck open for visitors. Construction began in 1970 during Eduardo Frei Montalva term as president and it was inaugurated in 1974. In 1976 it carried its first television transmissions. For many years it was the tallest building in Chile and today remains a symbol of Santiago. The tower is constructed of concrete, steel, and aluminum. With 128 m high and 18 floors, it was after the end of its construction in 1974, the highest architectural structure in the country, a title it kept until the inauguration of the Telefonica Tower in 1996 with 143 m. Already surpassed in height by other buildings, it continues being the structure of greater prominence in the commune of Santiago, being located next to the Avenida Libertador General Bernardo O'Higgins and to a block of the La Moneda Palace, reason why it has stayed like an icon of the city. Its design represents a torch, an ancient form of telecommunication. History Its construction began during the government of Eduardo Frei Montalva, on July 1, 1970, as part of the National Telecommunications Center. After four years of construction, the Tower reached its current height on August 30, 1974. Its structure was influenced by the Post Office Tower in London, which had been built a few years earlier. Later, on September 8, 1975, two satellite dishes were installed, which were the first telecommunications elements visible from the outside, and finally, on April 12, 1976, the telephone channels came into service. From that moment on, the Entel Tower became the vital nucleus of the country's communications system by allowing the interconnection of Entel's telephone, television, radio and microwave network services with those of the Chilean Telephone Company (currently Movistar) and with the north, center and south of the country and the province of Mendoza, Argentina. In addition, it is connecte
https://en.wikipedia.org/wiki/Nuclear%20localization%20sequence
A nuclear localization signal or sequence (NLS) is an amino acid sequence that 'tags' a protein for import into the cell nucleus by nuclear transport. Typically, this signal consists of one or more short sequences of positively charged lysines or arginines exposed on the protein surface. Different nuclear localized proteins may share the same NLS. An NLS has the opposite function of a nuclear export signal (NES), which targets proteins out of the nucleus. Types Classical These types of NLSs can be further classified as either monopartite or bipartite. The major structural differences between the two are that the two basic amino acid clusters in bipartite NLSs are separated by a relatively short spacer sequence (hence bipartite - 2 parts), while monopartite NLSs are not. The first NLS to be discovered was the sequence PKKKRKV in the SV40 Large T-antigen (a monopartite NLS). The NLS of nucleoplasmin, KR[PAATKKAGQA]KKKK, is the prototype of the ubiquitous bipartite signal: two clusters of basic amino acids, separated by a spacer of about 10 amino acids. Both signals are recognized by importin α. Importin α contains a bipartite NLS itself, which is specifically recognized by importin β. The latter can be considered the actual import mediator. Chelsky et al. proposed the consensus sequence K-K/R-X-K/R for monopartite NLSs. A Chelsky sequence may, therefore, be part of the downstream basic cluster of a bipartite NLS. Makkah et al. carried out comparative mutagenesis on the nuclear localization signals of SV40 T-Antigen (monopartite), C-myc (monopartite), and nucleoplasmin (bipartite), and showed amino acid features common to all three. The role of neutral and acidic amino acids was shown for the first time in contributing to the efficiency of the NLS. Rotello et al. compared the nuclear localization efficiencies of eGFP fused NLSs of SV40 Large T-Antigen, nucleoplasmin (AVKRPAATKKAGQAKKKKLD), EGL-13 (MSRRRKANPTKLSENAKKLAKEVEN), c-Myc (PAAKRVKLD) and TUS-protein (KLKIK
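A toy Python scan for the Chelsky consensus K-K/R-X-K/R, using sequence examples named above; real NLS prediction also weighs surface exposure and context, which a bare pattern match ignores.

# Toy scan for the Chelsky monopartite NLS consensus K-K/R-X-K/R.
import re

CONSENSUS = re.compile(r"K[KR].[KR]")

sequences = {
    "SV40 large T-antigen NLS": "PKKKRKV",
    "c-Myc NLS": "PAAKRVKLD",
    "no basic cluster": "GASTLLNQEW",
}

for name, seq in sequences.items():
    hits = [(m.start(), m.group()) for m in CONSENSUS.finditer(seq)]
    print(f"{name:28s} -> {hits if hits else 'no consensus match'}")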
https://en.wikipedia.org/wiki/Global%20Infectious%20Disease%20Epidemiology%20Network
Global Infectious Diseases and Epidemiology Online Network (GIDEON) is a web-based program for decision support and informatics in the fields of Infectious Diseases and Geographic Medicine. Due to the advancement of both disease research and digital media, print media can no longer follow the dynamics of outbreaks and epidemics as they emerge in "real time." As of 2005, more than 300 generic infectious diseases occur haphazardly in time and space and are challenged by over 250 drugs and vaccines. 1,500 species of pathogenic bacteria, viruses, parasites and fungi have been described. GIDEON works to combat this by creating a diagnosis through geographical indicators, a map of the status of the disease in history, a detailed list of potential vaccines and treatments, and finally listing all the potential species of the disease or outbreak such as bacterial classifications. Organization GIDEON consists of four modules. The first Diagnosis module generates a Bayesian ranked differential diagnosis based on signs, symptoms, laboratory tests, country of origin and incubation period – and can be used for diagnosis support and simulation of all infectious diseases in all countries. Since the program is web-based, this module can also be adapted to disease and bioterror surveillance. The second module follows the epidemiology of individual diseases, including their global background and status in each of 205 countries and regions. All past and current outbreaks of all diseases, in all countries, are described in detail. The user may also access a list of diseases compatible with any combination of agent, vector, vehicle, reservoir and country (for example, one could list all the mosquito-borne flaviviruses of Brazil which have an avian reservoir). Over 30,000 graphs display all the data, and are updated in "real time". These graphs can be used for preparation of PowerPoint displays, pamphlets, lecture notes, etc. Several thousand high-quality images are also available, i
https://en.wikipedia.org/wiki/Random%20matrix
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice. Applications Physics In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation. In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory. In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is beam splitters and phase shifters). Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors Mathematical statistics and numerical analysis In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, B
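A minimal numerical sketch (assuming NumPy) of the Wigner picture: draw a large real symmetric random matrix and observe that its spectrum fills a semicircle-type interval and that neighbouring eigenvalues repel, in contrast to independent levels.

# Draw a GOE-style real symmetric random matrix (this particular normalization
# is an assumption chosen so the spectrum approaches [-2, 2]) and inspect its
# eigenvalue statistics.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
G = rng.normal(size=(n, n))
H = (G + G.T) / np.sqrt(2 * n)          # real symmetric, variance ~ 1/n per entry

eigenvalues = np.linalg.eigvalsh(H)     # returned in ascending order
print("spectrum range:", eigenvalues.min(), eigenvalues.max())   # close to -2 and +2

# Nearest-neighbour spacings in the central half of the spectrum, normalized to
# unit mean: very small spacings are rare (level repulsion), unlike independent
# (Poisson) levels, for which about 10% of spacings would fall below 0.1.
bulk = eigenvalues[n // 4 : 3 * n // 4]
spacings = np.diff(bulk)
spacings /= spacings.mean()
print("fraction of spacings below 0.1:", np.mean(spacings < 0.1))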
https://en.wikipedia.org/wiki/Windows%20on%20Windows
In computing, Windows on Windows (commonly referred to as WOW) was a compatibility layer of 32-bit versions of the Windows NT family of operating systems, introduced in 1993 with the release of Windows NT 3.1, which extends NTVDM to provide limited support for running legacy 16-bit programs written for Windows 3.x or earlier. There is a similar subsystem, known as WoW64, on 64-bit Windows versions that runs 32-bit programs. The WOW subsystem itself is not available in 64-bit editions of Windows (including Windows Server 2008 R2 and later, which only have 64-bit editions), which therefore cannot run 16-bit software without third-party emulation software (e.g. DOSBox). Windows 10 is the final version of Windows to include this subsystem. This subsystem has since been discontinued, as Windows 11 dropped support for 32-bit processors. Background Many 16-bit Windows legacy programs can run without changes on newer 32-bit editions of Windows. The reason designers made this possible was to allow software developers time to remedy their software during the industry transition from Windows 3.1x to Windows 95 and later, without restricting the ability for the operating system to be upgraded to a current version before all programs used by a customer had been taken care of. The Windows 9x series of operating systems, reflecting their roots in DOS, functioned as hybrid 16- and 32-bit systems in the sense that the underlying operating system was not truly 32-bit, and therefore could run 16-bit software natively without requiring any special emulation; Windows NT operating systems differ significantly from Windows 9x in their architecture, and therefore require a more complex solution. Two separate strategies are used in order to let 16-bit programs run on 32-bit versions of Windows (with some runtime limitations). They are called thunking and shimming. Thunking The WOW subsystem of the operating system uses thunking in order to provide support for 16-bit pointers, memory models and address space. Al
https://en.wikipedia.org/wiki/Wilkinson%20power%20divider
In the field of microwave engineering and circuit design, the Wilkinson Power Divider is a specific class of power divider circuit that can achieve isolation between the output ports while maintaining a matched condition on all ports. The Wilkinson design can also be used as a power combiner because it is made up of passive components and hence is reciprocal. First published by Ernest J. Wilkinson in 1960, this circuit finds wide use in radio frequency communication systems utilizing multiple channels since the high degree of isolation between the output ports prevents crosstalk between the individual channels. It uses quarter wave transformers, which can be easily fabricated as quarter wave lines on printed circuit boards. It is also possible to use other forms of transmission line (e.g. coaxial cable) or lumped circuit elements (inductors and capacitors). Theory The scattering parameters for the common case of a 2-way equal-split Wilkinson power divider at the design frequency are given by S = (−j/√2) [[0, 1, 1], [1, 0, 0], [1, 0, 0]], with rows and columns ordered as ports 1, 2, 3. Inspection of the S matrix reveals that the network is reciprocal (S_ij = S_ji), that the terminals are matched (S_11 = S_22 = S_33 = 0), that the output terminals are isolated (S_23 = S_32 = 0), and that equal power division is achieved (S_21 = S_31). The non-unitary matrix results from the fact that the network is lossy. An ideal Wilkinson divider would yield |S_21| = |S_31| = 1/√2, i.e. −3 dB into each output. Network theory dictates that a divider cannot satisfy all three conditions (being matched, reciprocal and loss-less) at the same time. The Wilkinson divider satisfies the first two (matched and reciprocal), and cannot satisfy the last one (being loss-less). Hence, there is some loss occurring in the network. No loss occurs when the signals at ports 2 and 3 are in phase and have equal magnitude. In case of noise input to ports 2 and 3, the noise level at port 1 does not increase; half of the noise power is dissipated in the resistor. By cascading, the input power might be divided to any number of outputs. Unequal/Asymmetric Division Through Wilkinson Divider If the
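The stated properties of the ideal equal-split S matrix can be verified directly; a NumPy sketch, with the same port numbering as above:

# Numeric check of the ideal 2-way equal-split Wilkinson divider S-matrix at
# the design frequency, S = (-j/sqrt(2)) * [[0,1,1],[1,0,0],[1,0,0]].
import numpy as np

S = (-1j / np.sqrt(2)) * np.array([[0, 1, 1],
                                   [1, 0, 0],
                                   [1, 0, 0]])

print("matched ports:    ", np.allclose(np.diag(S), 0))           # S11 = S22 = S33 = 0
print("reciprocal:       ", np.allclose(S, S.T))                  # Sij = Sji
print("isolated outputs: ", S[1, 2] == 0 and S[2, 1] == 0)        # S23 = S32 = 0
print("equal split:      ", np.isclose(abs(S[1, 0]) ** 2, 0.5))   # half the power to each output

# Lossy (non-unitary): power entering ports 2 and 3 out of phase is dissipated
# in the internal resistor, so S^H S is not the identity matrix.
print("unitary:          ", np.allclose(S.conj().T @ S, np.eye(3)))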
https://en.wikipedia.org/wiki/Digital%20identity
Digital identity refers to the information utilized by computer systems to represent external entities, including a person, organization, application, or device. When used to describe an individual, it encompasses a person's compiled information and plays a crucial role in automating access to computer-based services, verifying identity online, and enabling computers to mediate relationships between entities. Digital identity for individuals is an aspect of a person's social identity and can also be referred to as online identity. The widespread use of digital identities can include the entire collection of information generated by a person's online activity. This includes usernames, passwords, search history, birthday, social security number, and purchase history. When publicly available, this data can be used by others to discover a person's civil identity. It can also be harvested to create what has been called a "data double", an aggregated profile based on the user's data trail across databases. In turn, these data doubles serve to facilitate personalization methods on the web and across various applications. If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers, and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data. An individual's digital identity is often linked to their civil or national identity and many countries have instituted national digital identity systems that provide digital identities to their citizenry. The legal and social effects of digital identity are complex and challenging. Faking a legal identity in the digital world may present many threats to a digital society and raises the opportunity fo
https://en.wikipedia.org/wiki/Flash%20animation
Adobe Flash animation (formerly Macromedia Flash animation and FutureSplash animation) is an animation that is created with the Adobe Animate (formerly Flash Professional) platform or similar animation software and often distributed in the SWF file format. The term Adobe Flash animation refers to both the file format and the medium in which the animation is produced. Adobe Flash animation has enjoyed mainstream popularity since the mid-2000s, with many Adobe Flash-animated television series, television commercials, and award-winning online shorts being produced since then. In the late 1990s, when bandwidth was still at 56 kbit/s for most Internet users, many Adobe Flash animation artists employed limited animation or cutout animation when creating projects intended for web distribution. This allowed artists to release shorts and interactive experiences well under 1 MB, which could stream both audio and high-end animation. Adobe Flash is able to integrate bitmaps and other raster-based art, as well as video, though most Adobe Flash films are created using only vector-based drawings, which often result in a somewhat clean graphic appearance. Some hallmarks of poorly produced Adobe Flash animation are jerky natural movements (seen in walk-cycles and gestures), auto-tweened character movements, lip-sync without interpolation and abrupt changes from front to profile view. Adobe Flash animations are typically distributed by way of the World Wide Web, in which case they are often referred to as Internet cartoons, online cartoons, or web cartoons. Web Adobe Flash animations may be interactive and are often created in a series. An Adobe Flash animation is distinguished from a Webcomic, which is a comic strip distributed via the Web, rather than an animated cartoon. History The first prominent use of the Adobe Flash animation format was by Ren & Stimpy creator John Kricfalusi. On October 15, 1997, he launched The Goddamn George Liquor Program, the first cartoon series
https://en.wikipedia.org/wiki/Ariel%20Rubinstein
Ariel Rubinstein (Hebrew: אריאל רובינשטיין; born April 13, 1951) is an Israeli economist who works in economic theory, game theory and bounded rationality. Biography Ariel Rubinstein is a professor of economics at the School of Economics at Tel Aviv University and the Department of Economics at New York University. He studied mathematics and economics at the Hebrew University of Jerusalem, 1972–1979 (B.Sc. Mathematics, Economics and Statistics, 1974; M.A. Economics, 1975; M.Sc Mathematics, 1976; Ph.D. Economics, 1979). In 1982, he published "Perfect equilibrium in a bargaining model", an important contribution to the theory of bargaining. The model is known also as a Rubinstein bargaining model. It describes two-person bargaining as an extensive game with perfect information in which the players alternate offers. A key assumption is that the players are impatient. The main result gives conditions under which the game has a unique subgame perfect equilibrium and characterizes this equilibrium. Honours and awards Rubinstein was elected a member of the Israel Academy of Sciences and Humanities (1995), a Foreign Honorary Member of the American Academy of Arts and Sciences in (1994) and the American Economic Association (1995). In 1985 he was elected a fellow of the Econometric Society, and served as its president in 2004. In 2002, he was awarded an honorary doctorate by the Tilburg University. He has received the Bruno Prize (2000), the Israel Prize for economics (2002), the Nemmers Prize in Economics (2004), the EMET Prize (2006). and the Rothschild Prize (2010). Published works Bargaining and Markets, with Martin J. Osborne, Academic Press 1990 A Course in Game Theory, with Martin J. Osborne, MIT Press, 1994. Modeling Bounded Rationality, MIT Press, 1998. Economics and Language, Cambridge University Press, 2000. Lecture Notes in Microeconomic Theory: The Economic Agent, Princeton University Press, 2006. Economic Fables, Open Book Publishers, 2012. AGAD
https://en.wikipedia.org/wiki/Richard%20Schroeppel
Richard C. Schroeppel (born 1948) is an American mathematician born in Illinois. His research has included magic squares, elliptic curves, and cryptography. In 1964, Schroeppel won first place in the United States among over 225,000 high school students in the Annual High School Mathematics Examination, a contest sponsored by the Mathematical Association of America and the Society of Actuaries. In both 1966 and 1967, Schroeppel scored among the top 5 in the U.S. in the William Lowell Putnam Mathematical Competition. In 1973 he discovered that there are 275,305,224 normal magic squares of order 5. In 1998–1999 he designed the Hasty Pudding Cipher, which was a candidate for the Advanced Encryption Standard, and he is one of the designers of the SANDstorm hash, a submission to the NIST SHA-3 competition. Among other contributions, Schroeppel was the first to recognize the sub-exponential running time of certain integer factoring algorithms. While not entirely rigorous, his proof that Morrison and Brillhart's continued fraction factoring algorithm ran in roughly steps was an important milestone in factoring and laid a foundation for much later work, including the current "champion" factoring algorithm, the number field sieve. Schroeppel analyzed Morrison and Brillhart's algorithm, and saw how to cut the run time to roughly by modifications that allowed sieving. This improvement doubled the size of numbers that could be factored in a given amount of time. Coming around the time of the RSA algorithm, which depends on the difficulty of factoring for its security, this was a critically important result. Due to Schroeppel's apparent prejudice against publishing (though he freely circulated his ideas within the research community), and in spite of Pomerance noting that his quadratic sieve factoring algorithm owed a debt to Schroeppel's earlier work, the latter's contribution is often overlooked. (See the section on "Smooth Numbers" on pages 1476–1477 of Pomerance's "A Ta
https://en.wikipedia.org/wiki/Thread-local%20storage
In computer programming, thread-local storage (TLS) is a memory management method that uses static or global memory local to a thread. While the use of global variables is generally discouraged in modern programming, legacy operating systems such as UNIX are designed for uniprocessor hardware and require some additional mechanism to retain the semantics of pre-reentrant APIs. An example of such situations is where functions use a global variable to set an error condition (for example the global variable errno used by many functions of the C library). If errno were a global variable, a call of a system function on one thread may overwrite the value previously set by a call of a system function on a different thread, possibly before following code on that different thread could check for the error condition. The solution is to have errno be a variable that looks like it is global, but in fact exists once per thread—i.e., it lives in thread-local storage. A second use case would be multiple threads accumulating information into a global variable. To avoid a race condition, every access to this global variable would have to be protected by a mutex. Alternatively, each thread might accumulate into a thread-local variable (that, by definition, cannot be read from or written to from other threads, implying that there can be no race conditions). Threads then only have to synchronise a final accumulation from their own thread-local variable into a single, truly global variable. Many systems impose restrictions on the size of the thread-local memory block, in fact often rather tight limits. On the other hand, if a system can provide at least a memory address (pointer) sized variable thread-local, then this allows the use of arbitrarily sized memory blocks in a thread-local manner, by allocating such a memory block dynamically and storing the memory address of that block in the thread-local variable. On RISC machines, the calling convention often reserves a thread pointer re
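The errno pattern described above can be sketched with Python's built-in thread-local storage; the error-code names are illustrative.

# Each thread sees its own copy of `state.last_error`, so one thread's error
# code can never clobber another's, which is exactly the errno motivation.
import threading

state = threading.local()

def fallible_operation(value: int) -> int:
    """Stores a per-thread error code instead of a shared global one."""
    if value < 0:
        state.last_error = "negative input"
        return 0
    state.last_error = None
    return value * 2

def worker(value: int) -> None:
    result = fallible_operation(value)
    print(f"{threading.current_thread().name}: result={result} "
          f"error={state.last_error}")

threads = [threading.Thread(target=worker, args=(v,), name=f"t{v}") for v in (-1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()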
https://en.wikipedia.org/wiki/Viral%20replication
Viral replication is the formation of biological viruses during the infection process in the target host cells. Viruses must first get into the cell before viral replication can occur. Through the generation of abundant copies of its genome and packaging these copies, the virus continues infecting new hosts. Replication between viruses is greatly varied and depends on the type of genes involved in them. Most DNA viruses assemble in the nucleus while most RNA viruses develop solely in cytoplasm. Viral production / replication Viruses multiply only in living cells. The host cell must provide the energy and synthetic machinery and the low- molecular-weight precursors for the synthesis of viral proteins and nucleic acids. The virus replication occurs in seven stages, namely; Attachment Entry, Uncoating, Transcription / mRNA production, Synthesis of virus components, Virion assembly and Release (Liberation Stage). Attachment It is the first step of viral replication. The virus attaches to the cell membrane of the host cell. It then injects its DNA or RNA into the host to initiate infection. In animal cells these viruses get into the cell through the process of endocytosis which works through fusing of the virus and fusing of the viral envelope with the cell membrane of the animal cell and in plant cells it enters through the process of pinocytosis which works on pinching of the viruses. Entry The cell membrane of the host cell invaginates the virus particle, enclosing it in a pinocytotic vacuole. This protects the cell from antibodies like in the case of the HIV virus. Uncoating Cell enzymes (from lysosomes) strip off the virus protein coat. This releases or renders accessible the virus nucleic acid or genome. Transcription / mRNA production For some RNA viruses, the infecting RNA produces messenger RNA (mRNA), which can translate the genome into protein products. For viruses with negative stranded RNA, or DNA, viruses are produced by transcription then t
https://en.wikipedia.org/wiki/Deadband
A deadband or dead-band (also known as a dead zone or a neutral zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead' - no action occurs). Deadband regions can be used in control systems such as servoamplifiers to prevent oscillation or repeated activation-deactivation cycles (called 'hunting' in proportional control systems). A form of deadband that occurs in mechanical systems, compound machines such as gear trains is backlash. Voltage regulators In some power substations there are regulators that keep the voltage within certain predetermined limits, but there is a range of voltage in-between during which no changes are made, such as between 112 and 118 volts (the deadband is 6 volts), or between 215 to 225 volts (deadband is 10 volts). Backlash Gear teeth with slop (backlash) exhibit deadband. There is no drive from the input to the output shaft in either direction while the teeth are not meshed. Leadscrews generally also have backlash and hence a deadband, which must be taken into account when making position adjustments, especially with CNC systems. If mechanical backlash eliminators are not available, the control can compensate for backlash by adding the deadband value to the position vector whenever direction is reversed. Hysteresis versus Deadband Deadband is different from hysteresis. With hysteresis, there is no deadband and so the output is always in one direction or another. Devices with hysteresis have memory, in that previous system states dictate future states. Examples of devices with hysteresis are single-mode thermostats and smoke alarms. Deadband is the range in a process where no changes to output are made. Hysteresis is the difference in a variable depending on the direction of travel. Thermostats Simple (single mode) thermostats exhibit hysteresis. For example, the furnace in the basement of a house is adjusted automatically by
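A minimal deadband transfer function in Python; the band half-width of 3 units is an arbitrary illustrative choice.

# Inputs inside the band produce zero output; inputs outside it pass through
# with the band half-width subtracted so the response stays continuous.
def deadband(value: float, half_width: float = 3.0) -> float:
    if abs(value) <= half_width:
        return 0.0
    return value - half_width if value > 0 else value + half_width

for v in (-5, -2, 0, 2, 5):
    print(f"input {v:+d} -> output {deadband(v):+.1f}")
# A controller wrapped around this function ignores small errors (no hunting)
# and only acts once the error leaves the dead zone.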
https://en.wikipedia.org/wiki/Appell%20sequence
In mathematics, an Appell sequence, named after Paul Émile Appell, is any polynomial sequence satisfying the identity and in which is a non-zero constant. Among the most notable Appell sequences besides the trivial example are the Hermite polynomials, the Bernoulli polynomials, and the Euler polynomials. Every Appell sequence is a Sheffer sequence, but most Sheffer sequences are not Appell sequences. Appell sequences have a probabilistic interpretation as systems of moments. Equivalent characterizations of Appell sequences The following conditions on polynomial sequences can easily be seen to be equivalent: For , and is a non-zero constant; For some sequence of scalars with , For the same sequence of scalars, where For , Recursion formula Suppose where the last equality is taken to define the linear operator on the space of polynomials in . Let be the inverse operator, the coefficients being those of the usual reciprocal of a formal power series, so that In the conventions of the umbral calculus, one often treats this formal power series as representing the Appell sequence . One can define by using the usual power series expansion of the and the usual definition of composition of formal power series. Then we have (This formal differentiation of a power series in the differential operator is an instance of Pincherle differentiation.) In the case of Hermite polynomials, this reduces to the conventional recursion formula for that sequence. Subgroup of the Sheffer polynomials The set of all Appell sequences is closed under the operation of umbral composition of polynomial sequences, defined as follows. Suppose and are polynomial sequences, given by Then the umbral composition is the polynomial sequence whose th term is (the subscript appears in , since this is the th term of that sequence, but not in , since this refers to the sequence as a whole rather than one of its terms). Under this operation, the set of all Sheffer sequ
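The defining identity of an Appell sequence is p_n′(x) = n·p_{n−1}(x), with p_0 a non-zero constant; a short check (assuming SymPy is installed) confirms it for the Bernoulli polynomials named above.

# Check of the Appell identity p_n'(x) = n * p_{n-1}(x) for the Bernoulli
# polynomials B_n(x).
import sympy as sp

x = sp.symbols("x")

for n in range(1, 7):
    p_n = sp.bernoulli(n, x)          # Bernoulli polynomial B_n(x)
    p_prev = sp.bernoulli(n - 1, x)
    identity_holds = sp.simplify(sp.diff(p_n, x) - n * p_prev) == 0
    print(n, identity_holds)

# The probabilists' Hermite polynomials He_n satisfy the same identity; the
# physicists' normalization H_n gives H_n' = 2n H_{n-1} instead, which is why
# the choice of normalization matters when calling a sequence "Appell".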
https://en.wikipedia.org/wiki/Trigger%20strategy
In game theory, a trigger strategy is any of a class of strategies employed in a repeated non-cooperative game. A player using a trigger strategy initially cooperates but punishes the opponent if a certain level of defection (i.e., the trigger) is observed. The level of punishment and the sensitivity of the trigger vary with different trigger strategies. Trigger strategies Grim trigger (the punishment continues indefinitely after the other player defects just once) Tit for tat (the punishment continues as long as the other player defects) Tit for two tats (a more forgiving variant of tit for tat) References Textbooks and general reference texts Vives, X. (1999) Oligopoly pricing, MIT Press, Cambridge MA (readable; suitable for advanced undergraduates.) Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge MA (An organized introduction to industrial organization) Classical paper on this subject Friedman, J. (1971). A non-cooperative equilibrium for supergames, Review of Economic Studies 38, 1–12. (The first formal proof of the Folk theorem (game theory)). Non-cooperative games
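A toy Python sketch of the first two strategies in a repeated prisoner's dilemma; the opponent strategy and round count are illustrative assumptions, not part of the article.

# "C" = cooperate, "D" = defect.
def grim_trigger(my_history, opp_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp_history else "C"

def tit_for_tat(my_history, opp_history):
    # Punish only as long as the last observed move was a defection.
    return opp_history[-1] if opp_history else "C"

def defect_once_then_cooperate(my_history, opp_history):
    # Illustrative opponent that defects exactly once, in round 3.
    return "D" if len(my_history) == 2 else "C"

def play(strategy_a, strategy_b, rounds=8):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return "".join(hist_a), "".join(hist_b)

print(play(grim_trigger, defect_once_then_cooperate))   # grim trigger never forgives
print(play(tit_for_tat, defect_once_then_cooperate))    # tit for tat punishes only once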
https://en.wikipedia.org/wiki/Pr%C3%BCfer%20sequence
In combinatorial mathematics, the Prüfer sequence (also Prüfer code or Prüfer numbers) of a labeled tree is a unique sequence associated with the tree. The sequence for a tree on n vertices has length n − 2, and can be generated by a simple iterative algorithm. Prüfer sequences were first used by Heinz Prüfer to prove Cayley's formula in 1918. Algorithm to convert a tree into a Prüfer sequence One can generate a labeled tree's Prüfer sequence by iteratively removing vertices from the tree until only two vertices remain. Specifically, consider a labeled tree T with vertices {1, 2, ..., n}. At step i, remove the leaf with the smallest label and set the ith element of the Prüfer sequence to be the label of this leaf's neighbour. The Prüfer sequence of a labeled tree is unique and has length n − 2. Both coding and decoding can be reduced to integer radix sorting and parallelized. Example Consider the above algorithm run on the tree shown to the right. Initially, vertex 1 is the leaf with the smallest label, so it is removed first and 4 is put in the Prüfer sequence. Vertices 2 and 3 are removed next, so 4 is added twice more. Vertex 4 is now a leaf and has the smallest label, so it is removed and we append 5 to the sequence. We are left with only two vertices, so we stop. The tree's sequence is {4,4,4,5}. Algorithm to convert a Prüfer sequence into a tree Let {a[1], a[2], ..., a[n]} be a Prüfer sequence: The tree will have n+2 nodes, numbered from 1 to n+2. For each node set its degree to the number of times it appears in the sequence plus 1. For instance, in pseudo-code: Convert-Prüfer-to-Tree(a) 1 n ← length[a] 2 T ← a graph with n + 2 isolated nodes, numbered 1 to n + 2 3 degree ← an array of integers 4 for each node i in T do 5 degree[i] ← 1 6 for each value i in a do 7 degree[i] ← degree[i] + 1 Next, for each number in the sequence a[i], find the first (lowest-numbered) node, j, with degree equal to 1, add the edge (j, a[
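A direct Python translation of the two procedures (the simple sequential version, not the parallel radix-sort variant), run on the six-vertex example tree above:

# Both conversions, checked on the example tree whose Prüfer sequence is (4, 4, 4, 5).
from collections import defaultdict

def tree_to_pruefer(edges, n):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sequence = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)   # leaf with the smallest label
        neighbour = next(iter(adj[leaf]))
        sequence.append(neighbour)
        adj[neighbour].remove(leaf)
        del adj[leaf]
    return sequence

def pruefer_to_tree(sequence):
    n = len(sequence) + 2
    degree = {v: 1 for v in range(1, n + 1)}
    for v in sequence:
        degree[v] += 1                                   # occurrences + 1, as in the text
    edges = []
    for v in sequence:
        leaf = min(u for u in degree if degree[u] == 1)  # lowest-numbered degree-1 node
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    edges.append(tuple(u for u in degree if degree[u] == 1))   # join the last two nodes
    return edges

example = [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]
seq = tree_to_pruefer(example, 6)
print(seq)                      # [4, 4, 4, 5]
print(pruefer_to_tree(seq))     # reconstructs the same tree as an edge list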
https://en.wikipedia.org/wiki/Galvanoluminescence
Galvanoluminescence is the emission of light produced by the passage of an electric current through an appropriate electrolyte in which an electrode, made of certain metals such as aluminium or tantalum, has been immersed. An example is the electrolysis of sodium bromide (NaBr). Luminescence Materials science
https://en.wikipedia.org/wiki/Henk%20Barendregt
Hendrik Pieter (Henk) Barendregt (born 18 December 1947, Amsterdam) is a Dutch logician, known for his work in lambda calculus and type theory. Life and work Barendregt studied mathematical logic at Utrecht University, obtaining his master's degree in 1968 and his PhD in 1971, both cum laude, under Dirk van Dalen and Georg Kreisel. After a postdoctoral position at Stanford University, he taught at Utrecht University. Since 1986, Barendregt has taught at Radboud University Nijmegen, where he now holds the Chair of Foundations of Mathematics and Computer Science. His research group works on Constructive Interactive Mathematics. He is also Adjunct Professor at Carnegie Mellon University, Pittsburgh, USA. He has been a visiting scholar at Darmstadt, ETH Zürich, Siena, and Kyoto. Barendregt was elected a member of Academia Europaea in 1992. In 1997 Barendregt was elected member of the Royal Netherlands Academy of Arts and Sciences. On 6 February 2003 Barendregt was awarded the Spinozapremie for 2002, the highest scientific award in the Netherlands. In 2002 he was knighted in the Orde van de Nederlandse Leeuw. Barendregt received an honorary doctorate from Heriot-Watt University in 2015. Selected publications — See Errata References External links Barendregt's homepage Author profile in the database zbMATH 1947 births Living people Dutch computer scientists Mathematical logicians Members of Academia Europaea Members of the Royal Netherlands Academy of Arts and Sciences Academic staff of Radboud University Nijmegen Spinoza Prize winners Utrecht University alumni Scientists from Amsterdam Academic staff of Technische Universität Darmstadt
https://en.wikipedia.org/wiki/Ambient%20device
Ambient devices are a type of consumer electronics, characterized by their ability to be perceived at-a-glance, also known as "glanceable". Ambient devices use pre-attentive processing to display information and are aimed at minimizing mental effort. Associated fields include ubiquitous computing and calm technology. The concept is closely related to the Internet of Things. The New York Times Magazine announced ambient devices as one of its Ideas of the Year in 2002. The award recognized a start-up company, Ambient Devices, whose first product, Ambient Orb, was a frosted-glass ball lamp, which maps information to a linear color spectrum and displays the trend in the data. Other products in the genre include the 2008 Chumby, and the 2012 52-LED device MooresCloud (a reference to Moore's Law) from Australia. Research on ambient devices began at Xerox PARC, with a paper co-written by Mark Weiser and John Seely Brown, entitled Calm Computing. Purpose The purpose of ambient devices is to enable immediate and effortless access to information. The original developers of the idea state that an ambient device is designed to provide support to people carrying out everyday activities. Ambient devices decrease the effort needed to process incoming data, thus rendering individuals more productive. The key issue lies with taking Internet-based content (e.g. traffic congestion, weather conditions, stock market quotes) and mapping it into a single, usually one-dimensional spectrum (e.g. angle, colour). According to Rose, this presents data to an end user seamlessly, with an insignificant amount of cognitive load. History The concept of ambient devices can be traced back to the early 2000s, when preliminary research was carried out at Xerox PARC, according to the company’s official website. The MIT Media Lab website lists the venture as founded by David L. Rose, Ben Resner, Nabeel Hyatt and Pritesh Gandhi as a lab spin-off. Examples Ambient Orb was introduced by Ambient Devices i
https://en.wikipedia.org/wiki/Biomedical%20cybernetics
Biomedical cybernetics investigates signal processing, decision making and control structures in living organisms. Applications of this research field are in biology, ecology and health sciences. Fields Biological cybernetics Medical cybernetics Methods Connectionism Decision theory Information theory Systeomics Systems theory See also Cybernetics Prosthetics List of biomedical cybernetics software References Kitano, H. (Hrsg.) (2001). Foundations of Systems Biology. Cambridge (Massachusetts), London, MIT Press, . External links ResearchGate topic on biomedical cybernetics Cybernetics
https://en.wikipedia.org/wiki/Baghdad%20Tower
Baghdad Tower (Al-Ma'mun), previously called International Saddam Tower, is a TV tower in Baghdad, Iraq. The tower opened in 1994 and replaced a communications tower destroyed in the Gulf War. A revolving restaurant and observation deck are located on the top floor. After the 2003 invasion of Iraq, the tower was occupied by American soldiers and was renamed Baghdad Tower. History The tower's construction began in 1991, adjoining the al-Ma'mun Telecom Exchange. It was built to a height of 204 meters, with a revolving restaurant on top and the words "Allah is the Greatest" above the restaurant. The center of the tower consists of seven floors with a modern architectural style and is located on the land of the old pedal. It also became the main international gateway for receiving and sending all types of communications in Baghdad. Located in the Yarmouk neighborhood west of Baghdad, it was a major tourist site. On March 27, 2003, a week into the US-led invasion of the country, 4,700-pound bombs destroyed the al-Ma'mun Telecom Exchange and, along with it, damaged the tower, which was then abandoned. In 2007, the tower was to be the centerpiece of a short-lived urban renewal project by the Ministry of Communications, which ended three years later. During the project, a U.S. State Department advisor noted that it was the "only discernible sign of major building construction in Baghdad." For a while, the tower was once again abandoned as funds for ministry investments became limited due to the rise of ISIS in Northern Iraq. The tower was refurbished in 2016, and since then many projects and announcements have come up about the opening of the tower, but they have generally all been delayed or had their progress stopped for no clear reason. Interest in the tower was renewed in 2018, when former Iraqi President Barham Salih became involved. As of 2020, the tower remained closed despite public outcry. Gallery See also External links
https://en.wikipedia.org/wiki/Papain
Papain, also known as papaya proteinase I, is a cysteine protease () enzyme present in papaya (Carica papaya) and mountain papaya (Vasconcellea cundinamarcensis). It is the namesake member of the papain-like protease family. It has wide ranging commercial applications in the leather, cosmetic, textiles, detergents, food and pharmaceutical industries. In the food industry, papain is used as an active ingredient in many commercial meat tenderizers. Papain family Papain belongs to a family of related proteins, known as the papain-like protease family, with a wide variety of activities, including endopeptidases, aminopeptidases, dipeptidyl peptidases and enzymes with both exo- and endopeptidase activity. Members of the papain family are widespread, found in baculoviruses, eubacteria, yeast, and practically all protozoa, plants and mammals. The proteins are typically lysosomal or secreted, and proteolytic cleavage of the propeptide is required for enzyme activation, although bleomycin hydrolase is cytosolic in fungi and mammals. Papain-like cysteine proteinases are essentially synthesised as inactive proenzymes (zymogens) with N-terminal propeptide regions. The activation process of these enzymes includes the removal of propeptide regions, which serve a variety of functions in vivo and in vitro. The pro-region is required for the proper folding of the newly synthesised enzyme, the inactivation of the peptidase domain and stabilisation of the enzyme against denaturing at neutral to alkaline pH conditions. Amino acid residues within the pro-region mediate their membrane association, and play a role in the transport of the proenzyme to lysosomes. Among the most notable features of propeptides is their ability to inhibit the activity of their cognate enzymes and that certain propeptides exhibit high selectivity for inhibition of the peptidases from which they originate. Structure The papain precursor protein contains 345 amino acid residues, and consists of a signal seq
https://en.wikipedia.org/wiki/Isotopic%20labeling
Isotopic labeling (or isotopic labelling) is a technique used to track the passage of an isotope (an atom with a detectable variation in neutron count) through a reaction, metabolic pathway, or cell. The reactant is 'labeled' by replacing specific atoms by their isotope. The reactant is then allowed to undergo the reaction. The position of the isotopes in the products is measured to determine the sequence the isotopic atom followed in the reaction or the cell's metabolic pathway. The nuclides used in isotopic labeling may be stable nuclides or radionuclides. In the latter case, the labeling is called radiolabeling. In isotopic labeling, there are multiple ways to detect the presence of labeling isotopes; through their mass, vibrational mode, or radioactive decay. Mass spectrometry detects the difference in an isotope's mass, while infrared spectroscopy detects the difference in the isotope's vibrational modes. Nuclear magnetic resonance detects atoms with different gyromagnetic ratios. The radioactive decay can be detected through an ionization chamber or autoradiographs of gels. An example of the use of isotopic labeling is the study of phenol (C6H5OH) in water by replacing common hydrogen (protium) with deuterium (deuterium labeling). Upon adding phenol to deuterated water (water containing D2O in addition to the usual H2O), the substitution of deuterium for the hydrogen is observed in phenol's hydroxyl group (resulting in C6H5OD), indicating that phenol readily undergoes hydrogen-exchange reactions with water. Only the hydroxyl group is affected, indicating that the other 5 hydrogen atoms do not participate in the exchange reactions. Isotopic tracer An isotopic tracer, (also "isotopic marker" or "isotopic label"), is used in chemistry and biochemistry to help understand chemical reactions and interactions. In this technique, one or more of the atoms of the molecule of interest is substituted for an atom of the same chemical element, but of a different isotope
https://en.wikipedia.org/wiki/Photon%20noise
Photon noise is the randomness in signal associated with photons arriving at a detector. For a simple black body emitting on an absorber, the noise-equivalent power is given by where is the Planck constant, is the central frequency, is the bandwidth, is the occupation number and is the optical efficiency. The first term is essentially shot noise whereas the second term is related to the bosonic character of photons, variously known as "Bose noise" or "wave noise". At low occupation number, such as in the visible spectrum, the shot noise term dominates. At high occupation number, however, typical of the radio spectrum, the Bose term dominates. See also Hanbury Brown and Twiss effect Phonon noise References Further reading Photons Energy Signal processing Particle statistics Photodetectors
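As an illustration of the two regimes described above (added here, not part of the article), the mean occupation number of thermal radiation is the Bose–Einstein factor n = 1/(exp(hν/kT) − 1); it is small at optical frequencies and large at radio frequencies. The source temperature and frequencies below are arbitrary.

import math

h = 6.626e-34   # Planck constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K

def occupation(nu_hz, temperature_k):
    # Bose-Einstein mean photon number per mode for a thermal source
    return 1.0 / math.expm1(h * nu_hz / (k * temperature_k))

# A roughly solar (6000 K) source seen at an optical and at a radio frequency:
print(occupation(5e14, 6000))   # visible, ~0.02  -> shot-noise regime
print(occupation(1e9, 6000))    # 1 GHz radio, ~1e5 -> Bose/"wave noise" regime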
https://en.wikipedia.org/wiki/Genetic%20anthropomorphism
In evolutionary biology, genetic anthropomorphism refers to "thinking like a gene". The central question is "if I were a gene, what would I do in order to reproduce myself". The question is an obvious fallacy since genes are incapable of thought. However, natural selection does act in such a way that those that are most successful at reproducing themselves (by following the optimum strategy) prosper. Thinking like a gene enables the results to be visualised. This is related to a philosophical tool known as the intentional stance. The most notable genetic anthropomorphist was the British biologist, W. D. Hamilton. Hamilton's friend, Richard Dawkins, popularised the idea. Anthropomorphism has been criticised on a number of grounds, including that it is reductionist. Evolutionary biology
https://en.wikipedia.org/wiki/Symbolic%20integration
In calculus, symbolic integration is the problem of finding a formula for the antiderivative, or indefinite integral, of a given function f(x), i.e. to find a differentiable function F(x) such that This is also denoted Discussion The term symbolic is used to distinguish this problem from that of numerical integration, where the value of F is sought at a particular input or set of inputs, rather than a general formula for F. Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances. Finding the derivative of an expression is a straightforward process for which it is easy to construct an algorithm. The reverse question of finding the integral is much more difficult. Many expressions which are relatively simple do not have integrals that can be expressed in closed form. See antiderivative and nonelementary integral for more details. A procedure called the Risch algorithm exists which is capable of determining whether the integral of an elementary function (function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combinations using the four elementary operations) is elementary and returning it if it is. In its original form, Risch algorithm was not suitable for a direct implementation, and its complete implementation took a long time. It was first implemented in Reduce in the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. Davenport; the general case was solved by Manuel Bronstein, who implemented almost all of it in Axiom, though to date there is no implementation of the Risch algorithm which can deal with all of the special cases and branches in it. However, the Risch algorithm applies only to indefinite integrals, while most of th
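As a quick illustration (not part of the article), a computer algebra system such as SymPy returns a closed-form antiderivative when it finds one and otherwise falls back to special functions or an unevaluated integral:

from sympy import symbols, integrate, exp, sin

x = symbols('x')

print(integrate(x * sin(x), x))      # -x*cos(x) + sin(x)  (elementary antiderivative)
print(integrate(exp(-x**2), x))      # sqrt(pi)*erf(x)/2   (not elementary; uses erf)
print(integrate(sin(x) / x, x))      # Si(x)               (expressed via the sine integral)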
https://en.wikipedia.org/wiki/Spectral%20efficiency
Spectral efficiency, spectrum efficiency or bandwidth efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the medium access control (the channel access protocol). Link spectral efficiency The link spectral efficiency of a digital communication system is measured in bit/s/Hz, or, less frequently but unambiguously, in (bit/s)/Hz. It is the net bit rate (useful information rate excluding error-correcting codes) or maximum throughput divided by the bandwidth in hertz of a communication channel or a data link. Alternatively, the spectral efficiency may be measured in bit/symbol, which is equivalent to bits per channel use (bpcu), implying that the net bit rate is divided by the symbol rate (modulation rate) or line code pulse rate. Link spectral efficiency is typically used to analyze the efficiency of a digital modulation method or line code, sometimes in combination with a forward error correction (FEC) code and other physical layer overhead. In the latter case, a "bit" refers to a user data bit; FEC overhead is always excluded. The modulation efficiency in bit/s is the gross bit rate (including any error-correcting code) divided by the bandwidth. Example 1: A transmission technique using one kilohertz of bandwidth to transmit 1,000 bits per second has a modulation efficiency of 1 (bit/s)/Hz. Example 2: A V.92 modem for the telephone network can transfer 56,000 bit/s downstream and 48,000 bit/s upstream over an analog telephone network. Due to filtering in the telephone exchange, the frequency range is limited to between 300 hertz and 3,400 hertz, corresponding to a bandwidth of 3,400 − 300 = 3,100 hertz. The spectral efficiency or modulation efficiency is 56,000/3,100 = 18.1 (bit/s)/Hz downstream, and 48,000/3,100 = 15.5 (bit/s)/Hz upstream. An upper bound for the att
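The figures in the examples above follow directly from dividing the net bit rate by the bandwidth; a small Python sketch added here for illustration:

def spectral_efficiency(bit_rate_bps, bandwidth_hz):
    # link spectral efficiency in (bit/s)/Hz
    return bit_rate_bps / bandwidth_hz

bandwidth = 3400 - 300                           # telephone channel, 3100 Hz
print(spectral_efficiency(56_000, bandwidth))    # ~18.1 (bit/s)/Hz downstream (Example 2)
print(spectral_efficiency(48_000, bandwidth))    # ~15.5 (bit/s)/Hz upstream (Example 2)
print(spectral_efficiency(1_000, 1_000))         # 1 (bit/s)/Hz (Example 1)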
https://en.wikipedia.org/wiki/Millionth
One millionth is equal to 0.000 001, or 1 × 10−6 in scientific notation. It is the reciprocal of a million, and can also be written as 1/1,000,000. Units using this fraction can be indicated using the prefix "micro-" from Greek, meaning "small". Numbers of this quantity are expressed in terms of μ (the Greek letter mu). "Millionth" can also mean the ordinal number that comes after the nine hundred ninety-nine thousand, nine hundred ninety-ninth and before the one million and first. See also International System of Units Micro- International Map of the World Order of magnitude (numbers) Order of magnitude Parts-per notation Per mille References Fractions (mathematics) Rational numbers
https://en.wikipedia.org/wiki/Evolutionary%20arms%20race
In evolutionary biology, an evolutionary arms race is an ongoing struggle between competing sets of co-evolving genes, phenotypic and behavioral traits that develop escalating adaptations and counter-adaptations against each other, resembling the geopolitical concept of an arms race. These are often described as examples of positive feedback. The co-evolving gene sets may be in different species, as in an evolutionary arms race between a predator species and its prey (Vermeij, 1987), or a parasite and its host. Alternatively, the arms race may be between members of the same species, as in the manipulation/sales resistance model of communication (Dawkins & Krebs, 1979) or as in runaway evolution or Red Queen effects. One example of an evolutionary arms race is in sexual conflict between the sexes, often described with the term Fisherian runaway. Thierry Lodé emphasized the role of such antagonistic interactions in evolution leading to character displacements and antagonistic coevolution. Symmetrical versus asymmetrical arms races Arms races may be classified as either symmetrical or asymmetrical. In a symmetrical arms race, selection pressure acts on participants in the same direction. An example of this is trees growing taller as a result of competition for light, where the selective advantage for either species is increased height. An asymmetrical arms race involves contrasting selection pressures, such as the case of cheetahs and gazelles, where cheetahs evolve to be better at hunting and killing while gazelles evolve not to hunt and kill, but rather to evade capture. Host-parasite dynamic Selective pressure between two species can include host-parasite coevolution. This antagonistic relationship leads to the necessity for the pathogen to have the best virulent alleles to infect the organism and for the host to have the best resistant alleles to survive parasitism. As a consequence, allele frequencies vary through time depending on the size of virulent an
https://en.wikipedia.org/wiki/Parasite%20load
Parasite load is a measure of the number and virulence of the parasites that a host organism harbours. Quantitative parasitology deals with measures to quantify parasite loads in samples of hosts and to make statistical comparisons of parasitism across host samples. In evolutionary biology, parasite load has important implications for sexual selection and the evolution of sex, as well as openness to experience. Infection and distribution A single parasite species usually has an aggregated distribution across host individuals, which means that most hosts harbor few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology: use of parametric statistics should be avoided. Log-transformation of data before the application of parametric test, or the use of non-parametric statistics is often recommended. However, this can give rise to further problems. Therefore, modern day quantitative parasitology is based on more advanced biostatistical methods. In vertebrates, males frequently carry higher parasite loads than females. Differences in movement patterns, habitat choice, diet, body size, and ornamentation are all thought to contribute to this sex bias observed in parasite loads. Often males have larger habitat ranges and thus are likely to encounter more parasite-dense areas than female conspecifics. Whenever sexual dimorphism is exhibited in species, the larger sex is thought to tolerate higher parasite loads. In insects, susceptibility to parasite load has been linked to genetic variation in the insect colony. In colonies of Hymenoptera (ants, bees and wasps), colonies with high genetic variation that were exposed to parasites experienced lesser parasite loads than colonies that are more genetically similar. Methods of quantifying Depending on the parasitic species in question, various methods of quantification allow scientists to measure the numbers of parasites present an
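As a rough illustration of the aggregation described above (not part of the article), the sketch below draws simulated parasite counts from a negative binomial distribution, a model commonly used in quantitative parasitology for aggregated loads; the particular parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
mean, k = 10.0, 0.5                 # mean load and dispersion (aggregation) parameter
p = k / (k + mean)                  # NumPy's negative_binomial is parameterised by (n, p)
loads = rng.negative_binomial(k, p, size=10_000)

loads_sorted = np.sort(loads)[::-1]
top_tenth_share = loads_sorted[:1_000].sum() / loads.sum()
print(f"mean load: {loads.mean():.1f}")
print(f"share of all parasites carried by the top 10% of hosts: {top_tenth_share:.0%}")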
https://en.wikipedia.org/wiki/Bandwidth%20throttling
Bandwidth throttling consists of the intentional limitation of the communication speed (bytes or kilobytes per second) of the ingoing (received) or outgoing (sent) data in a network node or in a network device. The data speed and rendering may be limited depending on various parameters and conditions. Overview Limiting the speed of data sent by a data originator (a client computer or a server computer) is much more efficient than limiting the speed in an intermediate network device between client and server, because in the first case usually no network packets are lost, whereas in the second case network packets can be lost or discarded whenever the incoming data speed exceeds the bandwidth limit or the capacity of the device and data packets cannot be temporarily stored in a buffer queue (because it is full or it does not exist); the purpose of such a buffer queue is to absorb peaks of incoming data for very short time lapses. In the second case, discarded data packets can be resent by the transmitter and received again. When a low-level network device discards incoming data packets, it can usually also notify the data transmitter of that fact in order to slow down the transmission speed (see also network congestion). NOTE: Bandwidth throttling should not be confused with rate limiting, which operates on client requests at application server level and/or at network management level (i.e. by inspecting protocol data packets). Rate limiting can also help in keeping peaks of data speed under control. These bandwidth limitations can be implemented: at application level (in a client program or a server program, e.g. an FTP server or a web server), which can be run and configured to throttle data sent through the network or even data received from the network (by reading data at no more than a throttled amount per second); or at network level (typically done by an ISP). Throttling at application level (in the client/server program) is usually unproblematic because it is the choice of the client manager or the server manager (server administrator) to limit
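A minimal Python sketch (not from the article) of source-side throttling as described above: the sender writes in chunks and sleeps whenever it gets ahead of the configured rate, so nothing has to be discarded downstream. The function and parameter names are made up.

import time

def send_throttled(data: bytes, write, max_bytes_per_sec: int, chunk_size: int = 4096):
    start = time.monotonic()
    sent = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        write(chunk)
        sent += len(chunk)
        # If we are ahead of schedule, sleep until the average rate is back under the cap.
        expected_time = sent / max_bytes_per_sec
        elapsed = time.monotonic() - start
        if expected_time > elapsed:
            time.sleep(expected_time - elapsed)

# Example: "send" 1 MB to a no-op sink at 512 KB/s (takes about 2 seconds).
send_throttled(b"x" * 1_048_576, write=lambda chunk: None, max_bytes_per_sec=512 * 1024)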
https://en.wikipedia.org/wiki/Windthrow
In forestry, windthrow refers to trees uprooted by wind. Breakage of the tree bole (trunk) instead of uprooting is called windsnap. Blowdown refers to both windthrow and windsnap. Causes Windthrow is common in all forested parts of the world that experience storms or high wind speeds. The risk of windthrow to a tree is related to the tree's size (height and diameter), the 'sail area' presented by its crown, the anchorage provided by its roots, its exposure to the wind, and the local wind climate. A common way of quantifying the risk of windthrow to a forest area is to model the probability or 'return time' of a wind speed that would damage those trees at that location. Another potential method is the detection of scattered windthrow based on satellite images. Tree senescence can also be a factor, where multiple factors contributing to the declining health of a tree reduce its anchorage and therefore increase its susceptibility to windthrow. The resulting damage can be a significant factor in the development of a forest. Windthrow can also increase following logging, especially in young forests managed specifically for timber. The removal of trees at a forest's edge increases the exposure of the remaining trees to the wind. Trees that grow adjacent to lakes or other natural forest edges, or in exposed situations such as hill sides, develop greater rooting strength through growth feedback with wind movement, i.e. 'adaptive' or 'acclimative' growth. If a tree does not experience much wind movement during the stem exclusion phase of stand succession, it is not likely to develop a resistance to wind. Thus, when a fully or partially developed stand is bisected by a new road or by a clearcut, the trees on the new edge are less supported by neighbouring trees than they were and may not be capable of withstanding the higher forces which they now experience. Trees with heavy growths of ivy, wisteria, or kudzu are already stressed and may be more susceptible to windthrow,
https://en.wikipedia.org/wiki/Antrum
This is a disambiguation page for the biological term. For the 2018 horror movie, see Antrum (film). In biology, antrum is a general term for a cavity or chamber, which may have specific meaning in reference to certain organs or sites in the body. In vertebrates, it may refer specifically to: Antrum follicularum, the cavity in the epithelium that envelops the oocyte Mastoid antrum, a cavity between the middle ear and temporal bone in the skull Stomach antrum, either Pyloric antrum, the lower portion of the stomach. This is what is usually referred to as "antrum" in stomach-related topics or Antrum cardiacum, a dilation that occurs in the esophagus near the stomach (forestomach) Maxillary antrum or antrum of Highmore, the maxillary sinus, a cavity in the maxilla and the largest of the paranasal sinuses In invertebrates, it may refer specifically to: Antrum of female lepidoptera genitalia Anatomy
https://en.wikipedia.org/wiki/Artificial%20intelligence%20in%20video%20games
In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors primarily in non-player characters (NPCs) similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in the 1950s. AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather than machine learning or decision making. During the golden age of arcade video games the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural-content generation. In general, game AI does not, as might be thought and sometimes is depicted to be the case, mean a realization of an artificial person corresponding to an NPC in the manner of the Turing test or an artificial general intelligence. Overview The term "game AI" is used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs. Many industries and corporate voices claim that so-called video game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in the
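As an illustration of the pathfinding techniques mentioned above (added here, not part of the article), the sketch below moves through a tile map with a plain breadth-first search; production games more commonly use A* with a heuristic, and the map below is made up. '#' marks a wall.

from collections import deque

def find_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # no route for the NPC
    path, node = [], goal
    while node is not None:              # walk the parent links back to the start
        path.append(node)
        node = came_from[node]
    return path[::-1]

level = ["....#....",
         ".##.#.##.",
         ".#.....#.",
         ".#.###.#.",
         "....#...."]
print(find_path(level, (0, 0), (4, 8)))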
https://en.wikipedia.org/wiki/List%20of%20video%20editing%20software
The following is a list of video editing software. The criterion for inclusion in this list is the ability to perform non-linear video editing. Most modern transcoding software supports transcoding a portion of a video clip, which would count as cropping and trimming. However, items in this article have one of the following conditions: Can perform other non-linear video editing function such as montage or compositing Can do the trimming or cropping without transcoding Free (libre) or open-source The software listed in this section is either free software or open source, and may or may not be commercial. Active and stable Avidemux (Linux, macOS, Windows) Losslesscut (Linux, macOS, Windows) Blender VSE (Linux, FreeBSD, macOS, Windows) Cinelerra (Linux, FreeBSD) FFmpeg (Linux, macOS, Windows) – CLI only; no visual feedback Flowblade (Linux) Kdenlive (Linux, FreeBSD, macOS, Windows) LiVES (BSD, IRIX, Linux, Solaris) Olive (Linux, macOS, Windows) - currently in alpha OpenShot (Linux, FreeBSD, macOS, Windows) Pitivi (Linux, FreeBSD) Shotcut (Linux, FreeBSD, macOS, Windows) Inactive Kino (Linux, FreeBSD) VirtualDub (Windows) VirtualDubMod (Windows) VideoLan Movie Creator (VLMC) (Linux, macOS, Windows) Proprietary (non-commercial) The software listed in this section is proprietary, and freeware or freemium. Active ActivePresenter (Windows) – Also screencast software DaVinci Resolve (macOS, Windows, Linux) Freemake Video Converter (Windows) iMovie (iOS, macOS) ivsEdits (Windows) Lightworks (Windows, Linux, macOS) Microsoft Photos (Windows) showbox.com (Windows, macOS) VideoPad Home Edition (Windows, macOS, iPad, Android) VSDC Free Video Editor (Windows) WeVideo (Web app) YouTube Create (Android) Discontinued Adobe Premiere Express (Web app) Pixorial (Web app) VideoThang (Windows) Windows Movie Maker (Windows) Proprietary (commercial) The software listed in this section is proprietary and commercial. Active Adobe After Effects (macOS, Windows) Adobe Premier
https://en.wikipedia.org/wiki/Tutte%20theorem
In the mathematical discipline of graph theory, the Tutte theorem, named after William Thomas Tutte, is a characterization of finite undirected graphs with perfect matchings. It is a generalization of Hall's marriage theorem from bipartite to arbitrary graphs. It is a special case of the Tutte–Berge formula. Intuition The goal is to characterize all graphs that do not have a perfect matching. Start with the most obvious case of a graph without a perfect matching: a graph with an odd number of vertices. In such a graph, any matching leaves at least one unmatched vertex, so it cannot be perfect. A slightly more general case is a disconnected graph in which one or more components have an odd number of vertices (even if the total number of vertices is even). Let us call such components odd components. In any matching, each vertex can only be matched to vertices in the same component. Therefore, any matching leaves at least one unmatched vertex in every odd component, so it cannot be perfect. Next, consider a graph G with a vertex u such that, if we remove from G the vertex u and its adjacent edges, the remaining graph (denoted G − u) has two or more odd components. As above, any matching leaves, in every odd component, at least one vertex that is unmatched to other vertices in the same component. Such a vertex can only be matched to u. But since there are two or more unmatched vertices, and only one of them can be matched to u, at least one other vertex remains unmatched, so the matching is not perfect. Finally, consider a graph G with a set U of vertices such that, if we remove from G the vertices in U and all edges adjacent to them, the remaining graph (denoted G − U) has more than |U| odd components. As explained above, any matching leaves at least one unmatched vertex in every odd component, and these can be matched only to vertices of U - but there are not enough vertices in U for all these unmatched vertices, so the matching is not perfect. We have arrived at a necessary co
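The condition behind the theorem can be checked by brute force on tiny graphs. The following NetworkX sketch (illustrative, not from the article) tests, for every vertex subset U, whether G − U has at most |U| odd components; it is exponential in the number of vertices and only meant for small examples.

from itertools import combinations
import networkx as nx

def satisfies_tutte_condition(G):
    nodes = list(G.nodes)
    for size in range(len(nodes) + 1):
        for U in combinations(nodes, size):
            H = G.subgraph(n for n in nodes if n not in U)          # G - U
            odd = sum(1 for comp in nx.connected_components(H) if len(comp) % 2 == 1)
            if odd > len(U):
                return False
    return True

# A 4-cycle has a perfect matching; a path on 3 vertices does not.
print(satisfies_tutte_condition(nx.cycle_graph(4)))   # True
print(satisfies_tutte_condition(nx.path_graph(3)))    # False (take U = empty set)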
https://en.wikipedia.org/wiki/Graph%20property
In graph theory, a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph. Definitions While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph. Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "graph does not have vertices of degree 1" is a "property" while "the number of vertices of degree 1 in a graph" is an "invariant". More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers, sequences of numbers, or polynomials, that again has the same value for any two isomorphic graphs. Properties of properties Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs: A graph property P is hereditary if every induced subgraph of a graph with property P also has property P. For instance, being a perfect graph or being a chordal graph are hereditary properties. A graph property is monotone if every subgraph of a graph with property P al
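The following small NetworkX sketch (an illustration added here, not from the article) checks that relabelling a graph changes its representation but leaves typical invariants, such as the degree sequence and the triangle count, unchanged.

import networkx as nx

G = nx.petersen_graph()
H = nx.relabel_nodes(G, {v: f"node-{v}" for v in G.nodes})   # a different labelling of the same graph

print(nx.is_isomorphic(G, H))                                                 # True
print(sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree()))   # True: same degree sequence
print(sum(nx.triangles(G).values()) == sum(nx.triangles(H).values()))         # True: same triangle count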
https://en.wikipedia.org/wiki/PRIMOS
PRIMOS is a discontinued operating system developed during the 1970s by Prime Computer for its minicomputer systems. It rapidly gained popularity and by the mid-1980s was a serious contender as a mainline minicomputer operating system. With the advent of PCs and the decline of the minicomputer industry, Prime was forced out of the market in the early 1990s, and by the end of 2010 the trademarks for both PRIME and PRIMOS no longer existed. Prime had also offered a customizable real-time OS called RTOS. Internals One feature of PRIMOS was that it, like UNIX, was largely written in a high level language (with callable assembly language library functions available). At first, this language was FORTRAN IV, which was an odd choice from a pure computer science standpoint: no pointers, no if-then-else, no native string type, etc. FORTRAN was, however, the language most known to engineers, and engineers were a big market for Prime in their early years. The unusual choice of FORTRAN for the OS programming language had to do with the people who founded Prime. They had worked for Honeywell on a NASA project. FORTRAN was the language they had used both at NASA and, for many of them, at MIT. Honeywell, at that time, was uninterested in minicomputers, so they left and founded Prime, "taking" the code with them. They developed hardware optimized to run FORTRAN, including machine instructions that directly implemented FORTRAN's distinctive 3-way branch operation. Since Prime's hardware did not perform byte addressing, there was no impetus to create a C compiler. Late models of the hardware were eventually modified to support I-mode, and programs compiled in C. Later, around version 18, a version of PL/1, called PL/P, became the high level language of choice within PRIMOS, and the PL/P and Modula-2 languages were used in the Kernel. Furthermore, some new PRIMOS utilities were written in SP/L, which was similar to PL/P. The source code to PRIMOS was available to customers
https://en.wikipedia.org/wiki/Baseline%20%28configuration%20management%29
In configuration management, a baseline is an agreed description of the attributes of a product, at a point in time, which serves as a basis for defining change. A change is a movement from this baseline state to a next state. The identification of significant changes from the baseline state is the central purpose of baseline identification. Typically, significant states are those that receive a formal approval status, either explicitly or implicitly. An approval status may be attributed to individual items, when a prior definition for that status has been established by project leaders, or signified by mere association to a particular established baseline. Nevertheless, this approval status is usually recognized publicly. A baseline may be established for the singular purpose of marking an approved configuration item, e.g. a project plan that has been signed off for execution. Associating multiple configuration items to such a baseline indicates those items as also being approved. Baselines may also be used to mark milestones. A baseline may refer to a single work product, or a set of work products that can be used as a logical basis for comparison. Most baselines are established at a fixed point in time and serve to continue to reference that point (identification of state). However, some baselines, dynamic baselines, are established to carry forward as a reference to the item itself regardless of any changes to the item. These latter baselines evolve with the progression of the work effort but continue to identify notable work products in the project. Retrieving such a dynamic baseline obtains the current revision of only these notable items in the project. While marking approval status covers the majority of uses for a baseline, multiple fixed baselines may also be established to monitor the progress of work through the passage of time. In this case, each baseline is a visible measure through an endured team effort, e.g. a series of developmental baselines. T
https://en.wikipedia.org/wiki/Derivation%20of%20the%20Schwarzschild%20solution
The Schwarzschild solution describes spacetime under the influence of a massive, non-rotating, spherically symmetric object. It is considered by some to be one of the simplest and most useful solutions to the Einstein field equations. Assumptions and notation Working in a coordinate chart with coordinates (r, θ, φ, t) labelled 1 to 4 respectively, we begin with the metric in its most general form (10 independent components, each of which is a smooth function of 4 variables). The solution is assumed to be spherically symmetric, static and vacuum. For the purposes of this article, these assumptions may be stated as follows (see the relevant links for precise definitions): A spherically symmetric spacetime is one that is invariant under rotations and taking the mirror image. A static spacetime is one in which all metric components are independent of the time coordinate t (so that ∂g_{μν}/∂t = 0) and the geometry of the spacetime is unchanged under a time-reversal t → −t. A vacuum solution is one that satisfies the equation T_{ab} = 0. From the Einstein field equations (with zero cosmological constant), this implies that R_{ab} = 0, since contracting R_{ab} − (R/2) g_{ab} = 0 yields R = 0. Metric signature used here is (+,+,+,−). Diagonalising the metric The first simplification to be made is to diagonalise the metric. Under the coordinate transformation (r, θ, φ, t) → (r, θ, φ, −t), all metric components should remain the same. The metric components g_{μ4} (μ ≠ 4) change under this transformation as: g'_{μ4} = (∂x^a/∂x'^μ)(∂x^b/∂x'^4) g_{ab} = −g_{μ4} (μ ≠ 4). But, as we expect g'_{μ4} = g_{μ4} (metric components remain the same), this means that: g_{μ4} = 0 (μ ≠ 4). Similarly, the coordinate transformations (r, θ, φ, t) → (r, π − θ, φ, t) and (r, θ, φ, t) → (r, θ, −φ, t) respectively give: g_{μ2} = 0 (μ ≠ 2) and g_{μ3} = 0 (μ ≠ 3). Putting all these together gives: g_{μν} = 0 (μ ≠ ν), and hence the metric must be of the form: ds² = g_{11} dr² + g_{22} dθ² + g_{33} dφ² + g_{44} dt², where the four metric components are independent of the time coordinate t (by the static assumption). Simplifying the components On each hypersurface of constant t, constant θ and constant φ (i.e., on each radial line), g_{11} should only depend on r (by spherical symmetry). Hence g_{11} is a function of a single variable: g_{11} = A(r). A similar argument applied to
https://en.wikipedia.org/wiki/PBASIC
PBASIC is a microcontroller-based version of BASIC created by Parallax, Inc. in 1992. PBASIC was created to bring ease of use to the microcontroller and embedded processor world. It is used for writing code for the BASIC Stamp microcontrollers. After the code is written, it is tokenized and loaded into an EEPROM on the microcontroller. These tokens are fetched by the microcontroller and used to generate instructions for the processor. Syntax When starting a PBASIC file, the programmer defines the version of the BASIC Stamp and the version of PBASIC that will be used. Variables and constants are usually declared first thing in a program. The DO LOOP, FOR NEXT loop, IF and ENDIF, and some standard BASIC commands are part of the language, but many commands like PULSOUT, HIGH, LOW, DEBUG, and FREQOUT are native to PBASIC and are used for special purposes that are not available in traditional BASIC (such as having the Basic Stamp ring a piezoelectric speaker, for example). Programming In the Stamp Editor, the PBASIC integrated development environment (IDE) running on a (Windows) PC, the programmer has to select 1 of 7 different basic stamps, BS1, BS2, BS2E, BS2SX, BS2P, BS2PE, and BS2PX, which is done by using one of these commands: ' {$STAMP BS1} ' {$STAMP BS2} ' {$STAMP BS2e} ' {$STAMP BS2sx} ' {$STAMP BS2p} ' {$STAMP BS2pe} ' {$STAMP BS2px} The programmer must also select which PBASIC version to use, which he or she may express with commands such as these: ' {$PBASIC 1.0} ' use version 1.0 syntax (BS1 only) ' {$PBASIC 2.0} ' use version 2.0 syntax ' {$PBASIC 2.5} ' use version 2.5 syntax An example of a program using HIGH and LOW to make an LED blink, along with a DO...LOOP would be: DO HIGH 1 'turn LED on I/O pin 1 on PAUSE 1000 'keep it on for 1 second LOW 1 'turn it off PAUSE 500 'keep it off for 500 msec LOOP 'repeat forever An example of a pr
https://en.wikipedia.org/wiki/Wheel%20graph
In the mathematical discipline of graph theory, a wheel graph is a graph formed by connecting a single universal vertex to all vertices of a cycle. A wheel graph with vertices can also be defined as the 1-skeleton of an pyramid. Some authors write to denote a wheel graph with vertices (); other authors instead use to denote a wheel graph with vertices (), which is formed by connecting a single vertex to all vertices of a cycle of length . The rest of this article uses the former notation. Set-builder construction Given a vertex set of {1, 2, 3, …, v}, the edge set of the wheel graph can be represented in set-builder notation by {{1, 2}, {1, 3}, …, {1, v}, {2, 3}, {3, 4}, …, {v − 1, v}, {v, 2}}. Properties Wheel graphs are planar graphs, and have a unique planar embedding. More specifically, every wheel graph is a Halin graph. They are self-dual: the planar dual of any wheel graph is an isomorphic graph. Every maximal planar graph, other than K4 = W4, contains as a subgraph either W5 or W6. There is always a Hamiltonian cycle in the wheel graph and there are cycles in Wn . For odd values of n, Wn is a perfect graph with chromatic number 3: the vertices of the cycle can be given two colors, and the center vertex given a third color. For even n, Wn has chromatic number 4, and (when n ≥ 6) is not perfect. W7 is the only wheel graph that is a unit distance graph in the Euclidean plane. The chromatic polynomial of the wheel graph Wn is : In matroid theory, two particularly important special classes of matroids are the wheel matroids and the whirl matroids, both derived from wheel graphs. The k-wheel matroid is the graphic matroid of a wheel Wk+1, while the k-whirl matroid is derived from the k-wheel by considering the outer cycle of the wheel, as well as all of its spanning trees, to be independent. The wheel W6 supplied a counterexample to a conjecture of Paul Erdős on Ramsey theory: he had conjectured that the complete graph has the smallest Ramsey num
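The following NetworkX sketch (illustrative, not from the article) builds a wheel graph from the set-builder description above, with hub 1 joined to every vertex of the cycle 2, 3, ..., v, and checks a few of the listed properties; the helper name wheel is arbitrary.

import networkx as nx

def wheel(v):
    G = nx.Graph()
    G.add_edges_from((1, i) for i in range(2, v + 1))   # spokes {1, i}
    G.add_edges_from((i, i + 1) for i in range(2, v))   # rim edges {i, i+1}
    G.add_edge(v, 2)                                    # closing rim edge {v, 2}
    return G

W6 = wheel(6)
print(nx.is_isomorphic(W6, nx.wheel_graph(6)))   # True: matches NetworkX's own wheel construction
print(nx.check_planarity(W6)[0])                 # True: wheel graphs are planar
colours = nx.coloring.greedy_color(W6, strategy="DSATUR")
print(len(set(colours.values())))                # 4: the rim of W6 is the odd cycle C5, so 4 colours are needed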
https://en.wikipedia.org/wiki/Product%20software%20implementation%20method
A product software implementation method is a systematically structured approach to effectively integrate a software based service or component into the workflow of an organizational structure or an individual end-user. This entry focuses on the process modeling (Process Modeling) side of the implementation of “large” (explained in complexity differences) product software, using the implementation of Enterprise Resource Planning systems as the main example to elaborate on. Overview A product software implementation method is a blueprint to get users and/or organizations running with a specific software product. The method is a set of rules and views to cope with the most common issues that occur when implementing a software product: business alignment from the organizational view and acceptance from human view. The implementation of product software, as the final link in the deployment chain of software production, is in a financial perspective a major issue. It is stated that the implementation of (product) software consumes up to 1/3 of the budget of a software purchase (more than hardware and software requirements together). Implementation complexity differences The complexity of implementing product software differs on several issues. Examples are: the number of end users that will use the product software, the effects that the implementation has on changes of tasks and responsibilities for the end user, the culture and the integrity of the organization where the software is going to be used and the budget available for acquiring product software. In general, differences are identified on a scale of size (bigger, smaller, more, less). An example of the “smaller” product software is the implementation of an office package. However there could be a lot of end users in an organization, the impact on the tasks and responsibilities of the end users will not be too intense, as the daily workflow of the end user is not changing significantly. An example of “lar
https://en.wikipedia.org/wiki/1023%20%28number%29
1023 (one thousand [and] twenty-three) is the natural number following 1022 and preceding 1024. In mathematics 1023 is the tenth Mersenne number, of the form 2^n − 1 (here 2^10 − 1). In binary, it is also the tenth repdigit, 1111111111, as all Mersenne numbers in decimal are repdigits in binary. It is equal to the sum of five consecutive prime numbers 193 + 197 + 199 + 211 + 223. It is the number of three-dimensional polycubes with 7 cells. 1023 is the number of elements in the 9-simplex, as well as the number of uniform polytopes in the tenth-dimensional hypercubic family, and the number of noncompact solutions in the family of paracompact honeycombs that shares symmetries with it. In other fields Computing Floating-point units in computers often use the IEEE 754 64-bit floating-point format, whose 11-bit exponent field is stored in excess-1023 (biased) form. In this format, also called binary64, the exponent of a floating-point number appears as an unsigned binary integer from 0 to 2047, and subtracting 1023 from it gives the actual signed value. 1023 is the number of dimensions or length of messages of an error-correcting Reed-Muller code made of 64 block codes. Technology The Global Positioning System (GPS) works on a ten-digit binary counter that runs for 1023 weeks, at which point an integer overflow causes its internal value to roll over to zero again. 1023, being 2^10 − 1, is the maximum value that a 10-bit ADC converter can return when measuring the highest voltage in its range. See also The year AD 1023 References Integers
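As a concrete illustration of the exponent bias (added here, not part of the original article), the following Python snippet extracts the 11-bit exponent field from a binary64 value and subtracts 1023 to recover the signed exponent.

import struct

def biased_and_real_exponent(x: float):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]   # raw 64-bit pattern of the double
    biased = (bits >> 52) & 0x7FF                          # 11-bit exponent field, 0..2047
    return biased, biased - 1023

print(biased_and_real_exponent(1.0))    # (1023, 0)
print(biased_and_real_exponent(8.0))    # (1026, 3)
print(biased_and_real_exponent(0.5))    # (1022, -1)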
https://en.wikipedia.org/wiki/Heston%20Blumenthal
Heston Marc Blumenthal (; born 27 May 1966) is a British celebrity chef, TV personality and food writer. Blumenthal is regarded as a pioneer of multi-sensory cooking, food pairing and flavour encapsulation. He came to public attention with unusual recipes, such as bacon-and-egg ice cream and snail porridge. His recipes for triple-cooked chips and soft-centred Scotch eggs have been widely imitated. He has advocated a scientific approach to cooking, for which he has been awarded honorary degrees from the universities of Reading, Bristol and London and made an honorary Fellow of the Royal Society of Chemistry. Blumenthal's public profile has been increased by a number of television series, most notably for Channel 4, as well as a product range for the Waitrose supermarket chain introduced in 2010. He is the proprietor of the Fat Duck in Bray, Berkshire, a three-Michelin-star restaurant which is widely regarded as one of the best in the world. Blumenthal also owns Dinner, a two-Michelin-star restaurant in London, and a pub in Bray, the Hind's Head, with one Michelin star. Early life Heston Marc Blumenthal was born in Shepherd's Bush, London, on 27 May 1966, to a Jewish father born in Southern Rhodesia and an English mother who converted to Judaism. His surname comes from a great-grandfather from Latvia and means 'flowered valley' (or 'bloom-dale'), in German. Blumenthal was raised in Paddington, and attended Latymer Upper School in Hammersmith; St John's Church of England School in Lacey Green, Buckinghamshire; and John Hampden Grammar School, High Wycombe. His interest in cooking began at the age of sixteen on a family holiday to Provence, France, when he was taken to the 3-Michelin-starred restaurant L'Oustau de Baumanière. He was inspired by the quality of the food and "the whole multi-sensory experience: the sound of fountains and cicadas, the heady smell of lavender, the sight of the waiters carving lamb at the table". When he learned to cook, he was influe
https://en.wikipedia.org/wiki/Sequential%20hermaphroditism
Sequential hermaphroditism (called dichogamy in botany) is one of the two types of hermaphroditism, the other type being simultaneous hermaphroditism. It occurs when the organism's sex changes at some point in its life. In particular, a sequential hermaphrodite produces eggs (female gametes) and sperm (male gametes) at different stages in life. Sequential hermaphroditism occurs in many fish, gastropods, and plants. Species that can undergo these changes do so as a normal event within their reproductive cycle, usually cued by either social structure or the achievement of a certain age or size. In some species of fish, sequential hermaphroditism is much more common than simultaneous hermaphroditism. In animals, the different types of change are male to female (protandry or protandrous hermaphroditism), female to male (protogyny or protogynous hermaphroditism), and bidirectional (serial or bidirectional hermaphroditism). Both protogynous and protandrous hermaphroditism allow the organism to switch between functional male and functional female. Bidirectional hermaphrodites have the capacity for sex change in either direction between male and female or female and male, potentially repeatedly during their lifetime. These various types of sequential hermaphroditism may indicate that there is no advantage based on the original sex of an individual organism. Those that change gonadal sex can have both female and male germ cells in the gonads or can change from one complete gonadal type to the other during their last life stage. In plants, individual flowers are called dichogamous if their function has the two sexes separated in time, although the plant as a whole may have functionally male and functionally female flowers open at any one moment. A flower is protogynous if its function is first female, then male, and protandrous if its function is male then female. It used to be thought that this reduced inbreeding, but it may be a more general mechanism for reducing pollen-
https://en.wikipedia.org/wiki/Object%20Process%20Methodology
Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems, specified as ISO/PAS 19450. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. OPM was conceived and developed by Dov Dori. The ideas underlying OPM were published for the first time in 1995. Since then, OPM has evolved and developed. In 2002, the first book on OPM was published, and on December 15, 2015, after six years of work by ISO TC184/SC5, ISO adopted OPM as ISO/PAS 19450. A second book on OPM was published in 2016. Since 2019, OPM has become a foundation for a Professional Certificate program in Model-Based Systems Engineering - MBSE at EdX. Lectures are available as web videos on Youtube. Overview Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. Catering to human cognitive abilities, an OPM model represents the system under design or study bimodally in both graphics and text for improved representation, understanding, communication, and learning. In OPM, an object anything that does or does not exist. Objects are stateful—they may have states, such that at each point in time, the object is at one of its states or in transition between states. A process is a thing that transforms an object by creating or consuming it, or by changing its state. OPM is bimodal; it is expressed both visually/graphically in object-process diagrams (OPD) and verbally/textually in Object-Process Language (OPL), a set of automatically generated sentences in a su
https://en.wikipedia.org/wiki/NV1
The Nvidia NV1, manufactured by SGS-Thomson Microelectronics under the model name STG2000, was a multimedia PCI card announced in May 1995 and released in November 1995. It was sold to retail by Diamond as the Diamond Edge 3D. The NV1 featured a complete 2D/3D graphics core based upon quadratic texture mapping, VRAM or FPM DRAM memory, an integrated 32-channel 350 MIPS playback-only sound card, and a Sega Saturn-compatible joypad port. As such, it was intended to replace the 2D graphics card, Sound Blaster-compatible audio systems, and 15-pin joystick ports, then prevalent on IBM PC compatibles. Putting all of this functionality on a single card led to significant compromises, and the NV1 was not very successful in the market. A modified version, the NV2, was developed in partnership with Sega for the Sega Dreamcast, but ultimately dropped. Nvidia's next stand-alone product, the RIVA 128, focussed entirely on 2D and 3D performance and was much more successful. History Several Sega Saturn games saw NV1-compatible conversions on the PC such as Panzer Dragoon and Virtua Fighter Remix. However, the NV1 struggled in a market place full of several competing proprietary standards, and was marginalized by emerging triangle polygon-based 2D/3D accelerators such as the low-cost S3 Graphics ViRGE, Matrox Mystique, ATI Rage, and Rendition Vérité V1000 among other early entrants. It ultimately did not sell well, despite being a promising and interesting device. NV1's biggest initial problem was its cost and overall quality. Although it offered credible 3D performance, its use of quadratic surfaces was anything but popular, and was quite different than typical polygon rendering. The audio portion of the card received merely acceptable reviews, with the General MIDI receiving lukewarm responses at best (a critical component at the time due to the superior sound quality produced by competing products). The Sega Saturn console was a market failure compared to Sony's PlayStation
https://en.wikipedia.org/wiki/Permabit
Permabit Technology Corporation was a private supplier of Data Reduction solutions to the Computer Data Storage industry. On 31 July 2017 it was announced that Red Hat had acquired the assets and technology of Permabit Technology Corporation. Permabit Albireo The Permabit Albireo family of products are designed with data reduction features. The common component among these products is the Albireo index - a hash datastore. Three products in the Albireo family range from an embedded SDK (offering integration with existing storage) to a ready-to-deploy appliance. Albireo SDK – a software development kit designed to add data deduplication to hardware devices or software applications that benefit from sharing duplicate chunks. Albireo VDO – a drop-in data efficiency solution for Linux architectures. VDO provides fine-grained (4 KB chunk), inline deduplication, thin provisioning, compression and replication. Albireo SANblox – a ready-to-run data efficiency appliance that integrates data deduplication and data compression transparently into Fibre Channel SAN environments. History Permabit was founded as Permabit Inc. in 2000 by a technical and business team from Massachusetts Institute of Technology. The company went through a management buyout in 2007 and a new business entity, Permabit Technology Corporation, was formed at that time. Permabit’s first product, Permabit Enterprise Archive (originally known as Permeon) was a multi-PB scalable, content-addressable, scale-out storage product, first launched in 2004. Enterprise Archive utilized in-house developed technologies in the areas of capacity optimization, WORM, storage management and data protection. In 2010, Permabit launched the Albireo family of products which focus on licensing Permabit data efficiency and management innovations to original equipment manufacturers, software vendors and online service providers. Publicly acknowledged companies that offer Albireo-based solutions include Dell EMC, Hitachi Dat
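The sketch below (illustrative only, and unrelated to Permabit's actual implementation) shows the basic idea behind fixed-size-chunk deduplication of the kind described above: index 4 KiB chunks by a content hash and store each distinct chunk only once. All names are made up.

import hashlib

CHUNK = 4096

def dedupe(data: bytes):
    store = {}                      # hash -> chunk (the "index")
    recipe = []                     # sequence of hashes needed to rebuild the original data
    for offset in range(0, len(data), CHUNK):
        chunk = data[offset:offset + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

data = (b"A" * CHUNK) * 100 + (b"B" * CHUNK) * 100      # highly redundant input
store, recipe = dedupe(data)
print(len(data), "bytes in,", sum(len(c) for c in store.values()), "bytes of unique chunks")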
https://en.wikipedia.org/wiki/Timeout%20%28computing%29
In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including: A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time. A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal. An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal. Timeouts allow limited resources to be used more efficiently without requiring additional interaction from the agent on whose behalf those resources are being consumed. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting will be aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless and some other action is necessary. Examples Specific examples include: In the Microsoft Windows and ReactOS command-line interfaces, the timeout command pauses the command processor for the specified number of seconds. In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever if the client crashes or the network goes down. Open connections consume resources and may prevent other clients from accessing the same mailbox. In HTTP persistent connections, the web server keeps connections open (which consume CPU time and memory). The web client does not have to send an "end of requests series" signal. Connections are closed
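As a rough illustration of this idea, the following Python sketch aborts a blocking network read after a fixed period instead of waiting indefinitely; the host, port, and five-second value are arbitrary choices for the example, not details taken from any particular system.

import socket

# Illustrative sketch: give up on a blocking read after a fixed timeout
# instead of waiting indefinitely. Host, port, and 5 s are arbitrary choices.
HOST, PORT = "example.com", 80

sock = socket.create_connection((HOST, PORT), timeout=5.0)  # connect timeout
sock.settimeout(5.0)  # later sends and receives also time out after 5 seconds
try:
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(4096)  # raises socket.timeout if nothing arrives in time
except socket.timeout:
    reply = b""  # further waiting is assumed useless; take some other action
finally:
    sock.close()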
https://en.wikipedia.org/wiki/RAM%20image
A RAM image is a sequence of machine code instructions and associated data kept permanently in the non-volatile ROM of an embedded system and copied into volatile RAM by a bootstrap loader. Typically the RAM image is loaded into RAM when the system is switched on; it contains a second-level bootstrap loader and basic hardware drivers, enabling the unit to function as desired or allowing more sophisticated software to be loaded into the system.
https://en.wikipedia.org/wiki/PC-UX
PC-UX is a discontinued NEC port of UNIX System III for its APC III and PC-9801 personal computers. It had extensive graphics capability. PC-UX and MS-DOS could reside on the same hard drive. It also had file transfer utilities that allowed files to be transferred between PC-UX and MS-DOS. In 1985, the suggested retail price for PC-UX on the APC III was $700. NEC's subsequent port of UNIX System V was called PC-UX/V. See also Xenix for AT etc.
https://en.wikipedia.org/wiki/Direction%20of%20arrival
In signal processing, direction of arrival (DOA) denotes the direction from which a propagating wave arrives at a point, usually one where a set of sensors is located. This set of sensors forms what is called a sensor array. The associated technique of beamforming estimates the signal arriving from a given direction. Engineering problems addressed in the associated literature include finding the direction of a sound source relative to the array; the way listeners locate the directions of different sound sources around them, using a process similar to those used by the algorithms in the literature; and the way radio telescopes use these techniques to look at a particular location in the sky. Recently, beamforming has also been used in radio frequency (RF) applications such as wireless communication. Compared with spatial diversity techniques, beamforming is preferred in terms of complexity; on the other hand, beamforming in general has much lower data rates. In multiple access channels (code-division multiple access (CDMA), frequency-division multiple access (FDMA), time-division multiple access (TDMA)), beamforming is necessary and sufficient. Various techniques exist for calculating the direction of arrival, such as angle of arrival (AoA), time difference of arrival (TDOA), frequency difference of arrival (FDOA), and other similar associated techniques. Limitations on the accuracy of direction-of-arrival estimation in digital antenna arrays are associated with jitter in the ADCs and DACs. More sophisticated techniques perform joint direction of arrival and time of arrival (ToA) estimation to allow more accurate localization of a node. This also has the merit of localizing more targets with fewer antenna resources. Indeed, it is well known in the array processing community that, generally speaking, a given number of antennas can resolve only a limited number of targets. When JADE (joint angle and delay) estimation is employed, one can go beyond this limit. Typical DOA estimation
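As a rough illustration of one common approach, the following sketch estimates a single direction of arrival with a conventional (delay-and-sum) beamforming scan over a simulated uniform linear array; the array size, element spacing, noise level, and true angle are assumptions made for the example, and many more accurate estimators exist in the literature.

import numpy as np

# Illustrative sketch: conventional (delay-and-sum) beamforming scan for one
# narrowband source on a simulated uniform linear array. All parameters here
# (array size, spacing, noise level, true angle) are assumptions for the example.
M = 8                # number of sensors
spacing = 0.5        # element spacing in wavelengths (assumed half-wavelength)
true_angle = 25.0    # hypothetical true direction of arrival, in degrees
snapshots = 200
rng = np.random.default_rng(0)
m = np.arange(M)

def steering(angle_deg):
    # Phase progression across the array for a plane wave from angle_deg.
    return np.exp(-2j * np.pi * spacing * m * np.sin(np.radians(angle_deg)))

# Simulate array snapshots: one source plus white sensor noise.
signal = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(steering(true_angle), signal) + noise

R = X @ X.conj().T / snapshots  # sample spatial covariance matrix

# Scan candidate angles and report the one with maximum beamformer output power.
angles = np.linspace(-90.0, 90.0, 721)
power = [np.real(steering(a).conj() @ R @ steering(a)) for a in angles]
print("estimated DOA:", angles[int(np.argmax(power))], "degrees")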
https://en.wikipedia.org/wiki/International%20Association%20for%20Quantitative%20Finance
The International Association for Quantitative Finance (IAQF), formerly the International Association of Financial Engineers (IAFE), is a non-profit professional society dedicated to fostering the fields of quantitative finance and financial engineering. The IAQF hosts several panel discussions throughout the year to discuss the issues that affect the industry from both academic and professional angles. Since it was established in 1992, the IAQF has expanded its reach to host events in San Francisco, Toronto, Boston, and London. Fischer Black Memorial Foundation The educational arm of the IAQF is the Fischer Black Memorial Foundation (FBMF). While the IAQF focuses on the profession of financial engineering, the FBMF aims to expose students to the financial engineering field and help them work towards a career in the industry. Financial engineering is often underrepresented on university campuses, and the FBMF tries to bridge the gap between academia and the professional world. The FBMF's main tool is the "How I Became a Quant" event series, which brings professionals to college campuses to tell students about their experiences getting into the field. The FBMF also co-hosts, along with SIAM and New York University, an annual career fair that draws students from across the country to meet with leading hiring companies in the industry. It is one of the few career fairs specifically for financial engineering and is popular with both students and companies. Events Often, these events are evening panels with 3–4 speakers; both practitioners and academics typically sit on these panels. Much of the information presented at these events is available afterward on the IAQF website. Every year, the IAQF honors one member of the financial engineering world with its Financial Engineer of the Year (FEOY) award. The winner is selected through an exhaustive nomination and voting process, and the list of former winners illustrates th