https://en.wikipedia.org/wiki/Harmonic%20balance
|
Harmonic balance is a method used to calculate the steady-state response of nonlinear differential equations, and is mostly applied to nonlinear electrical circuits.
It is a frequency domain method for calculating the steady state, as opposed to the various time-domain steady-state methods. The name "harmonic balance" is descriptive of the method, which starts with Kirchhoff's Current Law written in the frequency domain and a chosen number of harmonics. A sinusoidal signal applied to a nonlinear component in a system will generate harmonics of the fundamental frequency. Effectively the method assumes a linear combination of sinusoids can represent the solution, then balances current and voltage sinusoids to satisfy Kirchhoff's law. The method is commonly used to simulate circuits which include nonlinear elements, and is most applicable to systems with feedback in which limit cycles occur.
Microwave circuits were the original application for harmonic balance methods in electrical engineering. Microwave circuits were well-suited because, historically, microwave circuits consist of many linear components which can be directly represented in the frequency domain, plus a few nonlinear components. System sizes were typically small. For more general circuits, the method was considered impractical for all but these very small circuits until the mid-1990s, when Krylov subspace methods were applied to the problem.
The application of preconditioned Krylov subspace methods allowed much larger systems to be solved, both in the size of the circuit and in the number of harmonics. This made practical the present-day use of harmonic balance methods to analyze radio-frequency integrated circuits (RFICs).
Example
Consider the differential equation $\ddot{x} + x^3 = 0$. We use the ansatz solution $x(t) = A\cos(\omega t)$, and plugging in, we obtain
$$-A\omega^2\cos(\omega t) + A^3\cos^3(\omega t) = -A\omega^2\cos(\omega t) + A^3\left(\tfrac{3}{4}\cos(\omega t) + \tfrac{1}{4}\cos(3\omega t)\right) = 0.$$
Then by matching the $\cos(\omega t)$ terms, we have
$$\omega^2 = \tfrac{3}{4}A^2,$$
which yields approximate period $T = 2\pi/\omega = 4\pi/(\sqrt{3}A) \approx 7.2552/A$.
For a more exact approximation, we use ansatz solution $x(t) = A\cos(\omega t) + B\cos(3\omega t)$. Plugging these in and matching the $\cos(\omega t)$, $\cos(3\omega t)$ ter
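A quick numerical sanity check of the first-order estimate is sketched below, assuming the reconstructed equation above with $A = 1$. The exact period is obtained here by quadrature from energy conservation ($\tfrac{1}{2}\dot{x}^2 + \tfrac{1}{4}x^4 = \tfrac{1}{4}A^4$), a step the text does not take; the substitution $x = A\sin\theta$ removes the endpoint singularity.

```python
# Compare the first-order harmonic-balance period for x'' + x^3 = 0
# with the exact period from energy conservation (amplitude A = 1).
import numpy as np
from scipy.integrate import quad

A = 1.0

# Harmonic balance: omega**2 = (3/4) * A**2, so T = 2*pi/omega.
T_hb = 4 * np.pi / (np.sqrt(3) * A)

# Exact: T = 4 * integral_0^A dx / sqrt((A**4 - x**4)/2); substituting
# x = A*sin(theta) gives a smooth integrand on [0, pi/2].
integrand = lambda theta: np.sqrt(2) / (A * np.sqrt(1 + np.sin(theta) ** 2))
T_exact = 4 * quad(integrand, 0, np.pi / 2)[0]

print(f"harmonic balance: T = {T_hb:.4f}/A")    # 7.2552
print(f"exact quadrature: T = {T_exact:.4f}/A") # 7.4163
```

The one-harmonic estimate is already within about 2% of the exact value.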
|
https://en.wikipedia.org/wiki/Fluent%20%28artificial%20intelligence%29
|
In artificial intelligence, a fluent is a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time. For example, the condition "the box is on the table", if it can change over time, cannot be represented by $on(box, table)$; a third argument is necessary to the predicate to specify the time: $on(box, table, t)$ means that the box is on the table at time $t$. This representation of fluents is modified in the situation calculus by using the sequence of the past actions in place of the current time.
A fluent can also be represented by a function, dropping the time argument. For example, that the box is on the table can be represented by $on(box, table)$, where $on$ is a function and not a predicate. In first-order logic, converting predicates to functions is called reification; for this reason, fluents represented by functions are said to be reified. When using reified fluents, a separate predicate is necessary to tell when a fluent is actually true or not. For example, $HoldsAt(on(box, table), t)$ means that the box is actually on the table at time $t$, where the predicate $HoldsAt$ is the one that tells when fluents are true. This representation of fluents is used in the event calculus, in the fluent calculus, and in the features and fluents logics.
Some fluents can be represented as functions in a different way. For example, the position of a box can be represented by a function $on(box, t)$ whose value is the object the box is standing on at time $t$. Conditions that can be represented in this way are called functional fluents. Statements about the values of such functions can be given in first-order logic with equality using literals such as $on(box, t) = table$. Some fluents are represented this way in the situation calculus.
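The three representations can be made concrete in a short sketch. This is an illustrative toy only: the relation names follow the conventions described above, and all facts are invented.

```python
# Toy versions of the three fluent representations described above.
# All relation and object names are invented for the example.

# 1. Predicate with an explicit time argument: on(box, table, t).
on_facts = {("box", "table", 0), ("box", "table", 1)}
def on(x, y, t):
    return (x, y, t) in on_facts

# 2. Reified fluent: on(box, table) is a term (here a tuple), and a
#    separate HoldsAt predicate says at which times it is true.
holds = {(("on", "box", "table"), 0), (("on", "box", "table"), 1)}
def holds_at(fluent, t):
    return (fluent, t) in holds

# 3. Functional fluent: location(box, t) is a function whose value is
#    the object the box is standing on at time t.
location_facts = {("box", 0): "table", ("box", 1): "table", ("box", 2): "floor"}
def location(x, t):
    return location_facts.get((x, t))

print(on("box", "table", 0))                # True
print(holds_at(("on", "box", "table"), 1))  # True
print(location("box", 2))                   # 'floor'
```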
Naive physics
From a historical point of view, fluents were introduced in the context of qualitative reasoning. The idea is to describe a process model not with mathematical equations but with natural language. That means an action is
|
https://en.wikipedia.org/wiki/Event%20calculus
|
The event calculus is a logical language for representing and reasoning about events and their effects first presented by Robert Kowalski and Marek Sergot in 1986. It was extended by Murray Shanahan and Rob Miller in the 1990s. Similar to other languages for reasoning about change, the event calculus represents the effects of actions on fluents. However, events can also be external to the system. In the event calculus, one can specify the value of fluents at some given time points, the events that take place at given time points, and their effects.
Fluents and events
In the event calculus, fluents are reified. This means that they are not formalized by means of predicates but by means of functions. A separate predicate $HoldsAt$ is used to tell which fluents hold at a given time point. For example, $HoldsAt(on(box, table), t)$ means that the box is on the table at time $t$; in this formula, $HoldsAt$ is a predicate while $on$ is a function.
Events are also represented as terms. The effects of events are given using the predicates $Initiates$ and $Terminates$. In particular, $Initiates(e, f, t)$ means that, if the event represented by the term $e$ is executed at time $t$, then the fluent $f$ will be true after $t$. The $Terminates$ predicate has a similar meaning, with the only difference being that $f$ will be false after $t$.
Domain-independent axioms
Like other languages for representing actions, the event calculus formalizes the correct evolution of the fluent via formulae telling the value of each fluent after an arbitrary action has been performed. The event calculus solves the frame problem in a way that is similar to the successor state axioms of the situation calculus: a fluent is true at time $t$ if and only if it has been made true in the past and has not been made false in the meantime:
$$HoldsAt(f, t) \leftarrow Happens(e, t_1) \wedge Initiates(e, f, t_1) \wedge (t_1 < t) \wedge \neg Clipped(f, t_1, t)$$
This formula means that the fluent represented by the term $f$ is true at time $t$ if:
an event $e$ has taken place: $Happens(e, t_1)$;
this took place in the past: $t_1 < t$;
this event has the fluent $f$ as an effect: $Initiates(e, f, t_1)$;
the fluent has not been made false in the meantime: $\neg Clipped(f, t_1, t)$.
A similar formula is used to formalize the opp
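The axiom above can be model-checked over a finite narrative in a few lines. The sketch below is a simplification under assumed integer time points; the Happens, Initiates, Terminates, and Clipped names follow the standard event-calculus predicates, while the door example is invented.

```python
# Model-checking the domain-independent axiom above over a finite
# narrative.  Integer time points are a simplification of our own.

happens = {("open", 1), ("close", 3)}          # Happens(e, t)
initiates = {("open", "door_open")}            # Initiates(e, f, _)
terminates = {("close", "door_open")}          # Terminates(e, f, _)

def clipped(f, t1, t2):
    """Clipped(f, t1, t2): some event strictly between t1 and t2 terminates f."""
    return any(t1 < t < t2 and (e, f) in terminates for (e, t) in happens)

def holds_at(f, t):
    """HoldsAt(f, t): f was initiated in the past and not clipped since."""
    return any(t1 < t and (e, f) in initiates and not clipped(f, t1, t)
               for (e, t1) in happens)

print(holds_at("door_open", 2))  # True: opened at 1, not yet clipped
print(holds_at("door_open", 4))  # False: the close event at 3 clips it
```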
|
https://en.wikipedia.org/wiki/Embedded%20value
|
The Embedded Value (EV) of a life insurance company is the present value of future profits plus adjusted net asset value. It is a construct from the field of actuarial science which allows insurance companies to be valued.
Background
Life insurance policies are long-term contracts, where the policyholder pays a premium to be covered against a possible future event (such as the death of the policyholder).
Future income for the insurer consists of premiums paid by policyholders whilst future outgoings comprise claims paid to policyholders as well as various expenses. The difference, combined with income on and release of statutory reserves, represents future profit.
Net asset value is the difference between the total assets and liabilities of an insurance company.
For companies, the net asset value is usually calculated at book value. This needs to be adjusted to market values for EV purposes. Furthermore, this value may be discounted to reflect the "lock in" of some of the assets by their nature. (An example of such a lock-in would be assets held within the with-profits fund.)
Value of the insurer
EV measures the value of the insurer by adding today's value of the existing business (i.e. future profits) to the market value of net assets (i.e. accumulated past profits).
It is a conservative measure of the insurer's value in the sense that it only considers future profits from existing policies and so ignores the possibility that the insurer may sell new policies in future. It also excludes goodwill. As a result, the insurer is worth more than its EV.
Formula
Embedded Value is calculated as follows:
EV = PVFP + ANAV
where
EV = Embedded Value
PVFP = present value of future profits
ANAV = adjusted net asset value
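A toy calculation makes the formula concrete. All figures below (the profit stream, risk discount rate, asset values, and lock-in haircut) are invented for illustration; real EV work rests on full actuarial projections.

```python
# Toy embedded-value calculation following EV = PVFP + ANAV.
# Every number here is invented for illustration.

annual_profits = [12.0, 11.0, 9.5, 8.0, 6.0]   # expected future profits, $m
discount_rate = 0.08                            # assumed risk discount rate

# Present value of future profits from the existing book of policies.
pvfp = sum(p / (1 + discount_rate) ** (t + 1)
           for t, p in enumerate(annual_profits))

# Net assets at market value, discounted for "locked in" assets.
net_assets_market = 40.0
lock_in_haircut = 0.95                          # assumed adjustment
anav = net_assets_market * lock_in_haircut

ev = pvfp + anav
print(f"PVFP = {pvfp:.2f}, ANAV = {anav:.2f}, EV = {ev:.2f}")
```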
Improvements
European embedded value (EEV) is a variation of EV which was set up by the CFO Forum which allows for a more formalised method of choosing the parameters and doing the calculations, to enable greater transparency and comparability.
Market
|
https://en.wikipedia.org/wiki/Plastochron
|
As the tip of a plant shoot grows, new leaves are produced at regular time intervals if temperature is held constant. This time interval is termed the plastochron (or plastochrone). The plastochrone index and the leaf plastochron index are ways of measuring the age of a plant dependent on morphological traits rather than on chronological age. Use of these indices removes differences caused by germination, developmental differences and exponential growth.
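One common formulation of the plastochron index, usually attributed to Erickson and Michelini and not spelled out in the text above, assumes leaf lengths grow exponentially and interpolates between the two leaves that straddle a reference length; a sketch:

```python
# Plastochron index in the Erickson-Michelini style formulation (an
# assumption of this sketch, not given in the text): exponential leaf
# growth between successive leaf initiations.
import math

def plastochron_index(lengths_mm, reference_mm=10.0):
    """lengths_mm: leaf lengths ordered oldest -> youngest, in mm.

    Returns n + (ln L_n - ln ref) / (ln L_n - ln L_{n+1}), where leaf n
    is the youngest leaf whose length is still >= the reference length.
    """
    for i in range(len(lengths_mm) - 1):
        L_n, L_next = lengths_mm[i], lengths_mm[i + 1]
        if L_n >= reference_mm > L_next:
            serial = i + 1  # leaves are conventionally numbered from 1
            return serial + ((math.log(L_n) - math.log(reference_mm))
                             / (math.log(L_n) - math.log(L_next)))
    raise ValueError("no leaf pair straddles the reference length")

# Example: a plant whose leaves measure 42, 25, 14, 6 and 2 mm.
print(round(plastochron_index([42, 25, 14, 6, 2]), 2))  # ~3.40
```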
Definitions
The spatial pattern of the arrangement of leaves is called phyllotaxy whereas the time between successive leaf initiation events is called the plastochron and the rate of emergence from the apical bud is the phyllochron.
Plastochron ratio
In 1951, F. J. Richards introduced the idea of the plastochron ratio and developed a system of equations to describe mathematically a centric representation using three parameters: plastochron ratio, divergence angle, and the angle of the cone tangential to the apex in the area being considered.
Emerging phyllodes or leaf variants experience a sudden change from a high humidity environment to a more arid one. There are other changes they encounter such as variations in light level, photoperiod and the gaseous content of the air.
References
Botany
|
https://en.wikipedia.org/wiki/List%20of%20the%20largest%20software%20companies
|
Many lists exist that provide an overview of large software companies, often called "independent software vendors" ("ISVs"), in the world. The lists differ by methodology of composition and consequently show substantial differences in both the listed companies and the ranking of those companies.
Legend
Forbes Global 2000
The Forbes Global 2000 is an annual ranking of the top 2000 public companies in the world by Forbes magazine, based on a mix of four metrics: sales, profit, assets and market value. The Forbes list for software companies includes only pure play (or nearly pure play) software companies and excludes manufacturers, consumer electronics companies, conglomerates, IT consulting firms, and computer services companies even if they have large software divisions.
The top 50 companies in terms of market capitalization in the 2019 Forbes list for the "Software & Programming" industry are listed in the following table:
All values listed in the table are in billion US$.
See also
List of largest technology companies by revenue
List of largest manufacturing companies by revenue
List of largest United States–based employers globally
List of largest employers
Economy of the United States
References
External links
Lists of information technology companies
Lists of companies by revenue
Economy-related lists of superlatives
|
https://en.wikipedia.org/wiki/User%20Location%20Service
|
In computing, User Location Service was a standards-based protocol for directory services and presence information, first submitted as a draft to the IETF in February 1996.
Client software supporting ULS included early versions of Microsoft NetMeeting, Intel Video Phone and FreeWebFone. NetMeeting had deprecated ULS in favour of Internet Locator Service by 1997, and FreeWebFone no longer exists.
A ULS server provides directory services and presence lookup for clients. At one stage, public ULS servers were made available by Microsoft and others, but these have largely been abandoned.
ULS typically runs on TCP port 522.
See also
Internet Locator Service
LDAP
External links
Microsoft Technet: Netmeeting
Freewebfone User Location Server
Microsoft NetMeeting Overview
ULS Internet-Draft submitted to the IETF by Microsoft in 1996
Network protocols
|
https://en.wikipedia.org/wiki/Minichromosome
|
A minichromosome is a small chromatin-like structure resembling a chromosome and consisting of centromeres, telomeres and replication origins but little additional genetic material. They replicate autonomously in the cell during cellular division. Minichromosomes may be created by natural processes as chromosomal aberrations or by genetic engineering.
Structure
Minichromosomes can be either linear or circular pieces of DNA. By minimizing the amount of unnecessary genetic information on the chromosome and including the basic components necessary for DNA replication (centromere, telomeres, and replication sequences), molecular biologists aim to construct a chromosomal platform which can be utilized to insert or present new genes into a host cell.
Production
Producing minichromosomes by genetic engineering techniques involves two primary methods, the de novo (bottom-up) and the top-down approach.
De novo
The minimum constituent parts of a chromosome (centromere, telomeres, and DNA replication sequences) are assembled by using molecular cloning techniques to construct the desired chromosomal contents in vitro. Next, the desired contents of the minichromosome must be transformed into a host (typically yeast or mammalian cells) which is capable of assembling the components into a functional chromosome. This approach has been attempted for the introduction of minichromosomes into maize for the possibility of genetic engineering, but success has been limited and questionable. In general, the de novo approach is more difficult than the top-down method due to species incompatibility issues and the heterochromatic nature of centromeric regions.
Top-down
This method utilizes the mechanism of telomere-mediated chromosomal truncation (TMCT). This process is the generation of truncation by selective transformation of telomeric sequences into a host genome. This insertion causes the generation of more telomeric sequences and eventual truncation. The newly synthesized trunca
|
https://en.wikipedia.org/wiki/Disk%20staging
|
Disk staging is using disks as an additional, temporary stage of the backup process before finally storing a backup to tape. Backups stay on disk typically for a day or a week, before being copied to tape in a background process and deleted afterwards.
The process of disk staging is controlled by the same software that performs actual backups, which is different from virtual tape library where intermediate disk usage is hidden from main backup software. Both techniques are known as D2D2T (disk-to-disk-to-tape).
Restoring data
Data is restored from disk if possible. But if the data exists only on tape it is restored directly (no backward-staging on restore).
Reasons
Reasons behind using D2D2T:
increase performance of small, random-access restores: disk has much faster random access than tape
increase overall backup/restore performance: although disk and tape have similar streaming throughput, disk throughput can easily be scaled by means of striping (tape-striping is a much less established technique)
increase utilization of tape drives: tape shoe-shining effect is eliminated when staging (note that it may still happen on tape restores)
See also
Backup
Virtual tape library
References
Backup
|
https://en.wikipedia.org/wiki/Ring%20King
|
Ring King, known as King of Boxer in Japan and Europe, is an arcade boxing game. It was published in 1985 by Woodplace in Japan and Europe, and by Data East in North America.
Gameplay
The game continues the series' theme of comical sports as the player takes the role of a boxer who makes his way from his debut to become a world champion. Ring King is, though perhaps unintentionally, typical of the boxing games of its era in providing quirky monikers for the opponents the player encounters; in its arcade release, these number eight: Violence Jo (this entry-level fighter is the champion in the NES version), Brown Pants, White Wolf, Bomba Vern, Beat Brown, Blue Warker (reigning champion in the arcade version), Green Hante and Onetta Yank. Assuming the player wins the championship, arcade play continues by cycling through only the last three of those listed (Blue Warker, Green Hante, Onetta Yank).
The player can choose from several different types of punches and defensive maneuvers, along with unique special attacks. The player revives their stamina during the round interval by pressing the button rapidly. In the Nintendo port, the boxer's abilities are determined by three different stats: punch, stamina, and speed. The player can improve these stats using the power points gained after each match. Performing well in matches allows the player to create more powerful boxers. The player can save their game progress by recording a password, and two players can face off against each other in the two-player mode. Though the game is rudimentary, it is possible to counter-punch, and missing with too many punches causes the boxer's stamina to decrease.
Special attacks
The biggest characteristic of the game is the comical set of special attacks. These moves are activated when the player presses the attack button at the right timing and at the right distance. The attacks have the capability to instantly knock out the opponent, but being countered before a special attack causes an
|
https://en.wikipedia.org/wiki/Total%20body%20irradiation
|
Total body irradiation (TBI) is a form of radiotherapy used primarily as part of the preparative regimen for haematopoietic stem cell (or bone marrow) transplantation. As the name implies, TBI involves irradiation of the entire body, though in modern practice the lungs are often partially shielded to lower the risk of radiation-induced lung injury. Total body irradiation in the setting of bone marrow transplantation serves to destroy or suppress the recipient's immune system, preventing immunologic rejection of transplanted donor bone marrow or blood stem cells. Additionally, high doses of total body irradiation can eradicate residual cancer cells in the transplant recipient, increasing the likelihood that the transplant will be successful.
Dosage
Doses of total body irradiation used in bone marrow transplantation typically range from 10 to more than 12 Gy. For reference, an unfractionated (i.e. single exposure) dose of 4.5 Gy is fatal in 50% of exposed individuals without aggressive medical care. The 10–12 Gy is typically delivered across multiple fractions to minimise toxicity to the patient.
Early research in bone marrow transplantation by E. Donnall Thomas and colleagues demonstrated that this process of splitting TBI into multiple smaller doses resulted in lower toxicity and better outcomes than delivering a single, large dose. The time interval between fractions allows other normal tissues some time to repair some of the damage caused. However, the dosing is still high enough that the ultimate result is the destruction of both the patient's bone marrow (allowing donor marrow to engraft) and any residual cancer cells. Non-myeloablative bone marrow transplantation uses lower doses of total body irradiation, typically about 2 Gy, which do not destroy the host bone marrow but do suppress the host immune system sufficiently to promote donor engraftment.
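The benefit of fractionation can be made quantitative with the linear-quadratic model of radiobiology, which the article does not mention; the biologically effective dose (BED) formula and the alpha/beta value below are generic textbook assumptions, not taken from the text.

```python
# Biologically effective dose (BED) under the linear-quadratic model,
# illustrating why 12 Gy in 6 fractions spares late-responding normal
# tissue relative to a single 12 Gy exposure.  Model and alpha/beta are
# standard textbook assumptions, not taken from the article.

def bed(total_dose_gy, n_fractions, alpha_beta_gy):
    d = total_dose_gy / n_fractions            # dose per fraction
    return total_dose_gy * (1 + d / alpha_beta_gy)

AB_LATE = 3.0   # typical alpha/beta for late-responding normal tissue
for n in (1, 6):
    print(f"{n} fraction(s): BED = {bed(12.0, n, AB_LATE):.1f} Gy")
# 1 fraction : BED = 60.0 Gy  -> far more late normal-tissue damage
# 6 fractions: BED = 20.0 Gy
```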
Usage in other cancers
In addition to its use in bone marrow transplantation, total body irradiation has been explore
|
https://en.wikipedia.org/wiki/Research%20Unix
|
The term "Research Unix" refers to early versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Sciences Research Center (CSRC).
History
The term Research Unix first appeared in the Bell System Technical Journal (Vol. 57, No. 6, Pt. 2, Jul/Aug 1978) to distinguish it from other versions internal to Bell Labs (such as PWB/UNIX and MERT) whose code-base had diverged from the primary CSRC version. However, the term was little used until Version 8 Unix; it has since been applied retroactively to earlier versions as well. Prior to V8, the operating system was most commonly called simply UNIX (in caps) or the UNIX Time-Sharing System.
AT&T licensed Version 5 to educational institutions, and Version 6 also to commercial sites. Schools paid $200 and others $20,000, discouraging most commercial use, but Version 6 was the most widely used version into the 1980s. Research Unix versions are often referred to by the edition of the manual that describes them, because early versions and the last few were never officially released outside of Bell Labs, and grew organically. So, the first Research Unix would be the First Edition, and the last the Tenth Edition. Another common way of referring to them is as "Version x Unix" or "Vx Unix", where x is the manual edition. All modern editions of Unix—excepting Unix-like implementations such as Coherent, Minix, and Linux—derive from the 7th Edition.
Starting with the 8th Edition, versions of Research Unix had a close relationship to BSD. This began by using 4.1cBSD as the basis for the 8th Edition. In a Usenet post from 2000, Dennis Ritchie described these later versions of Research Unix as being closer to BSD than they were to UNIX System V, which also included some BSD code:
Versions
Legacy
In 2002, Caldera International released Unix V1, V2, V3, V4, V5, V6, V7 on PDP-11 and Unix 32V on VAX as FOSS under a permissive BSD-like software license.
In 20
|
https://en.wikipedia.org/wiki/Interdata%207/32%20and%208/32
|
The Model 7/32 and Model 8/32 were 32-bit minicomputers introduced by Perkin-Elmer after they acquired Interdata, Inc., in 1973. Interdata computers are primarily remembered for being the first 32-bit minicomputers under $10,000. The 8/32 was a more powerful machine than the 7/32, with the notable feature of allowing user-programmable microcode to be employed.
The Model 7/32 provided fullword data processing power and direct memory addressing up to 1 million bytes through the use of 32-bit general registers and a comprehensive instruction set.
Background
After the commercial success of the microcoded, mainframe IBM 360-series of computers, startup companies arrived on the scene to scale microcode technology to the smaller minicomputers. Among these companies were Prime Computer, Microdata, and Interdata. Interdata used microcode to define an architecture that was heavily influenced by the IBM 360 instruction set. The DOS-type real-time serial/multitasking operating system was called OS/32.
Differences between the 7/32 and 8/32
General register sets – The 7/32 has 2 sets while the 8/32 can have either 2 or 8.
I/O priority levels – The 7/32 has none but the 8/32 can have up to 3.
Writeable control store – The 7/32 does not have one and the 8/32 does.
On average the 8/32 is 2.5x faster than the 7/32.
Usage
The 7/32 and 8/32 became the computers of choice in large scale embedded systems, such as FFT machines used in real-time seismic analysis, CAT scanners, and flight simulator systems. They were also often used as non-IBM peripherals in IBM networks, serving the role of HASP workstations and spooling systems, so called RJE (Remote Job Entry) stations. For example, the computers behind the first Space Shuttle simulator consisted of thirty-six 32-bit minis inputting and/or outputting data to networked mainframe computers (both IBM and Univac), all in real-time.
The 8/32 was used in the Lunar and Planetary Laboratory, Department of Planetary Sciences at the Univers
|
https://en.wikipedia.org/wiki/Resource%20construction%20set
|
The resource construction set (GEM RCS) is a GUI builder for GEM applications. It was written by Digital Research.
RCS was widely used on the Atari ST, Atari STe, Atari TT, Atari MEGA ST, Atari MEGA STE and Atari Falcon platforms.
Example
Files of the Atari Development Kit include resource (.RSC) files; the runtime binary of one such file is shown in the following hex dump:
0000: 000000E2 00E200E2 00E20000 002400E1 ...â.â.â.â...$.á
0010: 000002AA 00130003 00000000 00000000 ...ª............
0020: 000002B6 20446573 6B200020 46696C65 ...¶ Desk . File
0030: 20002020 43726169 6773204D 656E7500 . Craigs Menu.
0040: 2D2D2D2D 2D2D2D2D 2D2D2D2D 2D2D2D2D ----------------
0050: 2D2D2D2D 00202044 65736B20 41636365 ----. Desk Acce
0060: 73736F72 79203120 20002020 4465736B ssory 1 . Desk
0070: 20416363 6573736F 72792032 20200020 Accessory 2 .
0080: 20446573 6B204163 63657373 6F727920 Desk Accessory
0090: 33202000 20204465 736B2041 63636573 3 . Desk Acces
00A0: 736F7279 20342020 00202044 65736B20 sory 4 . Desk
00B0: 41636365 73736F72 79203520 20002020 Accessory 5 .
00C0: 4465736B 20416363 6573736F 72792036 Desk Accessory 6
00D0: 20200020 20517569 74202020 20202020 . Quit
00E0: 0000FFFF 00010005 00190000 00000000 ..ÿÿ............
00F0: 00000000 00000050 00190005 00020002 .......P........
0100: 00140000 00000000 11000000 00000050 ...............P
0110: 02010001 00030004 00190000 00000000 ................
0120: 00000002 0000000C 03010004 FFFFFFFF ............ÿÿÿÿ
0130: 00200000 00000000 00240000 00000006 . .......$......
0140: 03010002 FFFFFFFF 00200000 00000000 ....ÿÿÿÿ. ......
0150: 002B0006 00000006 03010000 0006000F .+..............
0160: 00190000 00000000 00000000 03010050 ...............P
0170: 0013000F 0007000E 00140000 000000FF ...............ÿ
0180: 11000002 00000014 00080008 FFFFFFFF ............ÿÿÿÿ
0190: 001C0000 00000000 00320000 00000014 .........2......
01A0: 00010009 FFFFFFFF 001C0000 00080000 ....ÿÿÿÿ........
01B0: 00400000 00010014 0001000A FFFFFFFF .@..........ÿÿÿÿ
01C0: 001C00
|
https://en.wikipedia.org/wiki/Formula%20game
|
A formula game is an artificial game represented by a fully quantified Boolean formula. Players' turns alternate and the space of possible moves is denoted by bound variables. If a variable is universally quantified, the formula following it has the same truth value as the formula beginning with the universal quantifier regardless of the move taken; if a variable is existentially quantified, the formula following it has the same truth value as the formula beginning with the existential quantifier for at least one move available at the turn. A player loses if he cannot move at his turn. In computational complexity theory, the language FORMULA-GAME is defined as the set of all fully quantified Boolean formulas $\phi$ such that Player 1 has a winning strategy in the game represented by $\phi$. FORMULA-GAME is PSPACE-complete.
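A brute-force decision procedure follows the definition directly. The encoding below (a quantifier prefix plus a Python predicate for the matrix) is our own choice of representation.

```python
# Brute-force decision of FORMULA-GAME: Player 1 (the existential player)
# has a winning strategy iff the fully quantified formula is true.

def player1_wins(prefix, matrix, assignment=None):
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    outcomes = (player1_wins(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    # An existential variable is Player 1's move, a universal is Player 2's.
    return any(outcomes) if quant == "exists" else all(outcomes)

# Exists x . Forall y . Exists z . (x or y) and ((not y) or z) is true,
# so Player 1 wins this game.
prefix = [("exists", "x"), ("forall", "y"), ("exists", "z")]
matrix = lambda a: (a["x"] or a["y"]) and (not a["y"] or a["z"])
print(player1_wins(prefix, matrix))  # True
```

The recursion depth equals the number of variables, so the evaluator uses space linear in the formula, consistent with membership in PSPACE.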
References
Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology.
Satisfiability problems
Boolean algebra
PSPACE-complete problems
|
https://en.wikipedia.org/wiki/%E2%86%93
|
The arrow symbol ↓ may refer to:
The downward direction, a relative direction
The keyboard cursor control key, an arrow key
A downwards arrow, a Unicode arrow symbol
Logical NOR, operator which produces a result that is the negation of logical OR
An undefined object, in mathematical well-definition
A comma category, in category theory
Down (game theory), a mathematical game
The ingressive sound, in phonetics
An APL function
"Decreased" (and similar meanings), in medical notation
The precipitation of an insoluble solid, in chemical notation
See also
Down sign (disambiguation)
Arrow (disambiguation)
↑ (disambiguation)
→ (disambiguation)
← (disambiguation)
Logic symbols
|
https://en.wikipedia.org/wiki/Nadir
|
The nadir is the direction pointing directly below a particular location; that is, it is one of two vertical directions at a specified location, orthogonal to a horizontal flat surface.
The direction opposite of the nadir is the zenith.
Definitions
Space science
Since the concept of being below is itself somewhat vague, scientists define the nadir in more rigorous terms. Specifically, in astronomy, geophysics and related sciences (e.g., meteorology), the nadir at a given point is the local vertical direction pointing in the direction of the force of gravity at that location.
The term can also be used to represent the lowest point that a celestial object reaches along its apparent daily path around a given point of observation (i.e. the object's lower culmination). This can be used to describe the position of the Sun, but it is only technically accurate for one latitude at a time and only possible at low latitudes. The Sun is said to be at the nadir at a location when it is at the zenith at the location's antipode and is 90° below the horizon.
Nadir also refers to the downward-facing viewing geometry of an orbiting satellite, such as is employed during remote sensing of the atmosphere, as well as when an astronaut faces the Earth while performing a spacewalk. A nadir image is a satellite image or aerial photo of the Earth taken vertically. A satellite ground track represents its orbit projected to nadir on to Earth's surface.
Medicine
Generally in medicine, nadir is used to indicate the progression to the lowest point of a clinical symptom (e.g. fever patterns) or a laboratory count. In oncology, the term nadir is used to represent the lowest level of a blood cell count while a patient is undergoing chemotherapy. The neutropenic nadir after chemotherapy typically lasts 7–10 days.
Figurative usage
The word is also used figuratively to mean a low point, such as with a person's spirits, the quality of an activity or profession, or the nadir of Amer
|
https://en.wikipedia.org/wiki/Upper%20set
|
In mathematics, an upper set (also called an upward closed set, an upset, or an isotone set in $X$) of a partially ordered set $X$ is a subset $S \subseteq X$ with the following property: if $s$ is in $S$ and if $x$ in $X$ is larger than $s$ (that is, if $s \leq x$), then $x$ is in $S$. In other words, this means that any element $x$ of $X$ that is greater than or equal to some element of $S$ is necessarily also an element of $S$.
The term lower set (also called a downward closed set, down set, decreasing set, initial segment, or semi-ideal) is defined similarly as being a subset $S$ of $X$ with the property that any element $x$ of $X$ that is less than or equal to some element of $S$ is necessarily also an element of $S$.
Definition
Let $(X, \leq)$ be a preordered set.
An upper set in $X$ (also called an upward closed set, an upset, or an isotone set) is a subset $U \subseteq X$ that is "closed under going up", in the sense that
for all $u \in U$ and all $x \in X$, if $u \leq x$ then $x \in U$.
The dual notion is a lower set (also called a downward closed set, down set, decreasing set, initial segment, or semi-ideal), which is a subset $L \subseteq X$ that is "closed under going down", in the sense that
for all $l \in L$ and all $x \in X$, if $x \leq l$ then $x \in L$.
The terms order ideal or ideal are sometimes used as synonyms for lower set. This choice of terminology fails to reflect the notion of an ideal of a lattice because a lower set of a lattice is not necessarily a sublattice.
Properties
Every partially ordered set is an upper set of itself.
The intersection and the union of any family of upper sets are again upper sets.
The complement of any upper set is a lower set, and vice versa.
Given a partially ordered set $(X, \leq)$, the family of upper sets of $X$ ordered with the inclusion relation is a complete lattice, the upper set lattice.
Given an arbitrary subset $Y$ of a partially ordered set $X$, the smallest upper set containing $Y$ is denoted using an up arrow as $\uparrow Y$ (see upper closure and lower closure).
Dually, the smallest lower set containing $Y$ is denoted using a down arrow as $\downarrow Y$.
A lower set is called principal if it is of the form $\downarrow \{x\}$ where $x$ is an element of $X$.
Every lower set $Y$ of a finite partially ordered set $X$ is equal to the smallest lower set containing all maximal elements of $Y$: $Y = \downarrow \operatorname{Max}(Y)$,
where $\operatorname{Max}(Y)$ denotes the set c
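These definitions are easy to exercise on a small finite poset. The sketch below uses divisibility on {1, ..., 12} (our own choice of example) to compute upper and lower closures and to check the maximal-elements property just stated.

```python
# Upper and lower closures in a finite poset: divisibility on 1..12.
# A minimal sketch of the up-arrow / down-arrow notation defined above.

X = range(1, 13)
leq = lambda a, b: b % a == 0          # a <= b  iff  a divides b

def up(Y):    # smallest upper set containing Y
    return {x for x in X if any(leq(y, x) for y in Y)}

def down(Y):  # smallest lower set containing Y
    return {x for x in X if any(leq(x, y) for y in Y)}

Y = {2, 3}
print(sorted(up(Y)))    # multiples of 2 or 3: [2, 3, 4, 6, 8, 9, 10, 12]
print(sorted(down(Y)))  # divisors of 2 or 3: [1, 2, 3]

# A finite lower set equals the lower closure of its maximal elements:
L = down({4, 6})
maximal = {a for a in L if not any(a != b and leq(a, b) for b in L)}
print(L == down(maximal))  # True
```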
|
https://en.wikipedia.org/wiki/LliureX
|
LliureX is a project of the Generalitat Valenciana with the goal of introducing new ICTs based on free software in the Valencian Community education system.
It is a Linux distribution that is used on over 110,000 PCs in schools in the Valencia region.
Originally it was based on Debian but since version 7.09 it is based on Ubuntu and since version 19 on KDE neon.
Awards
LliureX was awarded the Open Awards 2019 at the OpenExpo conference for its innovation in the field of education.
References
External links
Official site (in Valencian and Spanish)
Educational operating systems
Spanish-language Linux distributions
Ubuntu derivatives
KDE
Valencian Community
Linux distributions
|
https://en.wikipedia.org/wiki/FlexRay
|
FlexRay is an automotive network communications protocol developed by the FlexRay Consortium to govern on-board automotive computing. It is designed to be faster and more reliable than CAN and TTP, but it is also more expensive. The FlexRay consortium disbanded in 2009, but the FlexRay standard is now a set of ISO standards, ISO 17458-1 to 17458-5.
FlexRay is a communication bus designed to ensure high data rates and fault tolerance. It operates on a time cycle split into static and dynamic segments, for time-triggered and event-triggered communications respectively.
Features
FlexRay supports data rates up to 10 Mbit/s, explicitly supports both star and bus physical topologies, and can have two independent data channels for fault-tolerance (communication can continue with reduced bandwidth if one channel is inoperative). The bus operates on a time cycle, divided into two parts: the static segment and the dynamic segment. The static segment is preallocated into slices for individual communication types, providing stronger determinism than its predecessor CAN. The dynamic segment operates more like CAN, with nodes taking control of the bus as available, allowing event-triggered behavior.
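A toy model of one communication cycle illustrates the two segments. The slot counts, frame IDs, and payloads below are invented, and real FlexRay arbitration has many more parameters; the sketch only captures that static slots are pre-assigned while dynamic frames are sent in frame-ID priority order, with each unused ID consuming a minislot.

```python
# Toy model of one FlexRay communication cycle: pre-assigned static
# slots, then a dynamic segment arbitrated by frame ID via minislots.
# All slot counts, frame IDs, and payloads are invented.

STATIC_SLOTS = {1: "brake-by-wire", 2: "adaptive damping", 3: "steering"}
DYNAMIC_MINISLOTS = 6     # length of the dynamic segment, in minislots
MAX_DYNAMIC_ID = 10       # highest frame ID arbitrated per cycle
FRAME_LEN = 2             # minislots consumed by an actual transmission

def run_cycle(pending):
    """pending: {frame_id: payload} of event-triggered frames to send."""
    log = [f"static slot {s}: {m}" for s, m in sorted(STATIC_SLOTS.items())]
    counter = 1
    for frame_id in range(1, MAX_DYNAMIC_ID + 1):
        if counter > DYNAMIC_MINISLOTS:   # segment exhausted:
            break                         # lower-priority frames wait
        if frame_id in pending:
            log.append(f"minislot {counter}: frame {frame_id} "
                       f"({pending[frame_id]})")
            counter += FRAME_LEN          # a real frame spans several slots
        else:
            counter += 1                  # an unused ID costs one minislot
    return log

# Frame 9 does not fit this cycle and must wait for the next one.
print("\n".join(run_cycle({2: "door event", 5: "diagnostics", 9: "media"})))
```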
Consortium
The FlexRay Consortium was made up of the following core members:
Freescale Semiconductor
Bosch
NXP Semiconductors
BMW
Volkswagen
Daimler
General Motors
There were also Premium Associate and Associate members of FlexRay consortium. By September 2009, there were 28 premium associate members and more than 60 associate members. At the end of 2009, the consortium disbanded.
Commercial deployment
The first series-production vehicle with FlexRay, at the end of 2006, was the BMW X5 (E70), which used it to enable a new and fast adaptive damping system. Full use of FlexRay was introduced in 2008 in the new BMW 7 Series (F01).
Vehicles
Audi A4 (B9) (2015–)
Audi A5 (F5) (2016–)
Audi A6 (C7) (2011-2018)
Audi A7
Audi A8 (D4) (2010–2017)
Audi Q7 (2015-)
Audi TT Mk3 (2014–)
Audi R8 (2015–)
Bentley Flying Spur (201
|
https://en.wikipedia.org/wiki/Flour%20bleaching%20agent
|
A flour bleaching agent is an additive applied to freshly milled grain to whiten the flour by removing its yellow pigment, xanthophyll. The resulting whitened flour is widely used in the baking industry.
Overview
Usual flour bleaching agents are:
Organic peroxides (benzoyl peroxide)
Calcium peroxide
Chlorine
Chlorine dioxide
Azodicarbonamide
Nitrogen dioxide
Atmospheric oxygen, used during natural aging of flour
Use of chlorine, bromates, and peroxides is not allowed in the European Union.
Bleached flour improves the structure-forming capacity, allowing the use of dough formulas with lower proportions of flour and higher proportions of sugar. In biscuit making, use of chlorinated flour reduces the spread of the dough, and provides a "tighter" surface. The changes in the functional properties of the flour proteins are likely to be caused by their oxidation.
In countries where bleached flour is prohibited, microwaving plain flour produces similar chemical changes to the bleaching process. This improves the final texture of baked goods made to recipes intended for bleached flours.
See also
Chorleywood bread process – another bread making process that increases volume
Flour treatment agent
Graham flour – an early unbleached whole-grain flour
Maida flour – a commonly bleached flour in India
References
Food additives
|
https://en.wikipedia.org/wiki/Azodicarbonamide
|
Azodicarbonamide, ADCA, ADA, or azo(bis)formamide, is a chemical compound with the molecular formula C2H4N4O2. It is a yellow to orange-red, odorless, crystalline powder. It is sometimes called a 'yoga mat' chemical because of its widespread use in foamed plastics. It was first described by John Bryden in 1959.
Synthesis
It is prepared in two steps via treatment of urea with hydrazine to form biurea, as described in this idealized equation:
2 OC(NH2)2 + N2H4 → H2N-C(O)-NH-NH-C(O)-NH2 + 2 NH3
Oxidation of the biurea with chlorine or chromic acid yields azodicarbonamide:
H2N-C(O)-NH-NH-C(O)-NH2 + Cl2 → H2N-C(O)-N=N-C(O)-NH2 + 2 HCl
Applications
Blowing agent
The principal use of azodicarbonamide is in the production of foamed plastics as a blowing agent. The thermal decomposition of azodicarbonamide produces nitrogen, carbon monoxide, carbon dioxide, and ammonia gases, which are trapped in the polymer as bubbles to form a foamed article.
Azodicarbonamide is used in plastics, synthetic leather, and other industries and can be pure or modified. Modification affects the reaction temperatures. Pure azodicarbonamide generally reacts around 200 °C. In the plastic, leather, and other industries, modified azodicarbonamide (average decomposition temperature 170 °C) contains additives that accelerate the reaction or react at lower temperatures.
An example of the use of azodicarbonamide as a blowing agent is found in the manufacture of vinyl (PVC) and EVA-PE foams, where it forms bubbles upon breaking down into gas at high temperature. Vinyl foam is springy and does not slip on smooth surfaces. It is useful for carpet underlay and floor mats. Commercial yoga mats made of vinyl foam have been available since the 1980s; the first mats were cut from carpet underlay.
Food additive
As a food additive, azodicarbonamide is used as a flour bleaching agent and a dough conditioner. It reacts with moist flour as an oxidizing agent. The main reaction product is biurea, which is stable during baking. Secondary reaction products include semicarbazide and ethyl carbamate. It is known by the E number E927. Many restauran
|
https://en.wikipedia.org/wiki/Auxanometer
|
An auxanometer (Greek auxano = "to grow" + metron = "measure") is an apparatus for measuring increase of growth in plants.
In case of an arc-auxanometer (see picture), there is a thin cord fixed to the plant apex on one end and a dead-weight on the other with a pointer indicating against an arc scale. In some forms it passes over a pulley which has a pointer attached to it. When the plant's height increases, the pulley rotates and the pointer moves on a circular scale to directly give the magnitude of growth. The "rate of growth" is a derived measurement obtained by dividing the length of growth measured by the auxanometer, by the time said measurement took. It is also called an arc-indicator. These simple types of auxanometer have been replaced by rotation sensors at the fulcrum point linked to dataloggers with a balancing beam attached to the growing tip/plant apex.
Sensitive auxanometers allow measurement of growth as small as a micrometer, which allows measurement of growth in response to short-term changes in atmospheric composition. Auxanometers are used in the laboratory, the field, and the classroom.
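The arithmetic of an arc-auxanometer is a simple lever-and-pulley calculation; all dimensions and readings below are invented for illustration.

```python
# Arc-auxanometer arithmetic per the description above: the cord turns
# the pulley, and a long pointer magnifies the motion on the arc scale.
# All dimensions and readings are invented.

pulley_radius_mm = 5.0
pointer_length_mm = 250.0                 # the pointer acts as a long lever

arc_reading_mm = 30.0                     # arc length swept on the scale
rotation_rad = arc_reading_mm / pointer_length_mm
growth_mm = rotation_rad * pulley_radius_mm   # cord travel at the pulley rim

hours = 12.0                              # observation period
print(f"growth: {growth_mm:.2f} mm")                    # 0.60 mm
print(f"magnification: {pointer_length_mm / pulley_radius_mm:.0f}x")
print(f"rate of growth: {growth_mm / hours:.3f} mm/h")  # 0.050 mm/h
```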
See also
Crescograph
References
Measuring instruments
|
https://en.wikipedia.org/wiki/S%C3%A9minaire%20Nicolas%20Bourbaki
|
The Séminaire Nicolas Bourbaki (Bourbaki Seminar) is a series of seminars (in fact public lectures with printed notes distributed) that has been held in Paris since 1948. It is one of the major institutions of contemporary mathematics, and a barometer of mathematical achievement, fashion, and reputation. It is named after Nicolas Bourbaki, a group of French and other mathematicians of variable membership.
The Poincaré Seminars are a series of talks on physics inspired by the Bourbaki seminars on mathematics.
1948/49 series
Henri Cartan, Les travaux de Koszul, I (Lie algebra cohomology)
Claude Chabauty, Le théorème de Minkowski-Hlawka (Minkowski-Hlawka theorem)
Claude Chevalley, L'hypothèse de Riemann pour les corps de fonctions algébriques de caractéristique p, I, d'après Weil (local zeta-function)
Roger Godement, Groupe complexe unimodulaire, I : Les représentations unitaires irréductibles du groupe complexe unimodulaire, d'après Gelfand et Neumark (representation theory of the complex special linear group)
Léo Kaloujnine, Sur la structure de p-groupes de Sylow des groupes symétriques finis et de quelques généralisations infinies de ces groupes (Sylow theorems, symmetric groups, infinite group theory)
Pierre Samuel, (birational geometry)
Jean Braconnier, Sur les suites de composition d'un groupe et la tour des groupes d'automorphismes d'un groupe fini, d'après H. Wielandt (finite groups)
Henri Cartan, Les travaux de Koszul, II (see 1)
Claude Chevalley, L'hypothèse de Riemann pour les groupes de fonctions algébriques de caractéristique p, II, d'après Weil (see 3)
Luc Gauthier, (see 6)
Laurent Schwartz, Sur un mémoire de Petrowsky : "Über das Cauchysche Problem für ein System linearer partieller Differentialgleichungen im gebiete nichtanalytischen Funktionen" (partial differential equations)
Henri Cartan, Les travaux de Koszul, III (see 1)
Roger Godement, Groupe complexe unimodulaire, II : La transformation de Fourier dans le groupe complexe un
|
https://en.wikipedia.org/wiki/Uniface%20%28programming%20language%29
|
Uniface is a low-code development and deployment platform for enterprise applications that can run in a large range of runtime environments, including mobile, mainframe, web, Service-oriented architecture (SOA), Windows, Java EE, and .NET. Uniface is used to create mission-critical applications.
Uniface applications are database and platform independent. Uniface provides an integration framework that enables Uniface applications to integrate with all major DBMS products such as Oracle, Microsoft SQL Server, MySQL and IBM Db2. In addition, Uniface also supports file systems such as RMS (HP OpenVMS), Sequential files, operating system text files and a wide range of other technologies, such as IBM mainframe-based products (CICS, IMS), web services, SMTP, POP email, LDAP directories, .NET, ActiveX, Component Object Model (COM), C(++) programs, and Java. Uniface operates under Microsoft Windows, various flavors of Unix, Linux, CentOS and IBM i.
Uniface can be used in complex systems that maintain critical enterprise data supporting mission-critical business processes such as point-of-sale and web-based online shopping, financial transactions, salary administration, and inventory control. It is currently used by thousands of companies in more than 30 countries, with an effective installed base of millions of end-users. Uniface applications range from client/server to web, and from data entry to workflow, as well as portals that are accessed locally, via intranets and the internet.
Originally developed in the Netherlands by Inside Automation, later Uniface B.V., the product and company were acquired by Detroit-based Compuware Corp in 1994; in 2014 Uniface was acquired by Marlin Equity Partners and continued as Uniface B.V., globally headquartered in Amsterdam. In February 2021 Uniface was acquired by Rocket Software, headquartered in Waltham, Massachusetts, USA.
Uniface Products
Uniface Development Environment is an integrated collection of tools for modeling, implementing,
|
https://en.wikipedia.org/wiki/Packaging%20gas
|
A packaging gas is used to pack sensitive materials such as food into a modified atmosphere environment. The gas used is usually inert, or of a nature that protects the integrity of the packaged goods, inhibiting unwanted chemical reactions such as food spoilage or oxidation. Some may also serve as a propellant for aerosol sprays like cans of whipped cream. For packaging food, the use of various gases is approved by regulatory organisations.
Their E numbers are included in the following lists in parentheses.
Inert gases
These gas types do not cause a chemical change to the substance that they protect.
argon (E938), used for canned products
helium (E939), used for canned products
nitrogen (E941), also propellant
carbon dioxide (E290), also propellant
Propellant gases
Specific kinds of packaging gases are aerosol propellants. These pressurize the container and assist the ejection of the product from it.
chlorofluorocarbons known as CFC (E940 and E945), now rarely used because of the damage that they do to the ozone layer:
dichlorodifluoromethane (E940)
chloropentafluoroethane (E945)
nitrous oxide (E942), used for aerosol whipped cream canisters (see Nitrous oxide: Aerosol propellant)
octafluorocyclobutane (E946)
Reactive gases
These must be used with caution as they may have adverse effects when exposed to certain chemicals. They will cause oxidisation or contamination to certain types of materials.
oxygen (E948), used e.g. for packaging of vegetables
hydrogen (E949)
Volatile gases
Hydrocarbon gases approved for use with food need to be used with extreme caution as they are highly combustible; when combined with oxygen they burn very rapidly and may cause explosions in confined spaces. Special precautions must be taken when transporting these gases.
butane (E943a)
isobutane (E943b)
propane (E944)
See also
Shielding gas
References
Food additives
Food science
Hydrogen technologies
Packaging
Industrial gases
|
https://en.wikipedia.org/wiki/Path%20integration
|
Path integration is the method thought to be used by animals for dead reckoning.
History
Charles Darwin first postulated an inertially-based navigation system in animals in 1873. Studies beginning in the middle of the 20th century confirmed that animals could return directly to a starting point, such as a nest, in the absence of vision and having taken a circuitous outward journey. This shows that they can use cues to track distance and direction in order to estimate their position, and hence how to get home. This process was named path integration to capture the concept of continuous integration of movement cues over the journey. Manipulation of inertial cues confirmed that at least one of these movement (or idiothetic) cues is information from the vestibular organs, which detect movement in the three dimensions. Other cues probably include proprioception (information from muscles and joints about limb position), motor efference (information from the motor system telling the rest of the brain what movements were commanded and executed), and optic flow (information from the visual system signaling how fast the visual world is moving past the eyes). Together, these sources of information can tell the animal which direction it is moving, at what speed, and for how long. In addition, sensitivity to the Earth's magnetic field can support path integration in underground animals (e.g., the mole rat).
Mechanism
Studies in arthropods, most notably in the Sahara desert ant (Cataglyphis bicolor), reveal the existence of highly effective path integration mechanisms that depend on determination of directional heading (by polarized light or sun position) and distance computations (by monitoring leg movement or optical flow).
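The core computation, continuously integrating heading and speed into a position estimate from which a home vector follows, can be sketched in a few lines; the sample journey and sampling interval are invented.

```python
# Dead reckoning as continuous integration of movement cues: heading and
# speed samples are summed into a position estimate, and the home vector
# is the negated estimate.  All values are invented for illustration.
import math

def integrate_path(samples, dt=1.0):
    """samples: iterable of (speed, heading_radians) self-motion cues."""
    x = y = 0.0
    for speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# A circuitous outward journey...
outward = [(1.0, 0.0), (1.0, 0.0), (1.0, math.pi / 2), (2.0, 3 * math.pi / 4)]
x, y = integrate_path(outward)

# ...yet the homeward vector comes out directly.
home_bearing = math.degrees(math.atan2(-y, -x))
home_distance = math.hypot(x, y)
print(f"go {home_distance:.2f} units at bearing {home_bearing:.1f} degrees")
```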
In mammals, three important discoveries shed light on this issue.
The first, in the early 1970s, is that neurons in the hippocampal formation, called place cells, respond to the position of the animal.
The second, in the early 1990s, is that neurons i
|
https://en.wikipedia.org/wiki/Tears%20of%20wine
|
The phenomenon called tears of wine is manifested as a ring of clear liquid, near the top of a glass of wine, from which droplets continuously form and drop back into the wine. It is most readily observed in a wine which has a high alcohol content. It is also referred to as wine legs, fingers, curtains, church windows, or feet.
Cause
The effect is a consequence of the fact that alcohol has a lower surface tension than water. If alcohol is mixed with water inhomogeneously, a region with a lower concentration of alcohol will pull on the surrounding fluid more strongly than a region with a higher alcohol concentration. The result is that the liquid tends to flow away from regions with higher alcohol concentration. This can be easily and strikingly demonstrated by spreading a thin film of water on a smooth surface and then allowing a drop of alcohol to fall on the center of the film. The liquid will rush out of the region where the drop of alcohol fell.
Wine is mostly a mixture of alcohol and water, with dissolved sugars, acids, colourants and flavourants. Where the surface of the wine meets the side of the glass, capillary action makes the liquid climb the side of the glass. As it does so, both alcohol and water evaporate from the rising film, but the alcohol evaporates faster, due to its higher vapor pressure. The resulting decrease in the concentration of alcohol causes the surface tension of the liquid to increase, and this causes more liquid to be drawn up from the bulk of the wine, which has a lower surface tension because of its higher alcohol content. The wine moves up the side of the glass and forms droplets that fall back under their own weight.
The phenomenon was first correctly explained by physicist James Thomson, the elder brother of Lord Kelvin, in 1855. It is an instance of what is today called the Marangoni effect (or the Gibbs-Marangoni effect): the flow of liquid caused by surface tension gradients.
The evaporation of alcohol also creates a
|
https://en.wikipedia.org/wiki/Hysteresivity
|
Hysteresivity derives from “hysteresis”, meaning “lag”. It is the tendency to react slowly to an outside force, or to not return completely to its original state. Whereas the area within a hysteresis loop represents energy dissipated to heat and is an extensive quantity with units of energy, the hysteresivity represents the fraction of the elastic energy that is lost to heat, and is an intensive property that is dimensionless.
Overview
When a force deforms a material it generates elastic stresses and internal frictional stresses. Most often, frictional stress is described as being analogous to the stress that results from the flow of a viscous fluid, but in many engineering materials, in soft biological tissues, and in living cells, the concept that friction arises only from a viscous stress is now known to be erroneous. For example, Bayliss and Robertson and Hildebrandt demonstrated that frictional stress in lung tissue is dependent upon the amount of lung expansion but not the rate of expansion, findings that are fundamentally incompatible with the notion of friction being caused by a viscous stress. If not by a viscous stress, how then does friction arise, and how is it properly described?
In many inert and living materials, the relationship between elastic and frictional stresses turns out to be very nearly invariant (something unaltered by a transformation). In lung tissues, for example, the frictional stress is almost invariably between 0.1 and 0.2 of the elastic stress, where this fraction is called the hysteresivity, h, or, equivalently, the structural damping coefficient. It is a simple phenomenological fact, therefore, that for each unit of peak elastic strain energy that is stored during a cyclic deformation, 10 to 20% of that elastic energy is taxed as friction and lost irreversibly to heat. This fixed relationship holds at the level of the whole lung, isolated lung parenchymal tissue strips, isolated smooth muscle strips, and even isolate
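The definition can be exercised numerically: generate a cyclic stress-strain loop whose frictional stress is a fixed fraction of the elastic stress, then recover that fraction from the loop. The structural-damping relation used below (energy dissipated per cycle = 2πh times peak elastic energy) is our gloss on the 10 to 20% figure above, and all material parameters are invented.

```python
# Recover hysteresivity h from a synthetic stress-strain loop, using the
# structural damping relation dW = 2*pi*h*W_peak (an assumption of this
# sketch).  Modulus, amplitude, and h are invented.
import numpy as np

E, eps0, h_true = 5.0, 0.1, 0.15
t = np.linspace(0.0, 2.0 * np.pi, 2001)      # one deformation cycle
strain = eps0 * np.sin(t)
# Elastic stress plus a frictional stress h_true times as large, 90
# degrees out of phase (i.e. in phase with the strain rate):
stress = E * eps0 * (np.sin(t) + h_true * np.cos(t))

# Energy dissipated per cycle = enclosed loop area (trapezoid rule).
loop_area = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
peak_elastic_energy = 0.5 * E * eps0**2

h = loop_area / (2.0 * np.pi * peak_elastic_energy)
print(f"recovered h = {h:.3f}")              # ~0.15, per the text's range
```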
|
https://en.wikipedia.org/wiki/Sequestrant
|
A sequestrant is a food additive which improves the quality and stability of foods. A sequestrant forms chelate complexes with polyvalent metal ions, especially copper, iron and nickel. This can prevent the oxidation of the fats in the food. Sequestrants are therefore a type of preservative.
The name comes from the Latin sequestrare, meaning "to withdraw from use".
Common sequestrants are:
Calcium chloride (E509)
Calcium acetate (E263)
Calcium disodium ethylene diamine tetra-acetate (E385)
Glucono delta-lactone (E575)
Sodium gluconate (E576)
Potassium gluconate (E577)
Sodium tripolyphosphate (E451)
Sodium hexametaphosphate (E452i)
Sodium and calcium salts of EDTA are also commonly used in many foods and beverages.
References
Food additives
|
https://en.wikipedia.org/wiki/Doubly%20stochastic%20matrix
|
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called a bistochastic matrix) is a square matrix $X = (x_{ij})$ of nonnegative real numbers, each of whose rows and columns sums to 1, i.e.,
$$\sum_i x_{ij} = \sum_j x_{ij} = 1.$$
Thus, a doubly stochastic matrix is both left stochastic and right stochastic.
Indeed, any matrix that is both left and right stochastic must be square: if every row sums to 1 then the sum of all entries in the matrix must be equal to the number of rows, and since the same holds for columns, the number of rows and columns must be equal.
Birkhoff polytope
The class of $n \times n$ doubly stochastic matrices is a convex polytope known as the Birkhoff polytope $B_n$. Using the matrix entries as Cartesian coordinates, it lies in an $(n-1)^2$-dimensional affine subspace of $n^2$-dimensional Euclidean space defined by $2n - 1$ independent linear constraints specifying that the row and column sums all equal 1. (There are $2n - 1$ constraints rather than $2n$ because one of these constraints is dependent, as the sum of the row sums must equal the sum of the column sums.) Moreover, the entries are all constrained to be non-negative and less than or equal to 1.
Birkhoff–von Neumann theorem
The Birkhoff–von Neumann theorem (often known simply as Birkhoff's theorem) states that the polytope $B_n$ is the convex hull of the set of $n \times n$ permutation matrices, and furthermore that the vertices of $B_n$ are precisely the permutation matrices. In other words, if $X$ is a doubly stochastic matrix, then there exist $\theta_1, \ldots, \theta_k \geq 0$ with $\theta_1 + \cdots + \theta_k = 1$ and permutation matrices $P_1, \ldots, P_k$ such that
$$X = \theta_1 P_1 + \cdots + \theta_k P_k.$$
(Such a decomposition of $X$ is known as a 'convex combination'.) A proof of the theorem based on Hall's marriage theorem is given below.
This representation is known as the Birkhoff–von Neumann decomposition, and may not be unique. It is often described as a real-valued generalization of Kőnig's theorem, where the correspondence is established through adjacency matrices of graphs.
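The proof idea translates into a working decomposition routine: repeatedly find a permutation supported on the strictly positive entries (one exists by Hall's marriage theorem for any doubly stochastic residual), peel it off with the largest possible coefficient, and repeat. The sketch below substitutes the Hungarian algorithm for the matching step.

```python
# Sketch of a Birkhoff-von Neumann decomposition: repeatedly extract a
# permutation matrix supported on the positive entries and subtract it.
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(X, tol=1e-12):
    X = X.astype(float).copy()
    terms = []
    while X.max() > tol:
        # Find a perfect matching on the positive entries: penalize zeros.
        cost = np.where(X > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        assert cost[rows, cols].sum() == 0, "no matching: not doubly stochastic"
        theta = X[rows, cols].min()        # largest coefficient we can peel
        P = np.zeros_like(X)
        P[rows, cols] = 1.0
        terms.append((theta, P))
        X -= theta * P                     # zeroes at least one more entry
    return terms

X = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.25, 0.50],
              [0.25, 0.25, 0.50]])
terms = birkhoff_decompose(X)
for theta, P in terms:
    print(round(theta, 3), P.astype(int).tolist())
print(np.allclose(sum(theta * P for theta, P in terms), X))  # True
```

Each iteration zeroes at least one entry, so the loop terminates after finitely many permutations; the decomposition it returns is one of the generally many valid ones.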
Other properties
The product of two doubly stochastic matrices is doubly stochastic. However, the
|
https://en.wikipedia.org/wiki/Firming%20agent
|
Firming agents are food additives added in order to precipitate residual pectin, thus strengthening the structure of the food and preventing its collapse during processing.
These are salts, typically lactates or phosphates, calcium salts or aluminum sulfates.
They are mainly used for (fresh) fruit and vegetables. For example, in the case of fruit sold cut into wedges, the pulp can be sprayed with a solution of the respective salt. They are salts that react with an ingredient in the product, such as the pectin in the fruit.
Typical firming agents are:
Calcium carbonate (E170)
Calcium hydrogen sulfite (E227)
Calcium citrates (E333)
Calcium phosphates (E341)
Calcium sulfate (E516)
Calcium chloride (E509)
Magnesium chloride (E511)
Magnesium sulfate (E518)
Calcium gluconate (E578)
Magnesium gluconate (E580)
References
Food additives
|
https://en.wikipedia.org/wiki/Edubuntu
|
Edubuntu, previously known as Ubuntu Education Edition, is an official derivative of the Ubuntu operating system designed for use in classrooms inside schools, homes and communities.
Edubuntu is developed in collaboration with teachers and technologists in several countries. Edubuntu is built on top of the Ubuntu base, incorporates the LTSP thin client architecture and several education-specific applications, and is aimed at users aged 6 to 18. It was designed for easy installation and ongoing system maintenance.
Features
Included with Edubuntu is the Linux Terminal Server Project and many applications relevant to education including GCompris, KDE Edutainment Suite, Sabayon Profile Manager, Pessulus Lockdown Editor, Edubuntu Menueditor, LibreOffice, Gnome Nanny and iTalc. Edubuntu CDs were previously available free of charge through the Shipit service; the system is now available only as a DVD-format download.
In 23.04, Edubuntu's default GUI is GNOME. From 12.04 to 14.04, Edubuntu's default GUI was Unity; however GNOME, which had previously been the default, was also available. Since release 7.10, KDE is also available as Edubuntu KDE. In 2010, Edubuntu and the Qimo 4 Kids project were working on providing Qimo within Edubuntu, but this was not done as it would not have fit on a CD.
Project goals
The primary goal of Edubuntu was to enable an educator with limited technical knowledge and skills to set up a computer lab or an on-line learning environment in an hour or less and then effectively administer that environment.
The principal design goals of Edubuntu were centralized management of configuration, users and processes, together with facilities for working collaboratively in a classroom setting. Equally important was the gathering together of the best available free software and digital materials for education. According to a statement of goals on the official Edubuntu website: "Our aim is to put together a system that contains all the best free software ava
|
https://en.wikipedia.org/wiki/Dorsiventral
|
A dorsiventral (Lat. dorsum, "the back", venter, "the belly") organ is one that has two surfaces differing from each other in appearance and structure, as an ordinary leaf. This term has also been used as a synonym for dorsoventral organs, those that extend from a dorsal to a ventral surface.
This word is also used to describe the body structure of an organism; e.g., flatworms have dorsiventrally flattened bodies.
References
Anatomy
|
https://en.wikipedia.org/wiki/Strobogrammatic%20number
|
A strobogrammatic number is a number whose numeral is rotationally symmetric, so that it appears the same when rotated 180 degrees. In other words, the numeral looks the same right-side up and upside down (e.g., 69, 96, 1001). A strobogrammatic prime is a strobogrammatic number that is also a prime number, i.e., a number that is only divisible by one and itself (e.g., 11). It is a type of ambigram, words and numbers that retain their meaning when viewed from a different perspective, such as palindromes.
Description
When written using standard characters (ASCII), the numbers 0, 1, and 8 are symmetrical around the horizontal axis, and 6 and 9 are the same as each other when rotated 180 degrees. In such a system, the first few strobogrammatic numbers are:
0, 1, 8, 11, 69, 88, 96, 101, 111, 181, 609, 619, 689, 808, 818, 888, 906, 916, 986, 1001, 1111, 1691, 1881, 1961, 6009, 6119, 6699, 6889, 6969, 8008, 8118, 8698, 8888, 8968, 9006, 9116, 9696, 9886, 9966, ...
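This list is easy to reproduce with a short check of the digit pairs just described (a sketch of ours; the helper name is not standard):

```python
def is_strobogrammatic(n: int) -> bool:
    """True if the numeral for n reads the same after a 180-degree
    rotation, using the pairs 0-0, 1-1, 8-8 and 6-9."""
    rot = {"0": "0", "1": "1", "8": "8", "6": "9", "9": "6"}
    s = str(n)
    if any(ch not in rot for ch in s):
        return False
    return "".join(rot[ch] for ch in reversed(s)) == s

# Reproduces the start of the list above:
print([n for n in range(1200) if is_strobogrammatic(n)])
```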
The first few strobogrammatic primes are:
11, 101, 181, 619, 16091, 18181, 19861, 61819, 116911, 119611, 160091, 169691, 191161, 196961, 686989, 688889, ...
The years 1881 and 1961 were the most recent strobogrammatic years; the next strobogrammatic year will be 6009.
Although amateur aficionados of mathematics are quite interested in this concept, professional mathematicians generally are not. Like the concept of repunits and palindromic numbers, the concept of strobogrammatic numbers is base-dependent (expanding to base-sixteen, for example, produces the additional symmetries of 3/E; some variants of duodecimal systems also have this and a symmetrical x). Unlike palindromes, it is also font dependent. The concept of strobogrammatic numbers is not neatly expressible algebraically, the way that the concept of repunits is, or even the concept of palindromic numbers.
Nonstandard systems
The strobogrammatic properties of a given number vary by typeface. For instance, in an ornate serif type, the numbers 2 and 7
|
https://en.wikipedia.org/wiki/Kurosh%20problem
|
In mathematics, the Kurosh problem is one general problem, and several more special questions, in ring theory. The general problem is known to have a negative solution, since one of the special cases has been shown to have counterexamples. These matters were brought up by Aleksandr Gennadievich Kurosh as analogues of the Burnside problem in group theory.
Kurosh asked whether there can be a finitely-generated infinite-dimensional algebraic algebra (the problem being to show this cannot happen). A special case is whether or not every nil algebra is locally nilpotent.
For PI-algebras the Kurosh problem has a positive solution.
For general algebras, however, Golod constructed a counterexample, a finitely generated nil algebra that is not locally nilpotent, as an application of the Golod–Shafarevich theorem.
The Kurosh problem on group algebras concerns the augmentation ideal I. If I is a nil ideal, is the group algebra locally nilpotent?
There is an important problem often referred to as Kurosh's problem on division rings. The problem asks whether there exists an algebraic (over the center) division ring which is not locally finite. This problem remains open.
References
Vesselin S. Drensky, Edward Formanek (2004), Polynomial Identity Rings, p. 89.
E. Zelmanov (2007), Some open problems in the theory of infinite dimensional algebras.
Ring theory
Unsolved problems in mathematics
|
https://en.wikipedia.org/wiki/Pencil%20%28geometry%29
|
In geometry, a pencil is a family of geometric objects with a common property, for example the set of lines that pass through a given point in a plane, or the set of circles that pass through two given points in a plane.
Although the definition of a pencil is rather vague, the common characteristic is that the pencil is completely determined by any two of its members. Analogously, a set of geometric objects that are determined by any three of its members is called a bundle. Thus, the set of all lines through a point in three-space is a bundle of lines, any two of which determine a pencil of lines. To emphasize the two-dimensional nature of such a pencil, it is sometimes referred to as a flat pencil.
Any geometric object can be used in a pencil. The common ones are lines, planes, circles, conics, spheres, and general curves. Even points can be used. A pencil of points is the set of all points on a given line. A more common term for this set is a range of points.
Pencil of lines
In a plane, let h and k be two distinct intersecting lines. For concreteness, suppose that h has the equation aX + bY + c = 0, and k has the equation a′X + b′Y + c′ = 0. Then

λ(aX + bY + c) + μ(a′X + b′Y + c′) = 0

represents, for suitable scalars λ and μ, any line passing through the intersection of h = 0 and k = 0. This set of lines passing through a common point is called a pencil of lines. The common point of a pencil of lines is called the vertex of the pencil.
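A short numerical sketch (ours) of this parametrization, finding the pencil member through an extra point:

```python
import numpy as np

# A line aX + bY + c = 0 is stored as the coefficient triple (a, b, c).
h = np.array([1.0, -1.0, 0.0])    # X - Y = 0
k = np.array([1.0, 1.0, -2.0])    # X + Y - 2 = 0; the lines meet at (1, 1)

def pencil_member_through(h, k, p):
    """Return the member lam*h + mu*k of the pencil that also passes
    through the extra point p = (x, y)."""
    x, y = p
    hp = h @ np.array([x, y, 1.0])   # value of h's equation at p
    kp = k @ np.array([x, y, 1.0])   # value of k's equation at p
    # Requiring lam*hp + mu*kp = 0, take lam = kp and mu = -hp.
    return kp * h - hp * k

line = pencil_member_through(h, k, (3.0, 0.0))
print(line)   # a line through both the vertex (1, 1) and (3, 0)
```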
In an affine plane with the reflexive variant of parallelism, a set of parallel lines forms an equivalence class called a pencil of parallel lines. This terminology is consistent with the above definition since in the unique projective extension of the affine plane to a projective plane a single point (point at infinity) is added to each line in the pencil of parallel lines, thus making it a pencil in the above sense in the projective plane.
Pencil of planes
A pencil of planes, is the set of planes through a given straight line in three-space, called the axis of the pencil. The pencil is sometimes refer
|
https://en.wikipedia.org/wiki/Signed%20zero
|
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are equivalent. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero), regarded as equal by the numerical comparison operations but with possible different behaviors in particular operations. This occurs in the sign-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating-point arithmetic (presently used by most computers and programming languages that support floating-point numbers) requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is only undefined for ±0/±0 and ±∞/±∞.
Negatively signed zero echoes the mathematical analysis concept of approaching 0 from below as a one-sided limit, which may be denoted by x → 0⁻ or x ↑ 0. The notation "−0" may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics and other disciplines.
It is claimed that the inclusion of signed zero in IEEE 754 makes it much easier to achieve numerical accuracy in some critical problems, in particular when computing with complex elementary functions. On the other hand, the concept of signed zero runs contrary to the usual assumption made in mathematics that negative zero is the same value as zero. Representations that allow negative zero can be a source of errors in programs, if software developers do not take into account that while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations.
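These behaviors are directly observable in any IEEE 754 implementation; a small Python sketch:

```python
import math

pz, nz = 0.0, -0.0

print(pz == nz)                 # True: the zeros compare as equal
print(math.copysign(1.0, pz))   #  1.0
print(math.copysign(1.0, nz))   # -1.0: the sign bit is observable
print(math.atan2(pz, -1.0))     #  3.141592653589793
print(math.atan2(nz, -1.0))     # -3.141592653589793
```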
Re
|
https://en.wikipedia.org/wiki/Gloger%27s%20rule
|
Gloger's rule is an ecogeographical rule which states that within a species of endotherms, more heavily pigmented forms tend to be found in more humid environments, e.g. near the equator. It was named after the zoologist Constantin Wilhelm Lambert Gloger, who first remarked upon this phenomenon in 1833 in a review of covariation of climate and avian plumage color. Erwin Stresemann later noted that the idea had been expressed even earlier by Peter Simon Pallas in Zoographia Rosso-Asiatica (1811). Gloger found that birds in more humid habitats tended to be darker than their relatives from regions with higher aridity. Over 90% of the 52 North American bird species studied conform to this rule.
One explanation of Gloger's rule in the case of birds appears to be the increased resistance of dark feathers to feather- or hair-degrading bacteria such as Bacillus licheniformis. Feathers in humid environments have a greater bacterial load, and humid environments are more suitable for microbial growth; dark feathers or hair are more difficult to break down. More resilient eumelanins (dark brown to black) are deposited in hot and humid regions, whereas in arid regions, pheomelanins (reddish to sandy color) predominate due to the benefit of crypsis.
Among mammals, there is a marked tendency in equatorial and tropical regions to have a darker skin color than poleward relatives. In this case, the underlying cause is probably the need to better protect against the more intense solar UV radiation at lower latitudes. However, absorption of a certain amount of UV radiation is necessary for the production of certain vitamins, notably vitamin D (see also osteomalacia).
Gloger's rule is also vividly demonstrated among human populations. Populations that evolved in sunnier environments closer to the equator tend to be darker-pigmented than populations originating farther from the equator. There are exceptions, however; among the most well known are the Tibetans and Inuit, who have darker s
|
https://en.wikipedia.org/wiki/Gamma%20matrices
|
In mathematical physics, the gamma matrices, also called the Dirac matrices, are a set of conventional matrices with specific anticommutation relations that ensure they generate a matrix representation of the Clifford algebra Cl1,3(R). It is also possible to define higher-dimensional gamma matrices. When interpreted as the matrices of the action of a set of orthogonal basis vectors for contravariant vectors in Minkowski space, the column vectors on which the matrices act become a space of spinors, on which the Clifford algebra of spacetime acts. This in turn makes it possible to represent infinitesimal spatial rotations and Lorentz boosts. Spinors facilitate spacetime computations in general, and in particular are fundamental to the Dirac equation for relativistic particles.
In the Dirac representation, the four contravariant gamma matrices are, in 2×2 block form,

γ⁰ = ( I₂  0 ; 0  −I₂ ),   γʲ = ( 0  σʲ ; −σʲ  0 )   for j = 1, 2, 3.

γ⁰ is the time-like, Hermitian matrix. The other three are space-like, anti-Hermitian matrices. More compactly, γ⁰ = σ³ ⊗ I₂ and γʲ = iσ² ⊗ σʲ, where ⊗ denotes the Kronecker product and the σʲ (for j = 1, 2, 3) denote the Pauli matrices.
In addition, for discussions of group theory the identity matrix (I) is sometimes included with the four gamma matrices, and there is an auxiliary, "fifth" traceless matrix used in conjunction with the regular gamma matrices:

γ⁵ = iγ⁰γ¹γ²γ³.

The "fifth matrix" γ⁵ is not a proper member of the main set of four; it is used for separating nominal left and right chiral representations.
The gamma matrices have a group structure, the gamma group, that is shared by all matrix representations of the group, in any dimension, for any signature of the metric. For example, the 2×2 Pauli matrices are a set of "gamma" matrices in three dimensional space with metric of Euclidean signature (3, 0). In five spacetime dimensions, the four gammas, above, together with the fifth gamma-matrix to be presented below generate the Clifford algebra.
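As a numerical sanity check (ours, not from the source), the following numpy sketch builds the Dirac-representation matrices from the Kronecker-product formulas above and verifies the anticommutation relation stated below:

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma^1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma^2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma^3

# Dirac representation via the Kronecker products given above.
gamma = [np.kron(sigma[2], I2)] + [np.kron(1j * sigma[1], s) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, (+,-,-,-)
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# The auxiliary fifth matrix is traceless and squares to the identity.
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.isclose(np.trace(gamma5), 0)
assert np.allclose(gamma5 @ gamma5, np.eye(4))
print("Clifford algebra relations verified")
```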
Mathematical structure
The defining property for the gamma matrices to generate a Clifford algebra is the anticommutation relation

{γ^μ, γ^ν} = γ^μ γ^ν + γ^ν γ^μ = 2 η^μν I₄,

where η^μν is the Minkowski metric and I₄ denotes the 4×4 identity matrix.
|
https://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher%20problem
|
In statistics, the Behrens–Fisher problem, named after Walter-Ulrich Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.
Specification
One difficulty with discussing the Behrens–Fisher problem and proposed solutions is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These differences involve not only what is counted as being a relevant solution, but even the basic statement of the context being considered.
Context
Let X1, ..., Xn and Y1, ..., Ym be i.i.d. samples from two populations which both come from the same location–scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether the location parameters can reasonably be treated as equal. Lehmann states that "the Behrens–Fisher problem" is used both for this general form of model when the family of distributions is arbitrary, and for when the restriction to a normal distribution is made. While Lehmann discusses a number of approaches to the more general problem, mainly based on nonparametrics, most other sources appear to use "the Behrens–Fisher problem" to refer only to the case where the distribution is assumed to be normal: most of this article makes this assumption.
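For orientation, the approximate solution most used in practice for the normal case is Welch's t-test, which does not assume equal variances; a minimal scipy sketch (sample sizes and parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=30)   # population 1: sigma = 1
y = rng.normal(loc=0.5, scale=3.0, size=50)   # population 2: sigma = 3

# equal_var=False selects Welch's unequal-variances t-test.
t, p = stats.ttest_ind(x, y, equal_var=False)
print(t, p)
```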
Requirements of solutions
Solutions to the Behrens–Fisher problem have been presented that make use of either a classical or a Bayesian inference point of view and either solution would be notionally invalid judged from the other point of view. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding
|
https://en.wikipedia.org/wiki/Klein%20geometry
|
In mathematics, a Klein geometry is a type of geometry motivated by Felix Klein in his influential Erlangen program. More specifically, it is a homogeneous space X together with a transitive action on X by a Lie group G, which acts as the symmetry group of the geometry.
For background and motivation see the article on the Erlangen program.
Formal definition
A Klein geometry is a pair (G, H) where G is a Lie group and H is a closed Lie subgroup of G such that the (left) coset space G/H is connected. The group G is called the principal group of the geometry and G/H is called the space of the geometry (or, by an abuse of terminology, simply the Klein geometry). The space X = G/H of a Klein geometry is a smooth manifold of dimension
dim X = dim G − dim H.
There is a natural smooth left action of G on X given by

g · (aH) = (ga)H.

Clearly, this action is transitive (take a = 1), so that one may then regard X as a homogeneous space for the action of G. The stabilizer of the identity coset eH ∈ X is precisely the group H.
Given any connected smooth manifold X and a smooth transitive action by a Lie group G on X, we can construct an associated Klein geometry by fixing a basepoint x0 in X and letting H be the stabilizer subgroup of x0 in G. The group H is necessarily a closed subgroup of G and X is naturally diffeomorphic to G/H. For example, the Euclidean plane is the Klein geometry obtained from the Euclidean group G = E(2) with H = O(2) the stabilizer of a point, so that X = G/H ≅ R².
Two Klein geometries (G, H) and (G′, H′) are geometrically isomorphic if there is a Lie group isomorphism φ : G → G′ so that φ(H) = H′. In particular, if φ is conjugation by an element g ∈ G, we see that (G, H) and (G, gHg⁻¹) are isomorphic. The Klein geometry associated to a homogeneous space X is then unique up to isomorphism (i.e. it is independent of the chosen basepoint x0).
Bundle description
Given a Lie group G and closed subgroup H, there is a natural right action of H on G given by right multiplication. This action is both free and proper. The orbits are simply the left cosets of H in G. One concludes that G has the structure of a smooth principal H-bundle over the left coset space G/H:

H → G → G/H.
Types of Klein geometries
Effective geo
|
https://en.wikipedia.org/wiki/Serre%27s%20modularity%20conjecture
|
In mathematics, Serre's modularity conjecture, introduced by Jean-Pierre Serre, states that an odd, irreducible, two-dimensional Galois representation over a finite field arises from a modular form. A stronger version of this conjecture specifies the weight and level of the modular form. The conjecture in the level 1 case was proved by Chandrashekhar Khare in 2005, and a proof of the full conjecture was completed jointly by Khare and Jean-Pierre Wintenberger in 2008.
Formulation
The conjecture concerns the absolute Galois group G_Q of the rational number field Q.
Let ρ be an absolutely irreducible, continuous, two-dimensional representation of G_Q over a finite field F of characteristic ℓ.
Additionally, assume ρ is odd, meaning the image of complex conjugation has determinant −1.
To any normalized modular eigenform

f = q + a₂q² + a₃q³ + ⋯

of level N, weight k, and some Nebentypus character

χ : Z/NZ → F*,

a theorem due to Shimura, Deligne, and Serre–Deligne attaches to f a representation

ρ_f : G_Q → GL₂(O),

where O is the ring of integers in a finite extension of Q_ℓ. This representation is characterized by the condition that for all prime numbers p, coprime to Nℓ, we have

Tr(ρ_f(Frob_p)) = a_p

and

det(ρ_f(Frob_p)) = p^(k−1) χ(p).

Reducing this representation modulo the maximal ideal of O gives a mod ℓ representation ρ̄_f of G_Q.
Serre's conjecture asserts that for any representation ρ as above, there is a modular eigenform f such that

ρ ≅ ρ̄_f.
The level and weight of the conjectural form are explicitly conjectured in Serre's article. In addition, he derives a number of results from this conjecture, among them Fermat's Last Theorem and the now-proven Taniyama–Weil (or Taniyama–Shimura) conjecture, now known as the modularity theorem (although this implies Fermat's Last Theorem, Serre proves it directly from his conjecture).
Optimal level and weight
The strong form of Serre's conjecture describes the level and weight of the modular form.
The optimal level is the Artin conductor of the representation, with the power of ℓ removed.
Proof
A proof of the level 1 and small weight cases of the conjecture was obtained in 2004 by Chandrashekhar Khare
|
https://en.wikipedia.org/wiki/Chemical%20clock
|
A chemical clock (or clock reaction) is a complex mixture of reacting chemical compounds in which the onset of an observable property (discoloration or coloration) occurs after a predictable induction time due to the presence of clock species at a detectable amount.
In cases where one of the reagents has a visible color, crossing a concentration threshold can lead to an abrupt color change after a reproducible time lapse.
Types
Clock reactions may be classified into three or four types:
Substrate-depletive clock reaction
The simplest clock reaction featuring two reactions:
A → C (rate k1)
B + C → products (rate k2, fast)
When substrate (B) is present, the clock species (C) is quickly consumed in the second reaction. Only when substrate B is completely depleted can species C accumulate, causing the color to change. An example of this clock reaction is the sulfite/iodate reaction, or iodine clock reaction, also known as Landolt's reaction.
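To see the induction time emerge from this scheme, a minimal integration sketch (ours; the rate constants and initial amounts are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.1, 50.0   # k2 >> k1: the scavenging step is fast

def rhs(t, y):
    A, B, C = y
    r1 = k1 * A        # A -> C
    r2 = k2 * B * C    # B + C -> products
    return [-r1, -r2, r1 - r2]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.5, 0.0], method="LSODA",
                dense_output=True)
for t in np.linspace(0.0, 20.0, 11):
    print(f"t = {t:5.1f}   [C] = {sol.sol(t)[2]:.4f}")
# [C] stays near zero until B is exhausted (here around t = ln 2 / k1,
# i.e. t of about 7), then rises abruptly: the clock behavior.
```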
Sometimes, a clock reaction involves the production of intermediate species in three consecutive reactions.
P + Q → R
R + Q → C
P + C → 2R
Given that Q is in excess, when substrate (P) is depleted, C builds up resulting in the change in color.
Autocatalysis-driven clock reaction
The basis of the reaction is similar to the substrate-depletive clock reaction, except that rate k2 is very slow, leading to the coexistence of substrate and clock species; there is thus no need for the substrate to be depleted to observe the change in color. An example of this clock is the pentathionate/iodate reaction.
Pseudoclock behavior
The reactions in this category behave like a clock reaction; however, they are irreproducible, unpredictable, and hard to control. Examples are the chlorite/thiosulfate and iodide/chlorite reactions.
Crazy clock reaction
The reaction is irreproducible in each run due to the initial inhomogeneity of the mixture, which results from variations in stirring rate, overall volume as well as geometry of
|
https://en.wikipedia.org/wiki/Graphical%20Data%20Display%20Manager
|
GDDM (Graphical Data Display Manager) is a computer graphics system for the IBM System/370 which was developed in IBM's Hursley lab, and first released in 1979. GDDM was originally designed to provide programming support for the IBM 3279 colour display terminal and the associated 3287 colour printer. The 3279 was a colour graphics terminal designed to be used in a general business environment.
GDDM was extended in the early 1980s to provide graphics support for all of IBM's display terminals and printers, and ran on all of IBM's mainframe operating systems.
GDDM also provided support for the (then current) international standards for interactive computer graphics: GKS and PHIGS. Both GKS and PHIGS were designed around the requirements of CAD systems.
GDDM is also available on the IBM i midrange operating system, as well as its predecessor, the AS/400.
GDDM comprises a number of components:
Graphics primitives - lines, circles, boxes etc.
Graphing - through the Presentation Graphics Feature (PGF)
Language support - PL/I, REXX, COBOL etc.
Conversion capabilities - for example to GIF format.
Interactive Chart Utility (ICU).
GDDM remains in widespread use today, embedded in many z/OS applications, as well as in system programs.
GDDM and OS/2 Presentation Manager
IBM and Microsoft began collaborating on the design of OS/2 in 1986. The Graphics Presentation Interface (GPI), the graphics API in the OS/2 Presentation Manager, was based on IBM's GDDM and the Graphics Control Program (GCP). GCP was originally developed in Hursley for the 3270/PC-G and 3270/PC-GX terminals.
The GPI was the primary graphics API for the OS/2 operating system.
At the time (1980s), the graphical user interface (GUI) was still in its early stages of popularity, but already it was clear that the foundation of a good GUI was a graphics API with strong real-time interactive capabilities. Unfortunately, the design of GDDM was closer to (at the time) traditional graphics APIs like GKS, wh
|
https://en.wikipedia.org/wiki/Prime%20vertical
|
In astronomy, astrology, and geodesy, the prime vertical or first vertical is the vertical circle passing east and west through the zenith of a specific location, and intersecting the horizon in its east and west points.
In other words, the prime vertical is the vertical circle perpendicular to the meridian, and passes through the east and west points, zenith, and nadir of any place.
See also
Earth radius#Prime vertical
Meridian (astronomy)
References
External links
Astronomical coordinate systems
|
https://en.wikipedia.org/wiki/General%20Service%20Code
|
General Service Code is a code that was used during the American Civil War. The code uses one flag or two torches.
The flags come in three color schemes: a red square in the middle of a white background, white on black, or black on white. The flag that is used at any time depends on the visibility.
The flags come in three sizes: two feet by two feet, four by four, and six by six. The 2x2 flags are used in battle to send messages back to headquarters and to send back commands, sometimes by more than one signaler. The 4x4 flags are used for almost everything else. The 6x6 flags are for sending messages that cannot wait until nightfall, when the torches could be used instead. These flags are so heavy that no one really wanted to use them.
One torch is put on a pole and waved around and is called the action torch. The other is stuck on a stake and called the foot torch. The foot torch lets the observer determine whether the message is meant for them or for a station on the far side of the sender.
The torches run on turpentine, which is used because it burns brighter than kerosene; turpentine is not used in lamps because it is far too volatile for that purpose.
The code uses three positions. Position one is to the left. Position two is to the right. Position three is forward. The following is the code and shortcuts.
A 11 B 1221 C 212 D 111 E 21 F 1112 G 1122 H 211 I 2 J 2211
K 1212 L 112 M 2112 N 22 O 12 P 2121 Q 2122 R 122 S 121 T 1
U 221 V 2111 W 2212 X 1211 Y 222 Z 1111
1 12221 2 21112 3 11211 4 11121 5 11112 6 21111 7 22111 8 22221 9 22122 0 11111
& 2222 -tion 2221 -ing 1121 -ed 1222
121212 Error
3 End of word
33 End of sentence
333 End of message
11, 11, 11, 3 Message received or understood
11, 11, 11, 333 Cease signaling
1 Wait a moment. 2 Are you ready? 3 I am ready.
4 Use short pole and small f
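As an illustration of the letter table, a small encoder sketch (ours); it writes "3" between words and "333" at the end, per the conventions above:

```python
# A toy encoder for the General Service Code letter signals listed above.
# "1" = motion to the left, "2" = motion to the right.
CODE = {
    'A': '11',   'B': '1221', 'C': '212',  'D': '111',  'E': '21',
    'F': '1112', 'G': '1122', 'H': '211',  'I': '2',    'J': '2211',
    'K': '1212', 'L': '112',  'M': '2112', 'N': '22',   'O': '12',
    'P': '2121', 'Q': '2122', 'R': '122',  'S': '121',  'T': '1',
    'U': '221',  'V': '2111', 'W': '2212', 'X': '1211', 'Y': '222',
    'Z': '1111',
}

def encode(message: str) -> str:
    """Encode a message, separating letters with spaces, words with '3'
    and ending the whole message with '333'."""
    words = message.upper().split()
    return ' 3 '.join(' '.join(CODE[ch] for ch in word if ch in CODE)
                      for word in words) + ' 333'

print(encode("attack at dawn"))
```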
|
https://en.wikipedia.org/wiki/Natural%20gum
|
Natural gums are polysaccharides of natural origin, capable of causing a large increase in a solution's viscosity, even at small concentrations. They are mostly botanical gums, found in the woody elements of plants or in seed coatings.
Human uses
Gums are used in the food industry as thickening agents, gelling agents, emulsifying agents, and stabilizers, and in other industries as adhesives, binding agents, crystal inhibitors, clarifying agents, encapsulating agents, flocculating agents, swelling agents, foam stabilizers, etc. When consumed by humans, many of these gums are fermented by the microbes that inhabit the lower gastrointestinal tract (microbiome) and may influence the ecology and functions of these microscopic communities.
Commercial significance
Humans have used natural gums for various purposes, including chewing and the manufacturing of a wide range of products - such as varnish and lacquerware. Before the invention of synthetic equivalents, trade in gum formed part of the economy in places such as the Arabian peninsula (whence the name "gum arabic"), West Africa, East Africa (copal) and northern New Zealand (kauri gum).
Examples
Natural gums can be classified according to their origin. They can also be classified as uncharged or ionic polymers (polyelectrolytes). Examples include (with E number food additive code):
References
Food additives
Chewing gum
Edible thickening agents
Polysaccharides
|
https://en.wikipedia.org/wiki/Glazing%20agent
|
A glazing agent is a natural or synthetic substance that provides a waxy, homogeneous coating to prevent water loss from a surface and provide other protection.
Natural
Natural glazing agents keep moisture inside plants and insects. Scientists harnessed this characteristic in coatings made of substances classified as waxes. A natural wax is chemically defined as an ester of a very-long-chain fatty acid with a long-chain alcohol.
Examples are:
Stearic acid (E570)
Beeswax (E901)
Candelilla wax (E902)
Carnauba wax (E903)
Shellac (E904)
Microcrystalline wax (E905c), Crystalline wax (E907)
Lanolin (E913)
Oxidized polyethylene wax (E914)
Esters of colophonium (E915)
Paraffin
Synthetic
Scientists have produced glazing agents that mimic their natural counterparts. These components are added in different proportions to achieve the optimal glazing agent for a product. Such products include cosmetics, automobiles and food.
Some of the characteristics that are looked for in all of the above industries are:
1. Preservation - the glazing agent must protect the product from degradation and water loss. This characteristic can lead to a longer shelf life for a food or the longevity of a car without rusting.
2. Stability - the glazing agent must maintain its integrity under pressure or heat.
3. Uniform viscosity - this ensures a stronger protective coating that can be applied to the product as a homogeneous layer.
4. Industrial reproducibility - because most glazing agents are used on commercial goods, large quantities of glazing agent may be needed.
There are different variations of glazing agents, depending on the product, but they are all designed for the same purpose.
References
See also
Glaze (cooking technique)
Food coating, a comprehensive description of this unit operation in a food processing line.
Food additives
|
https://en.wikipedia.org/wiki/Candelilla%20wax
|
Candelilla wax is a wax derived from the leaves of the small Candelilla shrub native to northern Mexico and the southwestern United States, Euphorbia antisyphilitica, from the family Euphorbiaceae. It is yellowish-brown, hard, brittle, aromatic, and opaque to translucent.
Composition and production
With a melting point of 68.5–72.5 °C, candelilla wax consists mainly of hydrocarbons (about 50%, chains with 29–33 carbons), esters of higher molecular weight (20–29%), free acids (7–9%), and resins (12–14%, mainly triterpenoid esters). The high hydrocarbon content distinguishes this wax from carnauba wax. It is insoluble in water, but soluble in many organic solvents such as acetone, chloroform, benzene, and turpentine.
The wax is obtained by boiling the leaves and stems with dilute sulfuric acid, and the resulting "cerote" is skimmed from the surface and further processed. In this way, about 900 tons are produced annually.
Uses
It is mostly used mixed with other waxes to harden them without raising their melting point. As a food additive, candelilla wax has the E number E 902 and is used as a glazing agent. It also finds use in the cosmetic industry, as a component of lip balms and lotion bars. One of its major uses is as a binder for chewing gums.
Candelilla wax can be used as a substitute for carnauba wax and beeswax. It is also used for making varnish.
References
External links
Candelilla wax data sheet - from the UN Food and Agriculture Organization
Candelilla Institute
Wax, Men, and Money: Candelilla Wax Camps along the Rio Grande
Visual arts materials
Food additives
Painting materials
Waxes
E-number additives
|
https://en.wikipedia.org/wiki/IFolder
|
iFolder is an open-source application, developed by Novell, Inc., intended to allow cross-platform file sharing across computer networks.
iFolder operates on the concept of shared folders, where a folder is marked as shared and the contents of the folder are then synchronized to other computers over a network, either directly between computers in a peer-to-peer fashion or through a server. This is intended to allow a single user to synchronize files between different computers (for example between a work computer and a home computer) or share files with other users (for example a group of people who are collaborating on a project).
The core of iFolder is a project called Simias. It is Simias which actually monitors files for changes, synchronizes these changes and controls the access permissions on folders. The actual iFolder clients (including a graphical desktop client and a web client) are developed as separate programs that communicate with the Simias back-end.
History
Originally conceived and developed at PGSoft before the company was taken over by Novell in 2000, iFolder was announced by Novell on March 19, 2001, and released on June 29, 2001 as a software package for Windows NT/2000 and Novell NetWare 5.1 or included with the forthcoming Novell NetWare 6.0. It also included the ability to access shared files through a web browser.
iFolder Professional Edition 2, announced on March 13, 2002 and released a month later, added support for Linux and Solaris and web access support for Windows CE and Palm OS. This edition was also designed to share files between millions of users in large companies, with increased reporting features for administrators. In 2003 iFolder won a Codie award.
On March 22, 2004, after their purchase of the Linux software companies Ximian and SUSE, Novell announced that they were releasing iFolder as an open source project under the GPL license. They also announced that the open source version of iFolder would use the M
|
https://en.wikipedia.org/wiki/Forest-fire%20model
|
In applied mathematics, a forest-fire model is any of a number of dynamical systems displaying self-organized criticality. Note, however, that according to Pruessner et al. (2002, 2004) the forest-fire model does not behave critically on very large, i.e. physically relevant scales. Early versions go back to Henley (1989) and Drossel and Schwabl (1992). The model is defined as a cellular automaton on a grid with Ld cells. L is the sidelength of the grid and d is its dimension. A cell can be empty, occupied by a tree, or burning. The model of Drossel and Schwabl (1992) is defined by four rules which are executed simultaneously:
A burning cell turns into an empty cell
A tree will burn if at least one neighbor is burning
A tree ignites with probability f even if no neighbor is burning
An empty space fills with a tree with probability p
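A minimal simulation sketch (ours; the grid size and the values of p and f are illustrative) applying the four rules synchronously with numpy:

```python
import numpy as np

EMPTY, TREE, BURNING = 0, 1, 2

def step(grid, p, f, rng):
    """One synchronous update of the Drossel-Schwabl forest-fire model
    (von Neumann neighborhood, non-periodic boundaries)."""
    burning = grid == BURNING
    neighbor_burning = np.zeros_like(burning)
    neighbor_burning[1:, :] |= burning[:-1, :]
    neighbor_burning[:-1, :] |= burning[1:, :]
    neighbor_burning[:, 1:] |= burning[:, :-1]
    neighbor_burning[:, :-1] |= burning[:, 1:]

    rand = rng.random(grid.shape)
    new = grid.copy()
    new[burning] = EMPTY                                   # rule 1
    tree = grid == TREE
    new[tree & neighbor_burning] = BURNING                 # rule 2
    new[tree & ~neighbor_burning & (rand < f)] = BURNING   # rule 3 (lightning)
    new[(grid == EMPTY) & (rand < p)] = TREE               # rule 4 (growth)
    return new

rng = np.random.default_rng(0)
grid = np.zeros((64, 64), dtype=np.int8)
for _ in range(1000):
    grid = step(grid, p=0.05, f=0.0005, rng=rng)
print((grid == TREE).mean())   # fraction of sites occupied by trees
```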
The controlling parameter of the model is p/f, which gives the average number of trees planted between two lightning strikes (see Schenk et al. (1996) and Grassberger (1993)). In order to exhibit a fractal frequency-size distribution of clusters, a double separation of time scales is necessary:

T_smax ≪ 1/p ≪ 1/f,

where T_smax is the burn time of the largest cluster. The scaling behavior is not simple, however (Grassberger 1993, 2002 and Pruessner et al. 2002, 2004).
A cluster is defined as a coherent set of cells, all of which have the same state. Cells are coherent if they can reach each other via nearest neighbor relations. In most cases, the von Neumann neighborhood (four adjacent cells) is considered.
The first condition allows large structures to develop, while the second condition keeps trees from popping up alongside a cluster while burning.
In landscape ecology, the forest fire model is used to illustrate the role of the fuel mosaic in the wildfire regime. The importance of the fuel mosaic on wildfire spread is debated. Parsimonious models such as the forest fire model can help to explore the role of the fuel mosaic and its limitations
|
https://en.wikipedia.org/wiki/Olami%E2%80%93Feder%E2%80%93Christensen%20model
|
In physics, in the area of dynamical systems, the Olami–Feder–Christensen model is an earthquake model conjectured to be an example of self-organized criticality where local exchange dynamics are not conservative. Despite the original claims of the authors and subsequent claims of other authors such as Lise, whether or not the model is self-organized critical remains an open question.
The system behaviour reproduces some empirical laws that earthquakes follow (such as the Gutenberg–Richter law and Omori's Law)
Model definition
The model is a simplification of the Burridge–Knopoff model, where the blocks move instantly to their balanced positions when subjected to a force greater than their friction.
Let S be a square lattice with L × L sites and let Kmn ≥ 0 be the tension at site (m, n). The sites with tension greater than 1 are called critical and go through a relaxation step where their tension spreads to their neighbours. By analogy with the Burridge–Knopoff model, what is being simulated is a fault, where one of the lattice's dimensions is the fault depth and the other one follows the fault.
Model rules
If there are no critical sites, then the system is driven continuously, with every tension growing uniformly in time, until some site becomes critical:

dKmn/dt = ν;

otherwise, if the sites C1, C2, ..., Cm are critical, the relaxation rule is applied in parallel:

KC → 0,   Kn → Kn + αK′C for every neighbour n ∈ ΓC,

where K′C is the tension prior to the relaxation and ΓC is the set of neighbours of site C. α is called the conservative parameter and can range from 0 to 0.25 in a square lattice. This can create a chain reaction which is interpreted as an earthquake.
These rules allow us to define a time variable that is updated during the driving step; this is equivalent to defining a constant drive and assuming that the relaxation step is instantaneous, which is a good approximation for an earthquake model.
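A minimal sketch (ours) of the drive and parallel relaxation loop on a small open-boundary lattice; α, the lattice size, and the number of drive steps are illustrative:

```python
import numpy as np

def relax(K, alpha):
    """Apply the parallel relaxation rule until no site is critical;
    returns the avalanche size (total number of relaxations)."""
    size = 0
    while True:
        critical = np.argwhere(K >= 1.0)
        if len(critical) == 0:
            return size
        size += len(critical)
        K_prev = K.copy()
        K[K >= 1.0] = 0.0
        for m, n in critical:
            for mm, nn in ((m+1, n), (m-1, n), (m, n+1), (m, n-1)):
                if 0 <= mm < K.shape[0] and 0 <= nn < K.shape[1]:
                    K[mm, nn] += alpha * K_prev[m, n]

rng = np.random.default_rng(1)
L, alpha = 32, 0.2            # alpha < 0.25: the non-conservative case
K = rng.random((L, L))
sizes = []
for _ in range(2000):
    K += 1.0 - K.max()        # drive until the most-stressed site is critical
    sizes.append(relax(K, alpha))
print(np.mean(sizes), max(sizes))
```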
Behaviour and criticality
The system's behaviour is heavily influenced by the α parameter. For α=0.25 the system is conservative (in the sense that the
|
https://en.wikipedia.org/wiki/Bak%E2%80%93Sneppen%20model
|
The Bak–Sneppen model is a simple model of co-evolution between interacting species. It was developed to show how self-organized criticality may explain key features of the fossil record, such as the distribution of sizes of extinction events and the phenomenon of punctuated equilibrium. It is named after Per Bak and Kim Sneppen.
The model dynamics repeatedly eliminates the least adapted species and mutates it and its neighbors to recreate the interaction between species. A comprehensive study of the details of this model can be found in Phys. Rev. E 53, 414–443 (1996). A solvable version of the model has been proposed in Phys. Rev. Lett. 76, 348–351 (1996), which shows that the dynamics evolves sub-diffusively, driven by a long-range memory.
An evolutionary local search heuristic based on the Bak–Sneppen model, called extremal optimization, has been introduced by Boettcher and Percus. The Bak–Sneppen model has also been applied to the theory of scientific progress.
Description
We consider N species, each associated with a fitness value f(i). They are indexed by integers i around a ring. The algorithm consists in choosing the least fit species, and then replacing it and its two closest neighbors (previous and next integer) by new species with new random fitnesses. The cascades of replacements triggered in this way are referred to as avalanches, and the model proceeds through these avalanches until it reaches a state of relative stability where all species' fitnesses are above a certain threshold, below which species do not survive for long.
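A minimal simulation sketch (ours; N and the run length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
fitness = rng.random(N)          # one fitness value per species on a ring

minima = []
for _ in range(200000):
    i = int(np.argmin(fitness))  # the least fit species...
    minima.append(fitness[i])
    for j in (i - 1, i, (i + 1) % N):   # ...and its two ring neighbors
        fitness[j] = rng.random()       # are replaced by new random values

# In the self-organized critical state almost all fitnesses sit above
# a threshold of roughly 0.667.
print((fitness > 0.6).mean(), max(minima[-10000:]))
```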
See also
Evolutionary biology
References
External links
Bak–Sneppen Evolution Model as an interactive Java applet (dead link).
Chaotic maps
Evolutionary biology
Self-organization
Mathematical and theoretical biology
|
https://en.wikipedia.org/wiki/Blown%20plate%20glass
|
Blown plate is a type of hand-blown glass. There is a record of blown plate being produced in London in 1620.
Production
Blown plate was made by hand-grinding broad sheet glass. As the process was labour-intensive, and expensive, blown plate was mainly used for carriages and mirrors rather than in windows for buildings.
Other methods for making hand-blown glass included: broad sheet, crown glass, polished plate and cylinder blown sheet. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marked the move away from hand-blown to machine manufactured glass such as rolled plate, machine drawn cylinder sheet, flat drawn sheet, single and twin ground polished plate and float glass.
References
Glass production
Glass types
|
https://en.wikipedia.org/wiki/Broad%20sheet%20glass
|
Broad sheet is a type of hand-blown glass. It was first made in Sussex in 1226.
Production
It is made by blowing molten glass into an elongated tube shape with a blowpipe. Then, while the glass is still hot, the ends are cut off and the resulting cylinder is split with shears and flattened on an iron plate. The quality of broad sheet glass is not good, with many imperfections, and it is mostly translucent. Due to the relatively small sizes blown, broad sheet was typically made into leadlights. The centerpiece was used for decoration in places where looking through the glass was not vital. If the piece was large, it was possible to see bubble tracks and strain lines.
Other methods for making hand-blown glass included blown plate glass, crown glass (introduced to England in the 17th century), polished plate glass and cylinder blown sheet glass. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marks the move away from hand-blown to machine manufactured glass such as rolled plate glass, machine drawn cylinder sheet glass, flat drawn sheet glass, single and twin ground polished plate glass and float glass.
Broad sheet glass was first made in the UK in Chiddingfold, Surrey on the border with Sussex in 1226. In 1240 an order was placed for this glass to be used in Westminster Abbey. This glass was of poor quality and fairly opaque. Manufacture slowly decreased and ceased by the early 16th century. The choice of this location may have been due to the availability of sand, the abundance of bracken (the ash of which can be used to make potash for soda glass) and the significant beech forests to provide charcoal as fuel for the kiln. Examples of glass from this area can be found in Guildford Museum.
French glass-makers and others were making broad sheet glass earlier than this, notably William Le Verrier, the Schurterrers and John Alemayne. Between 1350 and 1356 Alemayne secured orders for glass to be used in St. Stephen's Chapel
|
https://en.wikipedia.org/wiki/Polished%20plate%20glass
|
Polished plate is a type of hand-made glass. It is produced by casting glass onto a table and then subsequently grinding and polishing the glass. This was originally done by hand, and then later by machine. It was an expensive process requiring a large capital investment.
Other methods of producing hand-blown window glass included: broad sheet, blown plate, crown glass and cylinder blown sheet. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marks the move away from hand-blown to machine manufactured glass such as rolled plate, machine drawn cylinder sheet, flat drawn sheet, single and twin ground polished plate, and float glass.
In 1688, the Frenchman Louis Lucas de Nehou, in conjunction with Abraham Thevart, succeeded in perfecting the process of casting plate glass. Mirror plates before this invention had been made from blown "sheet" glass and were consequently very limited in size. De Nehou's process of rolling molten glass poured on an iron table made the manufacture of very large plates possible.
In 1773 English polished plate (by the French process) was produced at Ravenhead.
By 1800 a steam engine was used to carry out the grinding and polishing of the cast glass.
References
Glass production
Glass types
|
https://en.wikipedia.org/wiki/Machine%20drawn%20cylinder%20sheet%20glass
|
Machine drawn cylinder sheet was the first mechanical method for "drawing" window glass. Cylinders of glass 40 feet (12 m) high are drawn vertically from a circular tank. The glass is then annealed and cut into 7 to 10 foot (2 to 3 m) cylinders. These are cut lengthways, reheated, and flattened.
This process was invented in the USA in 1903. This type of glass was manufactured in the early 20th century (it was manufactured in the United Kingdom by Pilkingtons from 1910 to 1933).
Other historical methods for making window glass included broad sheet, blown plate, crown glass, polished plate and cylinder blown sheet. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marks the move away from hand-blown to machine manufactured glass such as rolled plate, flat drawn sheet, single and twin ground polished plate and float glass.
Sources
Glass production
|
https://en.wikipedia.org/wiki/Sanmina%20Corporation
|
Sanmina Corporation is an American electronics manufacturing services (EMS) provider headquartered in San Jose, California that serves original equipment manufacturers in the communications and computer hardware fields. The firm has nearly 80 manufacturing sites, and is one of the world's largest independent manufacturers of printed circuit boards and backplanes. It is ranked number 482 in the Fortune 500 list.
History
Sanmina was founded by Jure Sola and Milan Mandarić in 1980 as a printed circuit board manufacturer. It was named after Milan Mandarić's daughters Sandra and Jasmina. During the 1980s, it expanded into manufacturing backplanes and subassemblies for the telecommunications industry. During the 1990s, the company grew, producing complete products for major OEM companies and completing a number of acquisitions. Jure Sola became CEO and Chairman of Sanmina in 1991. The company completed an initial public offering on NASDAQ in 1993.
Merger and name changes
In December 2001, Sanmina merged with SCI Systems of Huntsville, Alabama, for $6 billion in cash, stock, and debt. Although Sanmina was only half as large as SCI at the time, it was in a better cash position because its core telecommunications business was performing well, whereas SCI's lower-margin businesses such as personal computer manufacturing, were struggling. Shortly after, Sanmina-SCI bought E-M Solutions, a bankrupt Fremont, California electronics manufacturer, for $110 million in cash. Then in early 2002, Sanmina acquired Rancho Santa Margarita-based Viking Interworks for $15 million ($10.9 million in cash and 390,000 shares of Sanmina stock worth $10.26 per share at the time).
On November 15, 2012, the company changed its name to Sanmina.
On July 2, 2015, the company announced that it had acquired the CertainSource Technology Group.
Change of Leadership
Bob Eulau replaced co-founder Jure Sola as CEO, effective October 2, 2017. After this change, Sola assumed the role of Executive Chairman.
|
https://en.wikipedia.org/wiki/Computational%20finance
|
Computational finance is a branch of applied computer science that deals with problems of practical interest in finance. Some slightly different definitions are the study of data and algorithms currently used in finance and the mathematics of computer programs that realize financial models or systems.
Computational finance emphasizes practical numerical methods rather than mathematical proofs and focuses on techniques that apply directly to economic analyses. It is an interdisciplinary field between mathematical finance and numerical methods. Two major areas are efficient and accurate computation of fair values of financial securities and the modeling of stochastic time series.
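As a small illustration of the first area, a Monte Carlo valuation of a European call under geometric Brownian motion (a sketch of ours; all parameters are illustrative):

```python
import numpy as np

# Monte Carlo fair value of a European call: simulate the terminal price
# S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z) and discount the payoff.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
print(round(price, 3))   # close to the Black-Scholes closed form (about 7.13)
```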
History
The birth of computational finance as a discipline can be traced to Harry Markowitz in the early 1950s. Markowitz conceived of the portfolio selection problem as an exercise in mean-variance optimization. This required more computer power than was available at the time, so he worked on useful algorithms for approximate solutions. Mathematical finance began with the same insight, but diverged by making simplifying assumptions to express relations in simple closed forms that did not require sophisticated computer science to evaluate.
In the 1960s, hedge fund managers such as Ed Thorp and Michael Goodkin (working with Harry Markowitz, Paul Samuelson and Robert C. Merton) pioneered the use of computers in arbitrage trading. In academics, sophisticated computer processing was needed by researchers such as Eugene Fama in order to analyze large amounts of financial data in support of the efficient-market hypothesis.
During the 1970s, the main focus of computational finance shifted to options pricing and analyzing mortgage securitizations. In the late 1970s and early 1980s, a group of young quantitative practitioners who became known as "rocket scientists" arrived on Wall Street and brought along personal computers. This led to an explosion of both the amount and variety of computational
|
https://en.wikipedia.org/wiki/MDSP
|
MDSP is a multiprocessor DSP family from Cradle Technologies. It is currently used mostly in streaming video processing in the broadcast (internet and terrestrial) and video surveillance security markets.
It is a loosely coupled architecture that employs compute and Input/output (IO) subsystems with programmable (software defined) IO, consisting of general purpose and signal processing cores. The general purpose cores are used for control and IO processing and the DSP cores for fixed or floating point computation.
MDSP is similar in architecture to the Cell (microprocessor) processor from STI (Sony, Toshiba and IBM), except it has multiple processing elements; Cell is PowerPC-based. The PE (processing element) or GPP (general purpose processor) units are 32-bit general-purpose RISC-like cores coupled with signal processing units (DSP or DSE) via a databus.
Development tools
The initial software development kit (SDK4) was based on Cygwin 1.3.x and Cradle's umgcc (a GCC port). SDK5 is based on Cygwin 1.5.x and cragcc (a GCC port).
The chips are programmed in a mix of C and CLASM (C like assembly). The PEs can be programmed in C, the DSEs and MTEs are programmed in CLASM. The programmer has to manage resource allocation using semaphores, paying special attention to keeping all DSP units fed with instructions.
External links
CT3400 datasheet
CT3600 product brief
CT3600 datasheet
software development kit download
Digital signal processors
|
https://en.wikipedia.org/wiki/Central%20tolerance
|
In immunology, central tolerance (also known as negative selection) is the process of eliminating any developing T or B lymphocytes that are autoreactive, i.e. reactive to the body itself. Through elimination of autoreactive lymphocytes, tolerance ensures that the immune system does not attack self peptides. Lymphocyte maturation (and central tolerance) occurs in primary lymphoid organs such as the bone marrow and the thymus. In mammals, B cells mature in the bone marrow and T cells mature in the thymus.
Central tolerance is not perfect, so peripheral tolerance exists as a secondary mechanism to ensure that T and B cells are not self-reactive once they leave primary lymphoid organs. Peripheral tolerance is distinct from central tolerance in that it occurs once developing immune cells have exited the primary lymphoid organs (the thymus and bone marrow) and entered the periphery.
Function of central tolerance
Central tolerance is essential to proper immune cell functioning because it helps ensure that mature B cells and T cells do not recognize self-antigens as foreign microbes. More specifically, central tolerance is necessary because T cell receptors (TCRs) and B cell receptors (BCRs) are made by cells through random somatic rearrangement. This process, known as V(D)J recombination, is important because it increases the receptor diversity which increases the likelihood that B cells and T cells will have receptors for novel antigens. Junctional diversity occurs during recombination and serves to further increase the diversity of BCRs and TCRs. The production of random TCRs and BCRs is an important method of defense against microbes due to their high mutation rate. This process also plays an important role in promoting the survival of a species, because there will be a variety of receptor arrangements within a species – this enables a very high chance of at least one member of the species having receptors for a novel antigen.
While the process of somatic recom
|
https://en.wikipedia.org/wiki/Mathematics%20of%20Sudoku
|
Mathematics can be used to study Sudoku puzzles to answer questions such as "How many filled Sudoku grids are there?", "What is the minimal number of clues in a valid puzzle?" and "In what ways can Sudoku grids be symmetric?" through the use of combinatorics and group theory.
The analysis of Sudoku is generally divided between analyzing the properties of unsolved puzzles (such as the minimum possible number of given clues) and analyzing the properties of solved puzzles. Initial analysis was largely focused on enumerating solutions, with results first appearing in 2004.
For classical Sudoku, the number of filled grids is 6,670,903,752,021,072,936,960 (approximately 6.67×10²¹), which reduces to 5,472,730,538 essentially different solutions under the validity-preserving transformations. There are 26 possible types of symmetry, but they can only be found in about 0.005% of all filled grids. An ordinary puzzle with a unique solution must have at least 17 clues. There is a solvable puzzle with at most 21 clues for every solved grid. The largest minimal puzzle found so far has 40 clues in the 81 cells.
Similar results are known for variants and smaller grids. No exact results are known for Sudokus larger than the classical 9×9 grid, although there are estimates which are believed to be fairly accurate.
Puzzles
Minimum number of givens
Ordinary Sudokus (proper puzzles) have a unique solution. A minimal Sudoku is a Sudoku from which no clue can be removed leaving it a proper Sudoku. Different minimal Sudokus can have a different number of clues. This section discusses the minimum number of givens for proper puzzles.
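Uniqueness is easy to test directly by backtracking; a minimal sketch (ours, far simpler than the exhaustive search techniques discussed below):

```python
def count_solutions(grid, limit=2):
    """Backtracking count of completions of a 9x9 Sudoku (0 = empty),
    stopping early at `limit`; a puzzle is proper iff the count is 1."""
    def ok(r, c, v):
        br, bc = 3 * (r // 3), 3 * (c // 3)
        for i in range(9):
            if grid[r][i] == v or grid[i][c] == v:
                return False
            if grid[br + i // 3][bc + i % 3] == v:
                return False
        return True

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                total = 0
                for v in range(1, 10):
                    if ok(r, c, v):
                        grid[r][c] = v
                        total += count_solutions(grid, limit - total)
                        grid[r][c] = 0
                        if total >= limit:
                            break
                return total
    return 1  # no empty cell: exactly one completed grid

# A valid completed grid via a standard shifting pattern, one cell cleared:
solved = [[(3 * r + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
solved[0][0] = 0
print(count_solutions(solved))   # 1: the cleared cell is forced, so proper
```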
Ordinary Sudoku
Many Sudokus have been found with 17 clues, although finding them is not a trivial task. A paper by Gary McGuire, Bastian Tugemann, and Gilles Civario, released on 1 January 2012, explains how it was proved through an exhaustive computer search based on hitting set enumeration that the minimum number of clues in any proper Sudoku is 17.
Symmetrical Sudoku
The fe
|
https://en.wikipedia.org/wiki/Ergodic%20sequence
|
In mathematics, an ergodic sequence is a certain type of integer sequence, having certain equidistribution properties.
Definition
Let A = {a_j} be an infinite, strictly increasing sequence of positive integers. Then, given an integer q, this sequence is said to be ergodic mod q if, for all integers 1 ≤ k ≤ q, one has

lim_{t→∞} N(A, t, k, q) / N(A, t) = 1/q,

where

N(A, t) = card {a_j ∈ A : a_j ≤ t}

and card is the count (the number of elements) of a set, so that N(A, t) is the number of elements in the sequence A that are less than or equal to t, and

N(A, t, k, q) = card {a_j ∈ A : a_j ≤ t, a_j ≡ k mod q},

so N(A, t, k, q) is the number of elements in the sequence A, less than or equal to t, that are equivalent to k modulo q. That is, a sequence is an ergodic sequence if it becomes uniformly distributed mod q as the sequence is taken to infinity.
An equivalent definition is that the sum

lim_{t→∞} (1/N(A, t)) Σ_{j : a_j ≤ t} exp(2πi k a_j / q)

vanishes for every integer k with k ≢ 0 mod q.
If a sequence is ergodic for all q, then it is sometimes said to be ergodic for periodic systems.
Examples
The sequence of positive integers is ergodic for all q.
Almost all Bernoulli sequences, that is, sequences associated with a Bernoulli process, are ergodic for all q. That is, let (Ω, Pr) be a probability space of sequences of random variables over two letters {0, 1}. Then, given ω ∈ Ω, the random variable X_j(ω) is 1 with some probability p and is zero with probability 1 − p; this is the definition of a Bernoulli process. Associated with each ω is the sequence of integers

A^ω = {n : X_n(ω) = 1}.

Then almost every sequence A^ω is ergodic.
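Both examples are easy to check empirically; a small sketch (ours):

```python
import numpy as np

def residue_fractions(seq, q):
    """Empirical fraction of terms of seq in each residue class mod q."""
    seq = np.asarray(seq)
    return np.bincount(seq % q, minlength=q) / len(seq)

# The positive integers are ergodic for every q: each class gets 1/q.
print(residue_fractions(np.arange(1, 100001), 7))

# Index set of a Bernoulli process (keep n whenever a biased coin is 1):
rng = np.random.default_rng(0)
flips = rng.random(100000) < 0.3
bernoulli_seq = np.nonzero(flips)[0] + 1
print(residue_fractions(bernoulli_seq, 7))   # also close to 1/7 each
```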
See also
Ergodic theory
Ergodic process, for the use of the term in signal processing
Ergodic theory
Integer sequences
|
https://en.wikipedia.org/wiki/Global%20Atmosphere%20Watch
|
The Global Atmosphere Watch (GAW) is a worldwide system established by the World Meteorological Organization, a United Nations agency, to monitor trends in the Earth's atmosphere. It arose out of concerns for the state of the atmosphere in the 1960s.
Mission
The Global Atmosphere Watch's mission is quite straightforward, consisting of three concise points:
To make reliable, comprehensive observations of the chemical composition and selected physical characteristics of the atmosphere on global and regional scales;
To provide the scientific community with the means to predict future atmospheric states;
To organize assessments in support of formulating environmental policy.
Goals
The GAW program is guided by 8 strategic goals:
To improve the measurements programme for better geographical and temporal coverage and for near real-time monitoring capability;
To complete the quality assurance/quality control system;
To improve availability of data and promote their use;
To improve communication and cooperation between all GAW components and with the scientific community;
To identify and clarify changing roles of GAW components;
To maintain present and solicit new support and collaborations for the GAW programme;
To intensify capacity-building in developing countries;
To enhance the capabilities of National Meteorological and Hydrological Services in providing urban environmental air quality services.
Moreover, the programme seeks not only to understand changes in the Earth's atmosphere, but also to forecast them, and perhaps control the human activities that cause them.
Genesis
The original reason for testing the atmosphere for trace chemicals was mere scientific interest, but of course, many scientists eventually wondered what effects these trace chemicals could have on the atmosphere, and on life.
The GAW's genesis began as far back as the 1950s when the World Meteorological Organization began a programme of monitoring the atmosphere for trace chemicals, and also r
|
https://en.wikipedia.org/wiki/Cipher%20runes
|
Cipher runes, or cryptic runes, are the cryptographical replacement of the letters of the runic alphabet.
Preservation
The knowledge of cipher runes was best preserved in Iceland, and during the 17th–18th centuries, Icelandic scholars produced several treatises on the subject. The most notable of these is the manuscript Runologia by Jón Ólafsson (1705–1779), which he wrote in Copenhagen (1732–1752). It thoroughly treats numerous cipher runes and runic ciphers, and it is now preserved in the Arnamagnæan Institute in Copenhagen.
Jón Ólafsson's treatise presents the Younger Futhark in the Viking Age order, which means that the m-rune precedes the l-rune. This small detail was of paramount importance for the interpretation of Viking Age cipher runes because in the 13th century the two runes had changed places through the influence of the Latin alphabet where l precedes m. Since the medieval runic calendar used the post-13th-century order, the early runologists of the 17th–18th centuries believed that the l-m order was the original one, and the order of the runes is of vital importance for the interpretation of cipher runes.
Structure of the ciphers
In the runic alphabet, the runes have their special order and are divided into groups. In the Younger Futhark, which has 16 letters, they are divided into three groups. The Icelandic tradition calls the first group (f, u, þ, ã, r and k) "Freyr's ætt", the second group (h, n, i, a and s) "Hagal's ætt" and the third group (t, b, m, l and ʀ) "Tyr's ætt". In order to make the inscription even harder to decipher, Freyr's ætt and Tyr's ætt change places so that group one is group three and vice versa. However, in several cases the ætts are counted in their correct order, and not backwards. There are numerous forms of cipher runes, but they are all based on the principle of giving the number of the ætt and the number of the rune within the ætt.
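As a rough illustration of this numbering principle, the following Python sketch enciphers a rune as a pair of numbers. The rune spellings are approximations chosen for the example, and the sketch assumes the reversed ætt-counting described above (recall that some inscriptions instead count the ætts in their correct order):

AETTIR = [
    ["f", "u", "þ", "ã", "r", "k"],   # Freyr's ætt
    ["h", "n", "i", "a", "s"],        # Hagal's ætt
    ["t", "b", "m", "l", "ʀ"],        # Tyr's ætt
]

def encipher(rune: str) -> tuple[int, int]:
    """Return (ætt number, rune number within the ætt), counting the
    ætts backwards so that Freyr's ætt is group 3 and vice versa."""
    for i, aett in enumerate(AETTIR):
        if rune in aett:
            return (len(AETTIR) - i, aett.index(rune) + 1)
    raise ValueError(f"unknown rune: {rune}")

print(encipher("f"))  # (3, 1): first rune of Freyr's ætt, counted as group 3
print(encipher("t"))  # (1, 1): first rune of Tyr's ætt, counted as group 1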
The tent runes are based on strokes added to the four arms of an X shape: Each X re
|
https://en.wikipedia.org/wiki/Gibbing
|
Gibbing is the process of preparing salt herring (or soused herring), in which the gills and part of the gullet are removed from the fish, eliminating any bitter taste. The liver and pancreas are left in the fish during the salt-curing process because they release enzymes essential for flavor. The fish is then cured in a barrel with one part salt to twenty parts herring. Today many variations and local preferences exist in this process.
History
According to a popular story, the process of gibbing was invented by Willem Beukelszoon ( Willem Beuckelsz, William Buckels or William Buckelsson), a 14th-century fisherman from Biervliet, Zealand. The invention of this fish preservation technique led to the Dutch becoming a seafaring power.
Sometime between 1380 and 1386, Beuckelsz discovered that "salt fish will keep, and that fish that can be kept can be packed and can be exported".
Beuckelsz' invention of gibbing created an export industry for salt herring that was monopolized by the Dutch. They began to build ships and eventually moved from trading in herring to colonization, laying the foundations of the Dutch Empire.
The Emperor Charles V erected a statue to Beuckelsz honouring him as the benefactor of his country, and Queen Mary of Hungary, after finding his tomb, sat upon it and ate a herring.
Herring is still very important to the Dutch, who celebrate Vlaggetjesdag (Flag Day) each spring, a tradition that dates back to the 14th century, when fishermen went out to sea in their small boats to capture the annual catch, and to preserve and export it abroad.
See also
Herring Buss
References
External links
The Inventor Of Salt Herring (NY Times.com PAYWALL)
Herring
History of the Netherlands Podcast: The Fishy Tale of Willem Beukelszoon
Food preservation
Salted foods
Fish processing
Dutch inventions
|
https://en.wikipedia.org/wiki/Web%20API
|
A web API is an application programming interface (API) for either a web server or a web browser.
As a web development concept, it can be related to a web application's client side (including any web frameworks being used).
A server-side web API consists of one or more publicly exposed endpoints to a defined request–response message system, typically expressed in JSON or XML by means of an HTTP-based web server.
A server API (SAPI) is not considered a server-side web API, unless it is publicly accessible by a remote web application.
Client side
A client-side web API is a programmatic interface to extend functionality within a web browser or other HTTP client. Originally these were most commonly in the form of native plug-in browser extensions; however, most newer ones target standardized JavaScript bindings.
The Mozilla Foundation created their WebAPI specification which is designed to help replace native mobile applications with HTML5 applications.
Google created their Native Client architecture which is designed to help replace insecure native plug-ins with secure native sandboxed extensions and applications. They have also made this portable by employing a modified LLVM AOT compiler.
Server side
A server-side web API consists of one or more publicly exposed endpoints to a defined request–response message system, typically expressed in JSON or XML. The web API is exposed most commonly by means of an HTTP-based web server.
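As a minimal illustration of such an endpoint, the following Python sketch uses only the standard library to serve a JSON response over HTTP; the /status path and the payload are invented for the example:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":  # one publicly exposed endpoint
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StatusHandler).serve_forever()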
Mashups are web applications which combine the use of multiple server-side web APIs. Webhooks are server-side web APIs that take as input a Uniform Resource Identifier (URI) and are designed to be used like a remote named pipe or a type of callback: the server acts as a client to dereference the provided URI and trigger an event on another server, which handles the event, thus providing a type of peer-to-peer IPC.
Endpoints
Endpoints are important aspects of interacting with server-side web APIs, as they specify where resources li
|
https://en.wikipedia.org/wiki/List%20of%20Wenninger%20polyhedron%20models
|
This is an indexed list of the uniform and stellated polyhedra from the book Polyhedron Models, by Magnus Wenninger.
The book was written as a guide book to building polyhedra as physical models. It includes templates of face elements for construction and helpful hints in building, and also brief descriptions on the theory behind these shapes. It contains the 75 nonprismatic uniform polyhedra, as well as 44 stellated forms of the convex regular and quasiregular polyhedra.
Models listed here can be cited as "Wenninger Model Number N", or WN for brevity.
The polyhedra are grouped in 5 tables: Regular (1–5), Semiregular (6–18), regular star polyhedra (20–22,41), Stellations and compounds (19–66), and uniform star polyhedra (67–119). The four regular star polyhedra are listed twice because they belong to both the uniform polyhedra and stellation groupings.
Platonic solids (regular convex polyhedra) W1 to W5
Archimedean solids (Semiregular) W6 to W18
Kepler–Poinsot polyhedra (Regular star polyhedra) W20, W21, W22 and W41
Stellations: models W19 to W66
Stellations of octahedron
Stellations of dodecahedron
Stellations of icosahedron
Stellations of cuboctahedron
Stellations of icosidodecahedron
Uniform nonconvex solids W67 to W119
See also
List of uniform polyhedra
The fifty nine icosahedra
List of polyhedral stellations
References
Errata
In Wenninger, the vertex figure for W90 is incorrectly shown as having parallel edges.
External links
Magnus J. Wenninger
Software used to generate images in this article:
Stella: Polyhedron Navigator Stella (software) - Can create and print nets for all of Wenninger's polyhedron models.
Vladimir Bulatov's Polyhedra Stellations Applet
Vladimir Bulatov's Polyhedra Stellations Applet packaged as an OS X application
M. Wenninger, Polyhedron Models, Errata: known errors in the various editions.
Polyhedra
Polyhedral stellation
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Rhizosphere
|
The rhizosphere is the narrow region of soil or substrate that is directly influenced by root secretions and associated soil microorganisms known as the root microbiome. Soil pores in the rhizosphere can contain many bacteria and other microorganisms that feed on sloughed-off plant cells, termed rhizodeposition, and the proteins and sugars released by roots, termed root exudates. This symbiosis leads to more complex interactions, influencing plant growth and competition for resources. Much of the nutrient cycling and disease suppression by antibiotics required by plants occurs immediately adjacent to roots, due to root exudates and metabolic products of symbiotic and pathogenic communities of microorganisms. The rhizosphere also provides space to produce allelochemicals to control neighbours and relatives.
The rhizoplane refers to the root surface including its associated soil particles which closely interact with each other. The plant-soil feedback loop and other physical factors occurring at the plant-root soil interface are important selective pressures in communities and growth in the rhizosphere and rhizoplane.
Background
The term "rhizosphere" was used first in 1904 by the German plant physiologist Lorenz Hiltner to describe how plant roots interface with surrounding soil. The prefix rhiza- comes from the Greek, and means "root". Hiltner postulated the rhizosphere was a region surrounding the plant roots, and populated with microorganisms under some degree of control by chemicals released from the plant roots.
Chemical interactions
Chemical availability
Plant roots may exude 20–40% of their photosynthetically fixed carbon in the form of sugars and organic acids. Plant root exudates, such as organic acids, change the chemical structure and the biological communities of the rhizosphere in comparison with the bulk soil or parent soil. Concentrations of organic acids and saccharides affect the ability of the biological communities to shuttle phosphorus, nitrogen, potassium
|
https://en.wikipedia.org/wiki/Orbifold%20notation
|
In geometry, orbifold notation (or orbifold signature) is a system, invented by the mathematician William Thurston and promoted by John Conway, for representing types of symmetry groups in two-dimensional spaces of constant curvature. The advantage of the notation is that it describes these groups in a way which indicates many of the groups' properties: in particular, it follows William Thurston in describing the orbifold obtained by taking the quotient of Euclidean space by the group under consideration.
Groups representable in this notation include the point groups on the sphere (S²), the frieze groups and wallpaper groups of the Euclidean plane (E²), and their analogues on the hyperbolic plane (H²).
Definition of the notation
The following types of Euclidean transformation can occur in a group described by orbifold notation:
reflection through a line (or plane)
translation by a vector
rotation of finite order around a point
infinite rotation around a line in 3-space
glide-reflection, i.e. reflection followed by translation.
All translations which occur are assumed to form a discrete subgroup of the group of symmetries being described.
Each group is denoted in orbifold notation by a finite string made up from the following symbols:
positive integers
the infinity symbol,
the asterisk, *
the symbol o (a solid circle in older documents), which is called a wonder and also a handle because it topologically represents a torus (1-handle) closed surface. Patterns repeat by two translations.
the symbol × (an open circle in older documents), which is called a miracle and represents a topological crosscap where a pattern repeats as a mirror image without crossing a mirror line.
A string written in boldface represents a group of symmetries of Euclidean 3-space. A string not written in boldface represents a group of symmetries of the Euclidean plane, which is assumed to contain two independent translations.
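One property the notation indicates directly is the orbifold Euler characteristic. The following Python sketch is a simplified illustration assuming Conway's standard cost accounting, which this excerpt does not state: each symbol contributes a cost, the Euler characteristic is 2 minus the total cost, and Euclidean plane groups have total cost exactly 2. For simplicity the sketch handles only single-digit orders and writes the miracle symbol × as an ASCII "x":

def orbifold_cost(signature: str) -> float:
    """Sum Conway's costs: o costs 2; * and x cost 1; a digit n costs
    (n-1)/n before any * and (n-1)/(2n) after one.  The orbifold Euler
    characteristic is 2 minus this total."""
    total = 0.0
    seen_star = False
    for ch in signature:
        if ch == "o":
            total += 2
        elif ch == "*":
            total += 1
            seen_star = True
        elif ch == "x":
            total += 1
        elif ch.isdigit():
            n = int(ch)
            total += (n - 1) / (2 * n) if seen_star else (n - 1) / n
    return total

print(orbifold_cost("*632"))  # 2.0 -> a wallpaper (Euclidean plane) group
print(orbifold_cost("442"))   # 2.0 -> a wallpaper group
print(orbifold_cost("*22"))   # 1.5 -> spherical; group order is 2/(2-1.5) = 4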
Each symbol corresponds to a distinct transformation:
an
|
https://en.wikipedia.org/wiki/Hub%20dynamo
|
A hub dynamo is a small electrical generator built into the hub of a bicycle wheel that is usually used to power lights. Often the hub "dynamo" is not actually a dynamo, which creates DC, but a low-power magneto that creates AC. Most modern hub dynamos are regulated to 3 watts at 6 volts, although some will drive up to 6 watts at 12 volts.
Models
The market was largely pioneered by Sturmey-Archer with their Dynohub of the 1930s–1970s. This competed effectively with contemporaneous bottle dynamos and bottom-bracket generators, but the Dynohub was heavy with its steel housing and was discontinued in the 1980s. Around 2009, Sturmey-Archer released new hub dynamo/drum brake units with an aluminum housing, designated X-FDD and XL-FDD.
The Schmidt Original Nabendynamo (SON) can power two 6-volt lamps in series at speeds above about , and Schmidt manufactures lamps designed to facilitate this. These lamps have optics based on the Bisy FL road lights. The efficiency of the SON is quoted by the manufacturers at 65% (so just over 5 W of the rider's output is diverted to produce 3 W of electrical power) but this applies at only . At higher speeds the efficiency falls. Bicycle dynamos use permanent magnets, which eliminates the need for a battery to excite the field and initiate electrical generation.
Shimano offers a variety of hub dynamos under the "Nexus" brand, such as the DH-3N70/DH-3N71, advertised as having significantly less drag than the Nexus NX-30.
SRAM manufactured the i-Light hub dynamo until 2017. The D7 series was available for both rim and disc brakes while the D3 series featured several rim brake varieties. In a 2006 review by the German Stiftung Warentest, the efficiency at of a D1 series i-Light hub dynamo was 66%, 10% better than a SON-28.
SR Suntour offered the DH-CT-630 hub dynamo series with integrated overvoltage protection. It was apparently discontinued in 2010, as it is absent from 2011 and later SR Suntour catalogs.
SP Dynamo Systems offer
|
https://en.wikipedia.org/wiki/Error%20threshold%20%28evolution%29
|
In evolutionary biology and population genetics, the error threshold (or critical mutation rate) is a limit on the number of base pairs a self-replicating molecule may have before mutation will destroy the information in subsequent generations of the molecule. The error threshold is crucial to understanding "Eigen's paradox".
The error threshold is a concept in the origins of life (abiogenesis), in particular of very early life, before the advent of DNA. It is postulated that the first self-replicating molecules might have been small ribozyme-like RNA molecules. These molecules consist of strings of base pairs or "digits", and their order is a code that directs how the molecule interacts with its environment. All replication is subject to mutation error. During the replication process, each digit has a certain probability of being replaced by some other digit, which changes the way the molecule interacts with its environment, and may increase or decrease its fitness, or ability to reproduce, in that environment.
Fitness landscape
It was noted by Manfred Eigen in his 1971 paper (Eigen 1971) that this mutation process places a limit on the number of digits a molecule may have. If a molecule exceeds this critical size, the effect of the mutations becomes overwhelming and a runaway mutation process will destroy the information in subsequent generations of the molecule. The error threshold is also controlled by the "fitness landscape" for the molecules. The fitness landscape is characterized by the two concepts of height (=fitness) and distance (=number of mutations). Similar molecules are "close" to each other, and molecules that are fitter than others and more likely to reproduce, are "higher" in the landscape.
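This limit is often summarized by the estimate L < ln(σ) / (1 − q), where q is the per-digit copying fidelity and σ the selective superiority of the fittest sequence. The following Python sketch evaluates that standard form; it is an illustration of the usual textbook estimate, not a result derived in this excerpt:

import math

def max_sequence_length(fidelity_q: float, superiority_sigma: float) -> float:
    """Eigen's error-threshold estimate: information is maintained only
    if the sequence length L satisfies L < ln(sigma) / (1 - q)."""
    return math.log(superiority_sigma) / (1.0 - fidelity_q)

# With 99.9% per-digit fidelity and a 20-fold fitness advantage,
# genomes much longer than ~3000 digits cannot be maintained.
print(round(max_sequence_length(0.999, 20.0)))  # ~2996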
If a particular sequence and its neighbors have a high fitness, they will form a quasispecies and will be able to support longer sequence lengths than a fit sequence with few fit neighbors, or a less fit neighborhood of sequences. Also, it was noted by Wi
|
https://en.wikipedia.org/wiki/Dissection%20problem
|
In geometry, a dissection problem is the problem of partitioning a geometric figure (such as a polytope or ball) into smaller pieces that may be rearranged into a new figure of equal content. In this context, the partitioning is called simply a dissection (of one polytope into another). It is usually required that the dissection use only a finite number of pieces. Additionally, to avoid set-theoretic issues related to the Banach–Tarski paradox and Tarski's circle-squaring problem, the pieces are typically required to be well-behaved. For instance, they may be restricted to being the closures of disjoint open sets.
The Bolyai–Gerwien theorem states that any polygon may be dissected into any other polygon of the same area, using interior-disjoint polygonal pieces. It is not true, however, that any polyhedron has a dissection into any other polyhedron of the same volume using polyhedral pieces (see Dehn invariant). Such a dissection is possible for any two honeycombs (such as the cubic honeycomb) in three dimensions and for any two zonohedra of equal volume (in any dimension).
A partition into triangles of equal area is called an equidissection. Most polygons cannot be equidissected, and those that can often have restrictions on the possible numbers of triangles. For example, Monsky's theorem states that there is no odd equidissection of a square.
See also
Dissection puzzle
Hilbert's third problem
Hinged dissection
References
External links
David Eppstein, Dissection Tiling.
Discrete geometry
Euclidean geometry
Geometric dissection
Polygons
Polyhedra
Polytopes
Mathematical problems
|
https://en.wikipedia.org/wiki/NEC%20V25
|
The NEC V25 (μPD70320) is the microcontroller version of the NEC V20 processor, manufactured by NEC Corporation. Features include:
NEC V20 core: 8-bit external data path, 20-bit address bus, machine code compatible with the Intel 8088
Timers
Internal interrupt controller
Dual-channel UART and baud rate generator for serial communications
It was officially phased out by NEC in early 2003.
References
Microcontrollers
V25
16-bit microprocessors
|
https://en.wikipedia.org/wiki/Eva%20%28social%20network%29
|
eva is a video social network that allows users to record and post short, spontaneous videos from their mobile phones.
Overview
eva is a mobile app and social networking service that was launched in summer 2015 by Forbidden Technologies plc, creator of the cloud-based video editing software FORscene. Its co-founders include Stephen B. Streater (founder of Eidos Interactive), Aziz Musa (CEO of Forbidden Technologies plc) and Jens Wikholm (celebrity and portrait photographer). After a beta launch in London in July 2015 and a subsequent regional launch in Melbourne, eva launched globally in Los Angeles on 1 October 2015. It is available on iOS and can be downloaded from the App Store.
Using eva
In order to make a video on eva, a user holds their thumb on the in-app record button and releases it when they are finished. The videos that users take are then automatically uploaded to their personal feed and the wider eva public feed, and grouped by interest topic so they directly become part of the right communities. Instead of being saved to users' phones, these videos are saved in the cloud, using the cloud-based video editing technology developed for FORscene. eva has been described by SourceWire as "beautiful, simple and addictive".
Press Coverage
Working with service design consultancy we are experience (or wae), eva created and launched its initial structure in a 30-day sprint that attracted press coverage.
In autumn 2015, eva, working with Chameleon PR, funded research into stereotypes surrounding bearded men; the research was picked up by over 100 global publications including the Huffington Post, the Independent, and Cosmopolitan magazine.
References
IOS software
Social networking services
British social networking websites
2015 software
Video software
|
https://en.wikipedia.org/wiki/James%20Wallace%20Black
|
James Wallace Black (February 10, 1825 – January 5, 1896), known professionally as J.W. Black, was an early American photographer whose career was marked by experimentation and innovation.
Biography
He was born on February 10, 1825, in Francestown, New Hampshire.
After trying his luck as a painter in Boston, he turned to photography, beginning as a daguerreotype plate polisher. He soon partnered with John Adams Whipple, a prolific Boston photographer and inventor. Black's photograph of abolitionist John Brown in 1859, the year of his insurrection at Harpers Ferry, is now in the National Portrait Gallery, Smithsonian Institution.
In March 1860, Black took a photograph of poet Walt Whitman when Whitman was in Boston to oversee the typesetting of his 1860 edition of Leaves of Grass. Black's studio at 173 Washington Street was less than a block from the publishing firm of Thayer and Eldridge, who apparently commissioned the photograph to promote the 1860 edition.
On October 13, 1860, two years after the French photographer Nadar conducted his earliest experiments in balloon flight, Black made the first successful aerial photographs in the United States in collaboration with the balloon navigator Samuel Archer King on King's hot-air balloon, the Queen of the Air. He photographed Boston from the balloon, taking eight glass-plate negatives. One good print resulted, which the photographer entitled Boston, as the Eagle and the Wild Goose See It. This was the first clear aerial image of a city.
Almost immediately, aerial reconnaissance would be put to use by the Union and Confederate Armies during the American Civil War, though there is no credible evidence that aerial photography was successful.
Black later became the authority on the use of the magic lantern, a candlelight-powered projector that was a predecessor of today's slide projectors. By the late 1870s Black's business largely consisted of lantern slide production, including his famous imag
|
https://en.wikipedia.org/wiki/Marcel%20Vogel
|
Marcel Joseph Vogel (April 14, 1917 – February 12, 1991) was a research scientist working at the IBM San Jose Research Center for 27 years. He is sometimes referred to as Dr. Vogel, although this title was based on an honorary degree, not a Ph.D. Later in his career, he became interested in various theories of quartz crystals and other occult and esoteric fields of study.
Mainstream scientific work
It is claimed that Vogel started his research into luminescence while he was still in his teens. This research eventually led him to publish his thesis, Luminescence in Liquids and Solids and Their Practical Application, in collaboration with University of Chicago's Dr. Peter Pringsheim in 1943.
Two years after the publication, Vogel incorporated his own company, Vogel Luminescence, in San Francisco. For the next decade the firm developed a variety of new products: fluorescent crayons, tags for insecticides, a black light inspection kit to determine the secret trackways of rodents in cellars from their urine and the psychedelic colors popular in "new age" posters. In 1957, Vogel Luminescence was sold to Ultra Violet Products and Vogel joined IBM as a full-time research scientist. He retired from IBM in 1984.
In 1977 and 1978, Vogel participated in experiments with the Markovich Tesla Electrical Power Source, referred to as MTEPS, that was built by Peter T. Markovich.
He received 32 patents for his inventions up through his tenure at IBM. Among these was the magnetic coating for the 24" hard disk drive systems still in use. His areas of expertise, besides luminescence, were phosphor technology, magnetics and liquid crystal systems.
At Vogel's February 14, 1991 funeral, IBM researcher and Sacramento, California physician Bernard McGinity, M.D. said of him, "He made his mark because of the brilliance of his mind, his prolific ideas, and his seemingly limitless creativity."
Esoteric and occult studies
Crystals
Vogel was a proponent of crystal healing, and believed cut
|
https://en.wikipedia.org/wiki/Commodore%20LCD
|
The Commodore LCD (sometimes known in short as the CLCD) is an LCD-equipped laptop made by Commodore International. It was presented at the January 1985 Consumer Electronics Show, but never released. The CLCD was not directly compatible with other Commodore home computers, but its built-in Commodore BASIC 3.6 interpreter could run programs written in the Commodore 128's BASIC 7.0, as long as these programs did not include system-specific POKE commands. Like the Commodore 264 and Radio Shack TRS-80 Model 100 series computers, the CLCD had several built-in ROM-based office application programs.
The CLCD featured a 1 MHz Rockwell 65C102 CPU (a CMOS 6502 variant) and 32 KB of RAM (expandable to 64 KB internally). The BASIC interpreter and application programs were built into 96 KB of ROM.
References
External links
Old-Computers.Com: Commodore LCD
Secret Weapons of Commodore: The Commodore LCD
Commodore computers
Early laptops
6502-based home computers
Prototypes
|
https://en.wikipedia.org/wiki/Sativum
|
Sativa, sativus, and sativum are Latin botanical adjectives meaning cultivated. These epithets are often associated botanically with plants that promote good health and are used to designate certain seed-grown domestic crops.
Usage
Sativa (ending in -a) is the feminine form of the adjective, but masculine (-us) and neuter (-um) endings are also used to agree with the gender of the nouns they modify, as in the masculine Crocus sativus and the neuter Pisum sativum.
Examples
Examples of crops incorporating this word and its variations into their Latin name include:
Allium sativum, garlic.
Avena sativa, the common oat.
Cannabis sativa, one of three forms of cannabis.
Castanea sativa, sweet chestnut.
Crocus sativus, the saffron crocus.
Cucumis sativus, the cucumber.
Daucus carota subsp. sativus, the carrot, a plant species.
Eruca sativa, the rocket or arugula, a leaf vegetable.
Lactuca sativa, lettuce.
Medicago sativa, alfalfa.
Nigella sativa, a flower whose edible seeds are sometimes known as "black cumin" or "black caraway".
Oryza sativa, rice.
Pastinaca sativa, parsnip, a root vegetable closely related to the carrot and parsley; all belong to the family Apiaceae.
Pisum sativum, pea plant.
See also
8 Foot Sativa, a New Zealand–based metal band
Sativa (Jhené Aiko song)
Sativanorte and Sativasur, towns/municipalities in the Colombian department of Boyacá
Sativum (disambiguation)
Sativus (disambiguation)
References
Latin biological phrases
Horticulture
|
https://en.wikipedia.org/wiki/Icebox
|
An icebox (also called a cold closet) is a compact non-mechanical refrigerator which was a common early-twentieth-century kitchen appliance before the development of safely powered refrigeration devices. Before the development of electric refrigerators, iceboxes were referred to by the public as "refrigerators". Only after the invention of the modern electric refrigerator did early non-electric refrigerators become known as iceboxes. The terms ice box and refrigerator were used interchangeably in advertising as long ago as 1848.
Origin
The first recorded use of refrigeration technology dates back to 1775 BC in the Sumerian city of Terqa. It was there that the region's king, Zimri-Lim, began the construction of an elaborate ice house fitted with a sophisticated drainage system and shallow pools to freeze water in the night. Using ice for cooling and preservation was nothing new by then, but these ice houses paved the way for their smaller counterpart, the icebox, to come into existence. The traditional kitchen icebox dates back to the days of ice harvesting, whose heyday ran from the mid-19th century until the 1930s, when the electric refrigerator was introduced for home use. Most municipally consumed ice was harvested in winter from snow-packed areas or frozen lakes, stored in ice houses, and delivered domestically. In 1827 the commercial ice cutter was invented, increasing the ease and efficiency of harvesting natural ice. This invention made ice cheaper and in turn helped the icebox become more common.
Up until then, iceboxes for domestic use were not mass manufactured. By the 1840s, however, various companies including the Baldwin Refrigerator Company and the Ranney Refrigerator Company, and later Sears, started making home iceboxes commercially. D. Eddy & Son of Boston is considered to be the first company to produce iceboxes in mass numbers. As many Americans desired big iceboxes, some companies, such as the Boston Scientific Refrigerator Company, introduc
|
https://en.wikipedia.org/wiki/Zope%20Object%20Database
|
The Zope Object Database (ZODB) is an object-oriented database for transparently and persistently storing Python objects. It is included as part of the Zope web application server, but can also be used independently of Zope.
Features of the ZODB include: transactions, history/undo, transparently pluggable storage, built-in caching, multiversion concurrency control (MVCC), and scalability across a network (using ZEO).
History
Created by Jim Fulton of Zope Corporation in the late 1990s.
Started as simple Persistent Object System (POS) during Principia development (which later became Zope)
ZODB 3 was renamed when a significant architecture change landed.
ZODB 4 was a short lived project to re-implement the entire ZODB 3 package using 100% Python.
Implementation
Basics
ZODB stores Python objects using an extended version of Python's built-in object persistence (pickle). A ZODB database has a single root object (normally a dictionary), which is the only object directly made accessible by the database. All other objects stored in the database are reached through the root object. Objects referenced by an object stored in the database are automatically stored in the database as well.
ZODB supports concurrent transactions using MVCC and tracks changes to objects on a per-object basis. Only changed objects are committed. Transactions are non-destructive by default and can be reverted.
Example
For example, say we have a car described using 3 classes Car, Wheel and Screw. In Python, this could be represented the following way:
import persistent

class Car(persistent.Persistent): pass      # deriving from Persistent lets
class Wheel(persistent.Persistent): pass    # ZODB track attribute changes
class Screw(persistent.Persistent): pass

myCar = Car()
myCar.wheel1 = Wheel()
myCar.wheel2 = Wheel()
for wheel in (myCar.wheel1, myCar.wheel2):
    wheel.screws = [Screw(), Screw()]
If the variable mycar is the root of persistence, then:
import transaction

zodb['mycar'] = myCar    # 'zodb' is the root mapping obtained from the database
transaction.commit()     # persist the object graph reachable from the root
This puts all of the object instances (car, wheel, screws etc.) into storage, which can be retrieved later. If another program gets a connection to the database through the
|
https://en.wikipedia.org/wiki/Pre-order
|
A pre-order is an order placed for an item that has not yet been released. Pre-ordering arose because customers found it difficult to obtain popular items in stores once released; retailers therefore began letting customers reserve a personal copy before release, a practice that has proven very successful.
Pre-orders allow consumers to guarantee immediate shipment on release, manufacturers can gauge how much demand there will be and thus the size of initial production runs, and sellers can be assured of minimum sales. Additionally, high pre-order rates can be used to increase sales further.
Pre-order incentive
Pre-order incentive, also known as pre-order bonus, is a marketing tactic in which a retailer or manufacturer/publisher of a product (usually a book or video game) encourages buyers to reserve a copy of the product at the store prior to its release.
Reasons vary, but typically, publishers wish to ensure strong initial sales for a product, and the offered incentive is used to induce shoppers who might otherwise wait for positive reviews or a specific shopping period, like the holiday season, to commit to a purchase. Having paid for part or all of the purchase when placing the order, the consumers will usually complete the transaction shortly after the product's release, often on its first day in stores. Individual stores or retail chains may also offer bonuses for a popularly anticipated product to ensure that the customer chooses to buy at that location, rather than from a competitor.
The pre-order bonus may be as simple as a discount on the item's purchase price or other related merchandise, another marketing strategy, or it may be an actual item or set of items. The items may be related merchandise or exclusive items available only through the pre-order program.
In video games
Until around 2000, the primary distribution method for video games was physical media such as CD-ROMs, DVDs, or game cartridges, including packaging an
|
https://en.wikipedia.org/wiki/List%20of%20combinatorial%20computational%20geometry%20topics
|
List of combinatorial computational geometry topics enumerates the topics of computational geometry that state problems in terms of geometric objects as discrete entities, so that the methods of their solution are mostly theories and algorithms of a combinatorial character.
See List of numerical computational geometry topics for another flavor of computational geometry that deals with geometric objects as continuous entities and applies methods and algorithms of nature characteristic to numerical analysis.
Construction/representation
Boolean operations on polygons
Convex hull
Hyperplane arrangement
Polygon decomposition
Polygon triangulation
Minimal convex decomposition
Minimal convex cover problem (NP-hard)
Minimal rectangular decomposition
Tessellation problems
Shape dissection problems
Straight skeleton
Stabbing line problem
Triangulation
Delaunay triangulation
Point-set triangulation
Polygon triangulation
Voronoi diagram
Extremal shapes
Minimum bounding box (Smallest enclosing box, Smallest bounding box)
2-D case: Smallest bounding rectangle (Smallest enclosing rectangle)
There are two common variants of this problem.
In many areas of computer graphics, the bounding box (often abbreviated to bbox) is understood to be the smallest box delimited by sides parallel to coordinate axes which encloses the objects in question.
In other applications, such as packaging, the problem is to find the smallest box the object (or objects) may fit in ("packaged"). Here the box may assume an arbitrary orientation with respect to the "packaged" objects.
Smallest bounding sphere (Smallest enclosing sphere)
2-D case: Smallest bounding circle
Largest empty rectangle (Maximum empty rectangle)
Largest empty sphere
2-D case: Maximum empty circle (largest empty circle)
Interaction/search
Collision detection
Line segment intersection
Point location
Point in polygon
Polygon intersection
Range searching
Orthogonal range searching
Simplex range searchi
|
https://en.wikipedia.org/wiki/Uncorrelated%20asymmetry
|
In game theory an uncorrelated asymmetry is an arbitrary asymmetry in a game which is otherwise symmetrical. The name 'uncorrelated asymmetry' is due to John Maynard Smith who called payoff relevant asymmetries in games with similar roles for each player 'correlated asymmetries' (note that any game with correlated asymmetries must also have uncorrelated asymmetries).
The explanation of an uncorrelated asymmetry usually makes reference to "informational asymmetry", which may confuse some readers, since games which may have uncorrelated asymmetries are still games of complete information. What differs between the same game with and without an uncorrelated asymmetry is whether the players know which role they have been assigned. If players in a symmetric game know whether they are Player 1, Player 2, etc. (or row vs. column player in a bimatrix game) then an uncorrelated asymmetry exists. If the players do not know which player they are then no uncorrelated asymmetry exists. The information asymmetry is that one player believes he is player 1 and the other believes he is player 2. Therefore, "informational asymmetry" does not refer to knowledge in the sense of an information set in an extensive form game.
The concept of uncorrelated asymmetries is important in determining which Nash equilibria are evolutionarily stable strategies in discoordination games such as the game of chicken. In these games the mixing Nash is the ESS if there is no uncorrelated asymmetry, and the pure conditional Nash equilibria are ESSes when there is an uncorrelated asymmetry.
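The role of the asymmetry can be illustrated numerically. The following Python sketch uses standard hawk-dove (chicken) payoffs with invented values V = 2 (resource value) and C = 4 (cost of a fight); it checks that in the mixed equilibrium both pure strategies earn the same payoff, which is why, once roles become observable through an uncorrelated asymmetry, the role-conditional strategies discussed below can displace the mixed equilibrium:

# Hawk-dove payoffs to the row player; V < C so neither pure strategy
# is an equilibrium against itself.
V, C = 2.0, 4.0
payoff = {("H", "H"): (V - C) / 2, ("H", "D"): V,
          ("D", "H"): 0.0,         ("D", "D"): V / 2}

# Without an uncorrelated asymmetry, the symmetric mixed equilibrium plays
# Hawk with probability p = V/C, equalizing the payoffs of Hawk and Dove.
p = V / C
hawk = p * payoff[("H", "H")] + (1 - p) * payoff[("H", "D")]
dove = p * payoff[("D", "H")] + (1 - p) * payoff[("D", "D")]
print(p, hawk, dove)  # 0.5 0.5 0.5 -- both pure strategies do equally well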
The usual applied example of an uncorrelated asymmetry is territory ownership in the hawk-dove game. Even if the two players ("owner" and "intruder") have the same payoffs (i.e., the game is payoff symmetric), the territory owner will play Hawk, and the intruder Dove, in what is known as the 'Bourgeois strategy' (the reverse is also an ESS known as the 'anti-bourgeois strategy', but makes little biologi
|
https://en.wikipedia.org/wiki/List%20of%20numerical%20computational%20geometry%20topics
|
List of numerical computational geometry topics enumerates the topics of computational geometry that deal with geometric objects as continuous entities and apply methods and algorithms characteristic of numerical analysis. This area is also called "machine geometry", computer-aided geometric design, and geometric modelling.
See List of combinatorial computational geometry topics for another flavor of computational geometry that states problems in terms of geometric objects as discrete entities and hence the methods of their solution are mostly theories and algorithms of combinatorial character.
Curves
In the list of curves topics, the following ones are fundamental to geometric modelling.
Parametric curve
Bézier curve
Spline
Hermite spline
Beta spline
B-spline
Higher-order spline
NURBS
Contour line
Surfaces
Bézier surface
Isosurface
Parametric surface
Other
Level-set method
Computational topology
Mathematics-related lists
Geometric algorithms
Geometry
|
https://en.wikipedia.org/wiki/Sipser%E2%80%93Lautemann%20theorem
|
In computational complexity theory, the Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically Σ2 ∩ Π2.
In 1983, Michael Sipser showed that BPP is contained in the polynomial time hierarchy. Péter Gács showed that BPP is actually contained in Σ2 ∩ Π2. Clemens Lautemann contributed by giving a simple proof of BPP’s membership in Σ2 ∩ Π2, also in 1983. It is conjectured that in fact BPP=P, which is a much stronger statement than the Sipser–Lautemann theorem.
Proof
Here we present Lautemann's proof. Without loss of generality, a machine M ∈ BPP with error ≤ 2^−|x| can be chosen. (All BPP problems can be amplified to reduce the error probability exponentially.) The basic idea of the proof is to define a Σ2 sentence that is equivalent to stating that x is in the language, L, defined by M, by using a set of transforms of the random variable inputs.
Since the output of M depends on random input, as well as the input x, it is useful to define which random strings produce the correct output as A(x) = {r | M(x,r) accepts}. The key to the proof is to note that when x ∈ L, A(x) is very large and when x ∉ L, A(x) is very small. By using bitwise parity, ⊕, a set of transforms can be defined as A(x) ⊕ t={r ⊕ t | r ∈ A(x)}. The first main lemma of the proof shows that the union of a small finite number of these transforms will contain the entire space of random input strings. Using this fact, a Σ2 sentence and a Π2 sentence can be generated that is true if and only if x ∈ L (see conclusion).
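The covering idea behind the first lemma can be checked numerically. The following Python sketch is an informal experiment, not part of the proof: it builds a large random A inside {0,1}^n and verifies that n random XOR-translates usually cover the whole space:

import random

n = 10
space = range(2 ** n)
# Let A contain all but roughly a 1/n fraction of the strings,
# mimicking a large A(x).
A = set(random.sample(space, (2 ** n) - (2 ** n) // n))

translations = [random.randrange(2 ** n) for _ in range(n)]
covered = set()
for t in translations:
    covered |= {r ^ t for r in A}   # the transform A ⊕ t

print(len(covered) == 2 ** n)  # usually True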
Lemma 1
The general idea of lemma one is to prove that if A(x) covers a large part of the random space then there exists a small set of translations that will cover the entire random space. In more mathematical language:
If |A(x)| / 2^|r| ≥ 1 − 2^−|x|, then there exist t1, t2, ..., t|r| ∈ {0,1}^|r| such that ⋃i (A(x) ⊕ ti) = {0,1}^|r|.
Proof. Randomly pick t1, t2, ..., t|r|. Let S = ⋃i (A(x) ⊕ ti) (the union of all transforms of A(x)).
So, for all r
|
https://en.wikipedia.org/wiki/Duophonic
|
Duophonic sound was a trade name for a type of audio signal processing used by Capitol Records on certain releases and re-releases of mono recordings issued during the 1960s and 1970s. In this process monaural recordings were reprocessed into a type of artificial stereo. Generically, the sound is commonly known as fake stereo or mock stereo.
This was done by splitting the mono signal into two channels and desynchronizing them by fractions of a second, delaying one channel's signal by means of delay lines and other circuits. The bass frequencies were then cut in one channel with a high-pass filter, and the treble frequencies cut in the other channel with a low-pass filter. The result was an artificial stereo effect, without giving the listener the true directional sound characteristics of real stereo. In some cases, the effect was enhanced with reverberation and other technical tricks, sometimes adding stereo echo to mono tracks in an attempt to fool the listener.
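A rough software simulation of this process is easy to write. The following Python sketch, using NumPy and SciPy with invented delay and cutoff values, derives a fake-stereo pair from a mono signal:

import numpy as np
from scipy.signal import butter, lfilter

def duophonic(mono: np.ndarray, rate: int = 44100) -> np.ndarray:
    delay = int(0.012 * rate)                        # ~12 ms desynchronization
    left = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    right = mono.copy()
    b, a = butter(2, 300 / (rate / 2), "highpass")   # cut bass in one channel
    left = lfilter(b, a, left)
    b, a = butter(2, 3000 / (rate / 2), "lowpass")   # cut treble in the other
    right = lfilter(b, a, right)
    return np.stack([left, right], axis=1)           # the fake-stereo pair

tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(duophonic(tone).shape)  # (44100, 2)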
Capitol employed this technique in order to increase its inventory of stereo LPs, thus satisfying retailer demand for more stereo content (and helping promote the sale of stereo receivers and turntables). For nearly ten years Capitol used the banner "DUOPHONIC – For Stereo Phonographs Only" to differentiate the Duophonic LPs from its true stereo LPs.
Capitol began using the process in June 1961 and continued its practice into the 1970s. It was used for some of the biggest Capitol releases, including albums by The Beach Boys and Frank Sinatra. Over the years, however, some Duophonic tapes were confused with true stereo recordings in Capitol Records' vaults, and were reissued on CD throughout the 1980s and 1990s. Capitol intentionally reissued some of the Beatles' Duophonic mixes on The Capitol Albums, Volume 1 and The Capitol Albums, Volume 2, in 2004 and 2006, respectively.
On rare occasions some artists deliberately used fake stereo to achieve an intended artistic effect. In such
|
https://en.wikipedia.org/wiki/Monopolin
|
Monopolin is a protein complex that in budding yeast is composed of the four proteins CSM1, HRR25, LRS4, and MAM1. Monopolin is required for the segregation of homologous centromeres to opposite poles of a dividing cell during anaphase I of meiosis. This occurs by bridging DSN1 kinetochore proteins to sister kinetochores within the centromere to physically fuse them and allow the microtubules to pull each homolog toward opposite spindle poles.
Molecular structure
Monopolin is composed of a 4 CSM1:2 LRS4 complex which forms a V-shaped structure with two globular heads at the ends, which are responsible for directly crosslinking sister kinetochores. Bound to each CSM1 head is a MAM1 protein which recruits one copy of the HRR25 kinase. The hydrophobic cavity on the CSM1 subunit allows the hydrophobic regions of the monopolin receptor and kinetochore protein, DSN1, to bind to and fuse the sister kinetochores. Microtubules can then attach to the kinetochores on the homologous centromeres and pull them toward opposite spindle poles to complete anaphase of meiosis I.
References
Proteins
Molecular biology
Protein complexes
|
https://en.wikipedia.org/wiki/Separatrix%20%28mathematics%29
|
In mathematics, a separatrix is the boundary separating two modes of behaviour in a differential equation.
Example: simple pendulum
Consider the differential equation describing the motion of a simple pendulum:

d²θ/dt² + (g/l) sin θ = 0,

where l denotes the length of the pendulum, g the gravitational acceleration and θ the angle between the pendulum and vertically downwards. In this system there is a conserved quantity H (the Hamiltonian), which is given by

H = (1/2)(dθ/dt)² − (g/l) cos θ.
With this defined, one can plot a curve of constant H in the phase space of the system. The phase space is a graph with θ along the horizontal axis and dθ/dt on the vertical axis. The type of resulting curve depends upon the value of H.
If H < −g/l then no curve exists (because dθ/dt would be imaginary).
If −g/l < H < g/l then the curve will be a simple closed curve which is nearly circular for small H and becomes "eye" shaped when H approaches the upper bound. These curves correspond to the pendulum swinging periodically from side to side.
If H > g/l then the curve is open, and this corresponds to the pendulum forever swinging through complete circles.
In this system the separatrix is the curve that corresponds to H = g/l. It separates — hence the name — the phase space into two distinct areas, each with a distinct type of motion. The region inside the separatrix has all those phase space curves which correspond to the pendulum oscillating back and forth, whereas the region outside the separatrix has all the phase space curves which correspond to the pendulum continuously turning through vertical planar circles.
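The curves of constant H, including the separatrix, can be drawn directly as contours of H in the phase plane. The following Python sketch assumes g/l = 1 purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

g_over_l = 1.0
theta = np.linspace(-2 * np.pi, 2 * np.pi, 400)
theta_dot = np.linspace(-3, 3, 400)
T, TD = np.meshgrid(theta, theta_dot)
H = 0.5 * TD**2 - g_over_l * np.cos(T)

# Closed curves (|H| < g/l), open curves (H > g/l), and the separatrix.
plt.contour(T, TD, H, levels=[-0.5, 0.5, 2.0], colors="gray")
plt.contour(T, TD, H, levels=[g_over_l], colors="red")  # the separatrix
plt.xlabel("theta")
plt.ylabel("d(theta)/dt")
plt.show()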
Example: FitzHugh–Nagumo model
In the FitzHugh–Nagumo model, when the linear nullcline pierces the cubic nullcline at the left, middle, and right branch once each, the system has a separatrix. Trajectories to the left of the separatrix converge to the left stable equilibrium, and similarly for the right. The separatrix itself is the stable manifold for the saddle point in the middle. Details are found in the FitzHugh–Nagumo model article.
The separatrix is cle
|
https://en.wikipedia.org/wiki/Dihedral%20symmetry%20in%20three%20dimensions
|
In geometry, dihedral symmetry in three dimensions is one of three infinite sequences of point groups in three dimensions which have a symmetry group that as an abstract group is a dihedral group Dihn (for n ≥ 2).
Types
There are 3 types of dihedral symmetry in three dimensions, each shown below in 3 notations: Schönflies notation, Coxeter notation, and orbifold notation.
Chiral
Dn, [n,2]+, (22n) of order 2n – dihedral symmetry or para-n-gonal group (abstract group: Dihn).
Achiral
Dnh, [n,2], (*22n) of order 4n – prismatic symmetry or full ortho-n-gonal group (abstract group: Dihn × Z2).
Dnd (or Dnv), [2n,2+], (2*n) of order 4n – antiprismatic symmetry or full gyro-n-gonal group (abstract group: Dih2n).
For a given n, all three have n-fold rotational symmetry about one axis (rotation by an angle of 360°/n does not change the object), and 2-fold rotational symmetry about a perpendicular axis, hence about n of those. For n = ∞, they correspond to three Frieze groups. Schönflies notation is used, with Coxeter notation in brackets, and orbifold notation in parentheses. The term horizontal (h) is used with respect to a vertical axis of rotation.
In 2D, the symmetry group Dn includes reflections in lines. When the 2D plane is embedded horizontally in a 3D space, such a reflection can either be viewed as the restriction to that plane of a reflection through a vertical plane, or as the restriction to the plane of a rotation about the reflection line, by 180°. In 3D, the two operations are distinguished: the group Dn contains rotations only, not reflections. The other group is pyramidal symmetry Cnv of the same order, 2n.
With reflection symmetry in a plane perpendicular to the n-fold rotation axis, we have Dnh, [n,2], (*22n).
Dnd (or Dnv), [2n,2+], (2*n) has vertical mirror planes between the horizontal rotation axes, not through them. As a result, the vertical axis is a 2n-fold rotoreflection axis.
Dnh is the symmetry group for a regular n-sided prism and also for a
|
https://en.wikipedia.org/wiki/The%20Lady%20Tasting%20Tea
|
The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century () is a book by David Salsburg about the history of modern statistics and the role it played in the development of science and industry.
The title comes from the "lady tasting tea", an example from the famous book, The Design of Experiments, by Ronald A. Fisher. Regarding Fisher's example, the statistician Debabrata Basu wrote that "the famous case of the 'lady tasting tea'" was "one of the two supporting pillars [...] of the randomization analysis of experimental data".
Summary
The book discusses the statistical revolution which took place in the twentieth century, where science shifted from a deterministic view (Clockwork universe) to a perspective concerned primarily with probabilities and distributions and parameters. Salsburg does this through a collection of stories about the people who were fundamental in the change, starting with men like R.A. Fisher and Karl Pearson. He discusses at length how many of these people had their own philosophy of statistics, and in particular their own understanding of statistical significance. Throughout, he introduces in a very nontechnical fashion a variety of statistical ideas and methods, such as maximum likelihood estimation and bootstrapping.
Reception
The book was generally well-received, receiving coverage in a variety of medical and statistical journals. Reviewers from the medical field enjoyed Salsburg's coverage of Fisher's opposition to early research on the health effects of tobacco. Critics disagreed with certain opinions that Salsburg voiced, like his barebones portrayal of Bayesian statistics and his seeming disdain for pure mathematics. Nevertheless, almost all reviewers appreciated the interesting read and recommended the book to people in their field as well as a general audience.
List of scholars mentioned
The book discusses a wide variety of statisticians, mathematicians, as well as other scientists and scholars.
|
https://en.wikipedia.org/wiki/Tombstone%20%28programming%29
|
Tombstones are a mechanism to detect dangling pointers and mitigate the problems they can cause in computer programs. Dangling pointers can appear in certain computer programming languages, e.g. C, C++ and assembly languages.
A tombstone is a structure that acts as an intermediary between a pointer and its target, often heap-dynamic data in memory. The pointer – sometimes called the handle – points only at tombstones and never to its actual target. When the data is deallocated, the tombstone is set to a null (or, more generally, to a value that is illegal for a pointer in the given runtime environment), indicating that the variable no longer exists. This mechanism prevents the use of invalid pointers, which would otherwise access the memory area that once belonged to the now deallocated variable, although it may already contain other data, in turn leading to corruption of in-memory data. Depending on the operating system, the CPU can automatically detect such an invalid access (e.g. for the null value: a null pointer dereference error). This helps in analyzing the actual cause, a programming error, during debugging, and it can also be used to abort the program in production use, to prevent it from continuing with invalid data structures.
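The mechanism can be sketched in a few lines of Python; the names here are invented for illustration, and a real implementation would live in the language runtime rather than in user code:

class Tombstone:
    def __init__(self, target):
        self.target = target          # set to None once the data is deallocated

class Handle:
    """A 'pointer' that reaches its target only through a tombstone."""
    def __init__(self, value):
        self._stone = Tombstone(value)

    def deref(self):
        if self._stone.target is None:
            raise RuntimeError("dangling pointer: target was deallocated")
        return self._stone.target

    def free(self):
        self._stone.target = None     # mark the data as gone

h = Handle([1, 2, 3])
print(h.deref())   # [1, 2, 3]
h.free()
h.deref()          # raises RuntimeError instead of touching stale memory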
In more generalized terms, a tombstone can be understood as a marker for "this data is no longer here". For example, in filesystems it may be efficient when deleting files to mark them as "dead" instead of immediately reclaiming all their data blocks.
The downsides of using tombstones include a computational overhead and additional memory consumption: extra processing is necessary to follow the path from the pointer to data through the tombstone, and extra memory is necessary to retain tombstones for every pointer throughout the program. One other problem is that all the code that needs to work with the pointers in question needs to be implemented to use the tombstone mechanism.
Among popular programming languages, C++ implemen
|
https://en.wikipedia.org/wiki/Interprime
|
In mathematics, an interprime is the average of two consecutive odd primes. For example, 9 is an interprime because it is the average of 7 and 11. The first interprimes are:
4, 6, 9, 12, 15, 18, 21, 26, 30, 34, 39, 42, 45, 50, 56, 60, 64, 69, 72, 76, 81, 86, 93, 99, ...
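The sequence can be computed directly from the definition, as in the following Python sketch using sympy's prime utilities:

from sympy import nextprime

def interprimes(count: int):
    p, out = 3, []                 # start from the first odd prime pair (3, 5)
    while len(out) < count:
        q = nextprime(p)
        out.append((p + q) // 2)   # average of two consecutive odd primes
        p = q
    return out

print(interprimes(10))  # [4, 6, 9, 12, 15, 18, 21, 26, 30, 34]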
Interprimes cannot be prime themselves (otherwise the primes would not have been consecutive).
There are infinitely many primes and therefore also infinitely many interprimes. The largest known interprime may be the 388342-digit n = 2996863034895 · 2^1290000, where n + 1 is the largest known twin prime.
See also
Prime gap
Twin primes
Cousin prime
Sexy prime
Balanced prime – a prime number with equal-sized prime gaps above and below it
References
Integer sequences
Prime numbers
|
https://en.wikipedia.org/wiki/Ioctl
|
In computing, ioctl (an abbreviation of input/output control) is a system call for device-specific input/output operations and other operations which cannot be expressed by regular system calls. It takes a parameter specifying a request code; the effect of a call depends completely on the request code. Request codes are often device-specific. For instance, a CD-ROM device driver which can instruct a physical device to eject a disc would provide an ioctl request code to do so. Device-independent request codes are sometimes used to give userspace access to kernel functions which are only used by core system software or still under development.
The ioctl system call first appeared in Version 7 of Unix under that name. It is supported by most Unix and Unix-like systems, including Linux and macOS, though the available request codes differ from system to system. Microsoft Windows provides a similar function, named "DeviceIoControl", in its Win32 API.
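From Python, the same system call is reachable through the standard library's fcntl module. The following sketch, for a Unix-like system where standard output is a terminal, issues the well-known TIOCGWINSZ request to ask the terminal driver for its window size:

import fcntl
import struct
import sys
import termios

packed = struct.pack("HHHH", 0, 0, 0, 0)   # rows, cols, xpixel, ypixel
result = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, packed)
rows, cols, _, _ = struct.unpack("HHHH", result)
print(rows, cols)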
Background
Conventional operating systems can be divided into two layers, userspace and the kernel. Application code such as a text editor resides in userspace, while the underlying facilities of the operating system, such as the network stack, reside in the kernel. Kernel code handles sensitive resources and implements the security and reliability barriers between applications; for this reason, user mode applications are prevented by the operating system from directly accessing kernel resources.
Userspace applications typically make requests to the kernel by means of system calls, whose code lies in the kernel layer. A system call usually takes the form of a "system call vector", in which the desired system call is indicated with an index number. For instance, exit() might be system call number 1, and write() number 4. The system call vector is then used to find the desired kernel function for the request. In this way, conventional operating systems typically provide several hundred system calls to the userspace.
Thoug
|
https://en.wikipedia.org/wiki/Layered%20Service%20Provider
|
Layered Service Provider (LSP) is a deprecated feature of the Microsoft Windows Winsock 2 Service Provider Interface (SPI). A Layered Service Provider is a DLL that uses Winsock APIs to attempt to insert itself into the TCP/IP protocol stack. Once in the stack, a Layered Service Provider can intercept and modify inbound and outbound Internet traffic. It allows processing of all the TCP/IP traffic taking place between the Internet and the applications that are accessing the Internet (such as a web browser, the email client, etc.). For example, it could be used by malware to redirect web browsers to rogue websites, or to block access to sites like Windows Update. Alternatively, a computer security program could scan network traffic for viruses or other threats. The Winsock Service Provider Interface (SPI) API provides a mechanism for layering providers on top of each other. Winsock LSPs are available for a range of useful purposes, including parental controls and Web content filtering. The parental controls web filter in Windows Vista is an LSP. The layering order of all providers is kept in the Winsock Catalog.
Details
Unlike the well-known Winsock 2 API, which is covered by numerous books, documentation, and samples, the Winsock 2 SPI is relatively unexplored. The Winsock 2 SPI is implemented by network transport service providers and namespace resolution service providers. The Winsock 2 SPI can be used to extend an existing transport service provider by implementing a Layered Service Provider. For example, quality of service (QoS) on Windows 98 and Windows 2000 is implemented as an LSP over the TCP/IP protocol stack. Another use for LSPs would be to develop specialized URL filtering software to prevent Web browsers from accessing certain sites, regardless of the browser installed on a desktop.
The Winsock 2 SPI allows software developers to create two different types of service providers—transport and namespace. Transport providers (commonly referred to as proto
|
https://en.wikipedia.org/wiki/Electromagnetic%20wave%20equation
|
The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field E or the magnetic field B, takes the form:

(vph² ∇² − ∂²/∂t²) E = 0
(vph² ∇² − ∂²/∂t²) B = 0

where vph = 1/√(με) is the speed of light (i.e. phase velocity) in a medium with permeability μ and permittivity ε, and ∇² is the Laplace operator. In a vacuum, vph = c₀ = 299,792,458 m/s, a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, B is called the magnetic flux density or magnetic induction. These equations imply that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation.
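The vacuum value can be checked numerically from the two constants, as in the following Python sketch (the constants are quoted to standard precision; μ₀ uses its classical, pre-2019 exact value):

import math

mu_0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
epsilon_0 = 8.8541878128e-12     # vacuum permittivity, F/m

c_0 = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(round(c_0))                # 299792458 m/s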
The origin of the electromagnetic wave equation
In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In Part VI of his 1864 paper titled Electromagnetic Theory of Light, Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented:
The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Ampère's circuital law with Faraday's law of induction.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations.
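In outline, that derivation runs as follows (a standard sketch in SI units, condensing the steps the truncated text goes on to describe). In a vacuum, with no charges or currents, Maxwell's equations are
\[
\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\]
Taking the curl of Faraday's law and applying the identity $\nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E}$, with $\nabla \cdot \mathbf{E} = 0$, gives
\[
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\]
a wave equation with propagation speed $c_0 = 1/\sqrt{\mu_0 \varepsilon_0}$. The same steps applied to the curl of Ampère's law yield the corresponding equation for $\mathbf{B}$.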
|
https://en.wikipedia.org/wiki/Odious%20number
|
In number theory, an odious number is a positive integer that has an odd number of 1s in its binary expansion. Non-negative integers that are not odious are called evil numbers.
In computer science, an odious number is said to have odd parity.
Examples
The first odious numbers are:
1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28, 31, 32, 35, 37, 38, ...
Properties
If $a(n)$ denotes the $n$th odious number (with $a(0) = 1$), then for all $n$, $a(a(n)) = 2a(n)$.
Every positive integer $n$ has an odious multiple that is at most $n(n+4)$. The numbers for which this bound is tight are exactly the Mersenne numbers with even exponents, the numbers of the form $2^{2k} - 1$, such as 3, 15, 63, etc. For these numbers, the smallest odious multiple is exactly $n(n+4)$.
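These properties are easy to check numerically. The following Python sketch is illustrative only (the helper names are mine, not from the article); it verifies the fixed-point identity and the tightness of the multiple bound for the first two numbers of the form 2**(2k) - 1:

def is_odious(n: int) -> bool:
    # Odd number of 1s in the binary expansion (odd parity).
    return bin(n).count("1") % 2 == 1

# a(0), a(1), a(2), ... = 1, 2, 4, 7, 8, 11, 13, 14, 16, 19, ...
odious = [n for n in range(1, 500) if is_odious(n)]

# Check a(a(n)) = 2 a(n) for the first few n, indexing from a(0) = 1.
for n in range(50):
    assert odious[odious[n]] == 2 * odious[n]

def smallest_odious_multiple(n: int) -> int:
    # Brute-force search for the least multiple of n with odd parity.
    k = 1
    while not is_odious(k * n):
        k += 1
    return k * n

# The bound n * (n + 4) is attained for 3 and 15 (2**2 - 1 and 2**4 - 1).
for n in (3, 15):
    assert smallest_odious_multiple(n) == n * (n + 4)

print("all checks passed")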
Related sequences
The odious numbers give the positions of the nonzero values in the Thue–Morse sequence. Every power of two is odious, because its binary expansion has only one nonzero bit. Except for 3, every Mersenne prime is odious, because its binary expansion consists of an odd prime number of consecutive nonzero bits.
The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums.
References
External links
Integer sequences
|
https://en.wikipedia.org/wiki/Metalorganic%20vapour-phase%20epitaxy
|
Metalorganic vapour-phase epitaxy (MOVPE), also known as organometallic vapour-phase epitaxy (OMVPE) or metalorganic chemical vapour deposition (MOCVD), is a chemical vapour deposition method used to produce single- or polycrystalline thin films. It is a process for growing crystalline layers to create complex semiconductor multilayer structures. In contrast to molecular-beam epitaxy (MBE), the growth of crystals is by chemical reaction and not physical deposition. This takes place not in vacuum, but from the gas phase at moderate pressures (10 to 760 Torr). As such, this technique is preferred for the formation of devices incorporating thermodynamically metastable alloys, and it has become a major process in the manufacture of optoelectronics, such as light-emitting diodes (LEDs). It was invented in 1968 at North American Aviation (later Rockwell International) Science Center by Harold M. Manasevit.
Basic principles
In MOCVD, ultrapure precursor gases are injected into a reactor, usually with a non-reactive carrier gas. For a III-V semiconductor, a metalorganic could be used as the group III precursor and a hydride for the group V precursor. For example, indium phosphide can be grown with trimethylindium ((CH3)3In) and phosphine (PH3) precursors.
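Schematically, the net deposition reaction for this example (intermediate gas-phase and surface steps omitted) is:

(CH3)3In + PH3 → InP + 3 CH4

The organic ligands leave the surface as methane while indium and phosphorus are incorporated into the growing film.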
As the precursors approach the semiconductor wafer, they undergo pyrolysis and the subspecies adsorb onto the semiconductor wafer surface. Surface reactions of the precursor subspecies result in the incorporation of elements into a new epitaxial layer of the semiconductor crystal lattice. In the mass-transport-limited growth regime in which MOCVD reactors typically operate, growth is driven by supersaturation of chemical species in the vapor phase. MOCVD can grow films containing combinations of group III and group V elements, group II and group VI elements, or group IV elements.
Required pyrolysis temperature increases with increasing chemical bond strength of the precursor. The more carbon atoms are attached to the central metal atom, the weaker the metal–carbon bond, and hence the lower the temperature needed for pyrolysis.
|
https://en.wikipedia.org/wiki/Pulegone
|
Pulegone is a naturally occurring organic compound obtained from the essential oils of a variety of plants such as Nepeta cataria (catnip), Mentha piperita, and pennyroyal. It is classified as a monoterpene.
Pulegone is a clear colorless oily liquid and has a pleasant odor similar to pennyroyal, peppermint and camphor. It is used in flavoring agents, in perfumery, and in aromatherapy.
Toxicology
Pulegone has been reported to be toxic to rats when consumed in large quantities.
Pulegone is also an insecticide, the most powerful of three insecticides naturally occurring in many mint species.
In October 2018, the FDA withdrew authorization for the use of pulegone as a synthetic flavoring substance in food, but naturally occurring pulegone may continue to be used.
Sources
Creeping charlie
Mentha longifolia
Mentha suaveolens
Pennyroyal
Peppermint
Schizonepeta tenuifolia
Bursera graveolens
See also
Menthofuran
Menthol
References
Ketones
Flavors
Cooling flavors
Perfume ingredients
Monoterpenes
IARC Group 2B carcinogens
|