https://en.wikipedia.org/wiki/Deutsch%20limit
|
The Deutsch limit is an aphorism about the information density of visual programming languages originated by L. Peter Deutsch that states:
The problem with visual programming is that you can’t have more than 50 visual primitives on the screen at the same time.
The term was made up by Fred Lakin, after Deutsch made the following comment at a talk on visual programming by Scott Kim and Warren Robinett: "Well, this is all fine and well, but the problem with visual programming languages is that you can't have more than 50 visual primitives on the screen at the same time. How are you going to write an operating system?"
The primitives in a visual language are the separate graphical elements used to build a program, and having more of them available at the same time allows the programmer to read more information. This limit is sometimes cited as an example of the advantage of textual over visual languages, pointing out the greater information density of text, and posing a difficulty in scaling the language.
However, criticisms of the limit include that it is not clear whether a similar limit also exists in textual programming languages; and that the limit could be overcome by applying modularity to visual programming as is commonly done in textual programming.
See also
Cognitive dimensions of notations
Conway's law
References
External links
Parsons and Cranshaw commentary on Deutsch Limit in "Patterns of Visual Programming"
Baeza-Yates's commentary on Visual Programming
Computer programming
Adages
Computer programming folklore
Software engineering folklore
Programming principles
Visual programming languages
|
https://en.wikipedia.org/wiki/URL%20shortening
|
URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page. This is achieved by using a redirect which links to the web page that has a long URL. Often the redirect domain name is shorter than the original one. A friendly URL may be desired for messaging technologies that limit the number of characters in a message (for example SMS), for reducing the amount of typing required if the reader is copying a URL from a print source, for making it easier for a person to remember, or for the intention of a permalink. In November 2009, the shortened links of the URL shortening service Bitly were accessed 2.1 billion times.
Other uses of URL shortening are to "beautify" a link, track clicks, or disguise the underlying address. This is because the URL shortener can redirect to just about any web domain, even malicious ones. So, although disguising of the underlying address may be desired for legitimate business or personal reasons, it is open to abuse. Some URL shortening service providers have found themselves on spam blocklists, because of the use of their redirect services by sites trying to bypass those very same blocklists. Some websites prevent short, redirected URLs from being posted.
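The redirect mechanism is simple enough to sketch in code. The following is a minimal illustration in Python using the Flask web framework; the short domain "sho.rt", the code-generation scheme, and the in-memory mapping are assumptions for the example, not how any particular shortening service works.

```python
# Minimal illustrative URL shortener (assumes the third-party Flask package).
import secrets
from flask import Flask, abort, redirect, request

app = Flask(__name__)
mapping = {}  # short code -> long URL (a real service would use a database)

@app.route("/shorten", methods=["POST"])
def shorten():
    long_url = request.form["url"]
    code = secrets.token_urlsafe(4)            # short random code
    mapping[code] = long_url
    return f"https://sho.rt/{code}\n"          # hypothetical short domain

@app.route("/<code>")
def follow(code):
    if code not in mapping:
        abort(404)
    return redirect(mapping[code], code=301)   # the redirect does the work

if __name__ == "__main__":
    app.run()
```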
Purposes
There are several reasons to use URL shortening. Often regular unshortened links may be aesthetically unpleasing. Many web developers pass descriptive attributes in the URL to represent data hierarchies, command structures, transaction paths or session information. This can result in URLs that are hundreds of characters long and that contain complex character patterns. Such URLs are difficult to memorize, type out or distribute. As a result, long URLs must be copied and pasted for reliability. Thus, short URLs may be more convenient for websites or hard copy publications (e.g. a printe
|
https://en.wikipedia.org/wiki/Cc%3AMail
|
cc:Mail is a discontinued store-and-forward LAN-based email system originally developed on Microsoft's MS-DOS platform by Concentric Systems, Inc. in the 1980s. The company, founded by Robert Plummer, Hubert Lipinski, and Michael Palmer, later changed its name to PCC Systems, Inc., and then to cc:Mail, Inc. At the height of its popularity, cc:Mail had about 14 million users, and won various awards for being the top email software package of the mid-1990s.
Architecture overview
In the 1980s and 1990s, it became common in office environments to have a personal computer on every desk, all connected via a local area network (LAN). Typically, (at least) one computer is set up as a file server, so that any computer on the LAN can store and access files on the server as if they were local files. cc:Mail was designed to operate in that environment.
The central point of focus in the cc:Mail architecture is the cc:Mail "post office," which is a collection of files located on the file server and consisting of the message store and related data. However, no cc:Mail software needs to be installed or run on the file server itself. The cc:Mail application is installed on the user desktops. It provides a user interface, and reads and writes to the post office files directly in order to send, access, and manage email messages. This arrangement is called a "shared-file mail system" (which was also implemented later in competing products such as Microsoft Mail). This is in contrast to a "client/server mail system" which involves a mail client application interacting with a mail server application (the latter then being the focal point of message handling). Client/server mail was added later to the cc:Mail product architecture (see below), and also became available in competing offerings (such as Microsoft Exchange).
Other than the cc:Mail desktop application, key software elements of the cc:Mail architecture include cc:Mail Router (for transferring messages between post offices,
|
https://en.wikipedia.org/wiki/Change%20request
|
A change request, sometimes called change control request (CCR), is a document containing a call for an adjustment of a system; it is of great importance in the change management process.
Purpose and elements
A change request is declarative, i.e. it states what needs to be accomplished, but leaves out how the change should be carried out. Important elements of a change request are an ID, the customer (ID), the deadline (if applicable), an indication whether the change is required or optional, the change type (often chosen from a domain-specific ontology) and a change abstract, which is a piece of narrative (Keller, 2005).
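As a concrete illustration of those elements, here is a minimal sketch of a change-request record in Python; the field names and example values are assumptions for the example, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChangeRequest:
    cr_id: str                   # unique ID of the request
    customer_id: str             # who raised it
    deadline: Optional[date]     # if applicable
    required: bool               # required vs. optional change
    change_type: str             # e.g. drawn from a domain-specific ontology
    abstract: str                # short narrative of what is needed

cr = ChangeRequest(
    cr_id="CR-2024-0042",
    customer_id="CUST-17",
    deadline=date(2024, 9, 30),
    required=True,
    change_type="enhancement",
    abstract="Support export of monthly reports as PDF.",
)
```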
Sources
Change requests typically originate from one of five sources:
problem reports that identify bugs that must be fixed, which forms the most common source
system enhancement requests from users
events in the development of other systems
changes in underlying structure and/or standards (e.g. in software development this could be a new operating system)
demands from senior management (Dennis, Wixom & Tegarden, 2002).
Additionally, in Project Management, change requests may also originate from an unclear understanding of the goals and the objectives of the project.
Synonyms
Change requests have many different names, which essentially describe the same concept:
Request For Change (RFC) by Rajlich (1999); RFC is also a common term in ITIL (Keller, 2005) and PRINCE2 (Onna & Koning, 2003).
Engineering Change (EC) by Huang and Mak (1999);
Engineering Change Request (ECR) at Aero (Helms, 2002);
Engineering Change Order (ECO) by Loch and Terwiesch (1999) and Pikosz and Malmqvist (1998). An Engineering Change Order is a separate step after the ECR: once the ECR is approved by the engineering department, an ECO is issued to carry out the change;
Change Notice at Chemical (Helms, 2002);
Action Request (AR) at ABB Robotics AB (Kajko-Mattson, 1999);
Change Request (CR) is, among others,
|
https://en.wikipedia.org/wiki/Transfer%20learning
|
Transfer learning (TL) is a technique in machine learning (ML) in which knowledge learned from a task is re-used in order to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This topic is related to the psychological literature on transfer of learning, although practical ties between the two fields are limited. Reusing/transferring information from previously learned tasks to new tasks has the potential to significantly improve learning efficiency.
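A common practical form of transfer learning is to reuse a pretrained image classifier and retrain only a new output layer for the target task. The sketch below uses PyTorch and torchvision (assumed installed, torchvision 0.13+ for the weights API); the two-class car/truck setup is just an illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a backbone pretrained on a source task (ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the target task, e.g. cars vs. trucks.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```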
History
In 1976, Bozinovski and Fulgosi published a paper addressing transfer learning in neural network training. The paper gives a mathematical and geometrical model of the topic. In 1981, a report considered the application of transfer learning to a dataset of images representing letters of computer terminals, experimentally demonstrating positive and negative transfer learning.
In 1993, Pratt formulated the discriminability-based transfer (DBT) algorithm.
In 1997, Pratt and Thrun guest-edited a special issue of Machine Learning devoted to transfer learning, and by 1998, the field had advanced to include multi-task learning, along with more formal theoretical foundations. Learning to Learn, edited by Thrun and Pratt, is a 1998 review of the subject.
Transfer learning has been applied in cognitive science. Pratt guest-edited an issue of Connection Science on reuse of neural networks through transfer in 1996.
Ng said in his NIPS 2016 tutorial that TL would become the next driver of machine learning commercial success after supervised learning.
In the 2020 paper "Rethinking Pre-training and Self-training", Zoph et al. reported that pre-training can hurt accuracy, and advocated self-training instead.
Applications
Algorithms are available for transfer learning in Markov logic networks and Bayesian networks. Transfer learning has been applied to cancer subtype discovery, building utilization
|
https://en.wikipedia.org/wiki/History%20of%20Microsoft%20Flight%20Simulator
|
Microsoft Flight Simulator began as a set of articles on computer graphics, written by Bruce Artwick throughout 1976, about flight simulation using 3-D graphics. When the editor of the magazine told Artwick that subscribers were interested in purchasing such a program, Artwick founded Sublogic Corporation to commercialize his ideas. At first the new company sold flight simulators through mail order, but that changed in January 1979 with the release of Flight Simulator (FS) for the Apple II. They soon followed this up with versions for other systems and from there it evolved into a long-running series of computer flight simulators.
Sublogic flight simulators
First generation (Apple II and TRS-80)
− January 1979 for Apple II
− January 1980 for TRS-80
Second generation (Tandy Color Computer 3, Apple II, Commodore 64, and Atari 8-bit)
− December 1983 for Apple II
− June 1984 for Commodore 64
− October 1984 for Atari 8-bit family
− Sometime in 1987 for CoCo 3
Third generation (Amiga, Atari ST, and Macintosh)
− March 1986 for Apple Macintosh
− November 1986 for Amiga and Atari ST
In 1984, Amiga Corporation asked Artwick to port Flight Simulator for its forthcoming computer, but Commodore's purchase of Amiga temporarily ended the relationship. Sublogic instead finished a Macintosh version, released by Microsoft, then resumed work on the Amiga and Atari ST versions.
Although still called Flight Simulator II, the Amiga and Atari ST versions compare favorably with Microsoft Flight Simulator 3.0. Notable features included a windowing system allowing multiple simultaneous 3D views - including exterior views of the aircraft itself - and (on the Amiga and Atari ST) modem play.
Info gave the Amiga version five out of five, describing it as the "finest incarnation". Praising the "superb" graphics, the magazine advised to "BEGIN your game collection with this one!"
Microsoft Flight Simulator
Flight Simulator 1.0
− Released in November 1982
Flight Simulator 2.0
− Releas
|
https://en.wikipedia.org/wiki/Q-derivative
|
In mathematics, in the area of combinatorics and quantum calculus, the q-derivative, or Jackson derivative, is a q-analog of the ordinary derivative, introduced by Frank Hilton Jackson. It is the inverse of Jackson's q-integration.
Definition
The q-derivative of a function f(x) is defined as
Dq f(x) = (f(qx) − f(x)) / ((q − 1) x).
The q-derivative is also known as the Jackson derivative.
Formally, in terms of Lagrange's shift operator in logarithmic variables, it amounts to the operator
which goes to the plain derivative as q → 1.
It is manifestly linear,
It has a product rule analogous to the ordinary derivative product rule, with two equivalent forms
Similarly, it satisfies a quotient rule,
There is also a rule similar to the chain rule for ordinary derivatives. Let . Then
The eigenfunction of the q-derivative is the q-exponential eq(x).
Relationship to ordinary derivatives
Q-differentiation resembles ordinary differentiation, with curious differences. For example, the q-derivative of the monomial x^n is:
Dq x^n = ((q^n − 1) / (q − 1)) x^(n−1) = [n]_q x^(n−1),
where [n]_q = (q^n − 1) / (q − 1) is the q-bracket of n. Note that [n]_q → n as q → 1, so the ordinary derivative is regained in this limit.
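A quick numerical check of the monomial rule above, as a standalone Python sketch (the particular values of q, x and n are arbitrary):

```python
def q_derivative(f, x, q):
    """Jackson q-derivative: (f(q*x) - f(x)) / ((q - 1) * x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_bracket(n, q):
    """q-bracket [n]_q = (q**n - 1) / (q - 1)."""
    return (q ** n - 1) / (q - 1)

q, x, n = 1.5, 2.0, 3
f = lambda t: t ** n

print(q_derivative(f, x, q))            # D_q x**3 at x = 2  -> 19.0
print(q_bracket(n, q) * x ** (n - 1))   # [3]_q * x**2       -> 19.0 (matches)
```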
The n-th q-derivative of a function may be given as:
provided that the ordinary n-th derivative of f exists at x = 0. Here, is the q-Pochhammer symbol, and is the q-factorial. If is analytic we can apply the Taylor formula to the definition of to get
A q-analog of the Taylor expansion of a function about zero follows:
Higher order q-derivatives
The following representation for higher order q-derivatives is known:
where the coefficient is the q-binomial coefficient. By changing the order of summation, we obtain the next formula:
Higher order q-derivatives are used in the q-Taylor formula and the q-Rodrigues' formula (the formula used to construct q-orthogonal polynomials).
Generalizations
Post Quantum Calculus
Post quantum calculus is a generalization of the theory of quantum calculus, and it uses the following operator:
Hahn difference
Wolfgang Hahn i
|
https://en.wikipedia.org/wiki/Categorical%20set%20theory
|
Categorical set theory is any one of several versions of set theory developed from or treated in the context of mathematical category theory.
See also
Categorical logic
References
External links
Category theory
Set theory
Formal methods
Categorical logic
|
https://en.wikipedia.org/wiki/Robot%20calibration
|
Robot calibration is a process used to improve the accuracy of robots, particularly industrial robots, which are highly repeatable but not accurate. Robot calibration is the process of identifying certain parameters in the kinematic structure of an industrial robot, such as the relative position of robot links. Depending on the type of errors modeled, the calibration can be classified in three different ways. Level-1 calibration only models differences between actual and reported joint displacement values (also known as mastering). Level-2 calibration, also known as kinematic calibration, concerns the entire geometric robot calibration, which includes angle offsets and joint lengths. Level-3 calibration, also called non-kinematic calibration, models errors other than geometric defects, such as stiffness, joint compliance, and friction. Often Level-1 and Level-2 calibration are sufficient for most practical needs.
Parametric robot calibration is the process of determining the actual values of kinematic and dynamic parameters of an industrial robot (IR). Kinematic parameters describe the relative position and orientation of links and joints in the robot while the dynamic parameters describe arm and joint masses and internal friction.
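To make the parameter-identification idea concrete, here is an illustrative Level-1 ("mastering") sketch in Python: constant joint-angle offsets are estimated from commanded versus externally measured joint values. All numbers are invented; a real calibration uses an external measurement system and a full kinematic model.

```python
import numpy as np

# Commanded joint angles for three poses of a 3-joint arm (radians, made up).
commanded = np.array([
    [0.00, 0.50, 1.00],
    [0.25, 0.75, 1.25],
    [0.50, 1.00, 1.50],
])

# Pretend the real robot has small unknown offsets plus measurement noise.
true_offset = np.array([0.010, -0.005, 0.020])
measured = commanded + true_offset + np.random.normal(0, 1e-4, commanded.shape)

# A least-squares estimate of a constant per-joint offset is the mean residual.
estimated_offset = (measured - commanded).mean(axis=0)
print(estimated_offset)   # approximately [0.010, -0.005, 0.020]
```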
Non-parametric robot calibration circumvents the parameter identification. Used with serial robots, it is based on the direct compensation of mapped errors in the workspace. Used with parallel robots, non-parametric calibration can be performed by the transformation of the configuration space.
Robot calibration can remarkably improve the accuracy of robots programmed offline. A calibrated robot has a higher absolute as well as relative positioning accuracy compared to an uncalibrated one; i.e., the real position of the robot end effector corresponds better to the position calculated from the mathematical model of the robot. Absolute positioning accuracy is particularly relevant in connection with robot exchangeability and off-line pr
|
https://en.wikipedia.org/wiki/Firewall%20pinhole
|
In computer networking, a firewall pinhole is a port that is deliberately left open through a firewall so that a particular application can gain access to a service on a host in the network protected by the firewall.
Leaving ports open in firewall configurations exposes the protected system to potentially malicious abuse. A fully closed firewall prevents applications from accessing services on the other side of the firewall. For protection, the mechanism for opening a pinhole in the firewall should implement user validation and authorization.
For firewalls performing a network address translation (NAT) function, the mapping between the external {IP address, port} socket and the internal {IP address, port} socket is often called a pinhole.
Pinholes can be created manually or programmatically. They can be temporary, created dynamically for a specific duration such as for a dynamic connection, or permanent, such as for signaling functions.
Firewalls sometimes automatically close pinholes after a period of time (typically a few minutes) to minimize the security exposure. Applications that require a pinhole to be kept open often need to generate artificial traffic through the pinhole in order to cause the firewall to restart its timer.
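The keepalive pattern described above can be sketched in a few lines of Python; the remote address and the 25-second interval are assumptions for the example, chosen to stay below a typical UDP pinhole timeout.

```python
import socket
import time

REMOTE = ("203.0.113.10", 3478)   # hypothetical endpoint beyond the firewall
KEEPALIVE_INTERVAL = 25           # seconds, below an assumed pinhole timeout

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", REMOTE)     # outbound packet opens (or reuses) the pinhole

while True:
    time.sleep(KEEPALIVE_INTERVAL)
    sock.sendto(b"keepalive", REMOTE)  # artificial traffic restarts the timer
```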
See also
Port forwarding
Port triggering
NAT hole punching
NAT traversal
TCP hole punching
UDP hole punching
ICMP hole punching
Port Control Protocol (PCP)
NAT Port Mapping Protocol (NAT-PMP)
Internet Gateway Device Protocol (UPnP IGD)
Computer network security
|
https://en.wikipedia.org/wiki/LEED
|
Leadership in Energy and Environmental Design (LEED) is a green building certification program used worldwide. Developed by the non-profit U.S. Green Building Council (USGBC), it includes a set of rating systems for the design, construction, operation, and maintenance of green buildings, homes, and neighborhoods, which aims to help building owners and operators be environmentally responsible and use resources efficiently.
There were over 105,000 LEED-certified buildings and over 205,000 LEED-accredited professionals in 185 countries worldwide.
In the USA, the District of Columbia consistently leads in LEED-certified square footage per capita, followed in 2022 by the top-ranking states of Massachusetts, Illinois, New York, California, and Maryland.
Outside the United States, the top-ranking countries for 2022 were Mainland China, India, Canada, Brazil, and Sweden.
LEED Canada has developed a separate rating system adapted to the Canadian climate and regulations.
Some U.S. federal agencies, state and local governments require or reward LEED certification. This can include tax credits, zoning allowances, reduced fees, and expedited permitting. Offices, healthcare-, and education-related buildings are the most frequent LEED-certified buildings in the US (over 60%), followed by warehouses, distribution centers, retail projects and multifamily dwellings (another 20%).
Studies have found that for-rent LEED office spaces generally have higher rents and occupancy rates and lower capitalization rates.
LEED is a design tool rather than a performance-measurement tool and has focused on energy modeling rather than actual energy consumption. It has been criticized for a point system that can lead to inappropriate design choices and the prioritization of LEED certification points over actual energy conservation; for lacking climate specificity; for not sufficiently addressing issues of climate change and extreme weather; and for not incorporating principles of a circular econo
|
https://en.wikipedia.org/wiki/Monopulse%20radar
|
Monopulse radar is a radar system that uses additional encoding of the radio signal to provide accurate directional information. The name refers to its ability to extract range and direction from a single signal pulse.
Monopulse radar avoids problems seen in conical scanning radar systems, which can be confused by rapid changes in signal strength. The system also makes jamming more difficult. Most radars designed since the 1960s are monopulse systems. The monopulse method is also used in passive systems, such as electronic support measures and radio astronomy. Monopulse radar systems can be constructed with reflector antennas, lens antennas or array antennas.
Historically, monopulse systems have been classified as either phase-comparison monopulse or amplitude monopulse. Modern systems determine the direction from the monopulse ratio, which contains both amplitude and phase information. The monopulse method does not require that the measured signals be pulsed. The alternative name "simultaneous lobing" has therefore been suggested, but not popularized.
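A toy numerical illustration of the amplitude-comparison idea, in Python: two slightly squinted beams receive the same echo, and the difference-to-sum ratio yields the angular offset from a single pulse. The Gaussian beam model and all parameter values are assumptions for the sketch.

```python
import numpy as np

beamwidth = np.deg2rad(3.0)       # "sigma" of the Gaussian beam pattern
squint = np.deg2rad(1.0)          # each beam offset from boresight
target_angle = np.deg2rad(0.4)    # true off-boresight angle (unknown to radar)

def beam_gain(angle, pointing):
    return np.exp(-((angle - pointing) ** 2) / (2 * beamwidth ** 2))

a = beam_gain(target_angle, +squint)    # echo amplitude in beam A
b = beam_gain(target_angle, -squint)    # echo amplitude in beam B

monopulse_ratio = (a - b) / (a + b)     # difference channel / sum channel
slope = squint / beamwidth ** 2         # small-angle slope of the ratio
estimate = monopulse_ratio / slope
print(np.rad2deg(estimate))             # roughly 0.4 degrees
```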
Background
Conical scan
Conical scanning is not considered to be a form of monopulse radar, but the following summary provides background that can aid understanding.
Conical scan systems send out a signal slightly to one side of the antenna's boresight and then rotate the feed horn to make the lobe rotate around the boresight line. A target centered on the boresight is always slightly illuminated by the lobe, and provides a strong return. If the target is to one side, it will be illuminated only when the lobe is pointed in that general direction, resulting in a weaker signal overall (or a flashing one if the rotation is slow enough). This varying signal will reach a maximum when the antenna is rotated so it is aligned in the direction of the target.
By looking for this maximum and moving the antenna in that direction, a target can be automatically tracked. This is greatly eased by using two antennas, angled
|
https://en.wikipedia.org/wiki/Comparison%20of%20DNS%20server%20software
|
This article presents a comparison of the features, platform support, and packaging of many independent implementations of Domain Name System (DNS) name server software.
Servers compared
Each of these DNS servers is an independent implementation of the DNS protocols, capable of resolving DNS names for other computers, publishing the DNS names of computers, or both. Excluded from consideration are single-feature DNS tools (such as proxies, filters, and firewalls) and redistributions of servers listed here (many products repackage BIND, for instance, with proprietary user interfaces).
DNS servers are grouped into several categories of specialization of servicing domain name system queries. The two principal roles, which may be implemented either uniquely or combined in a given product, are:
Authoritative server: authoritative name servers publish DNS mappings for domains under their authoritative control. Typically, a company (e.g. "Acme Example Widgets") would provide its own authority services to respond to address queries, or for other DNS information, for www.example.int. These servers are listed as being at the top of the authority chain for their respective domains, and are capable of providing a definitive answer. Authoritative name servers can be primary name servers, also known as master servers, i.e. they contain the original set of data, or they can be secondary or slave name servers, containing data copies usually obtained from synchronization directly with the primary server, either via a DNS mechanism, or by other data store synchronization mechanisms.
Recursive server: recursive servers (sometimes called "DNS caches", "caching-only name servers") provide DNS name resolution for applications, by relaying the requests of the client application to the chain of authoritative name servers to fully resolve a network name. They also (typically) cache the result to answer potential future queries within a certain expiration (time-to-live) period. Most I
|
https://en.wikipedia.org/wiki/Sudan%20function
|
In the theory of computation, the Sudan function is an example of a function that is recursive, but not primitive recursive. This is also true of the better-known Ackermann function. The Sudan function was the first function having this property to be published.
It was discovered (and published) in 1927 by Gabriel Sudan, a Romanian mathematician who was a student of David Hilbert.
Definition
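For reference, the Sudan function is defined by the standard recurrences:

```latex
\begin{aligned}
F_0(x, y) &= x + y,\\
F_{n+1}(x, 0) &= x,\\
F_{n+1}(x, y + 1) &= F_n\bigl(F_{n+1}(x, y),\, F_{n+1}(x, y) + y + 1\bigr).
\end{aligned}
```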
Value tables
Values of F0
F0(x, y) = x + y
Values of F1
F1(x, y) = 2^y · (x + 2) − y − 2
Values of F2
Values of F3
Notes and references
Bibliography
External links
OEIS: A260003, A260004
Arithmetic
Large integers
Special functions
Theory of computation
|
https://en.wikipedia.org/wiki/First%20Data
|
First Data Corporation is a financial services company headquartered in Atlanta, Georgia, United States. The company's STAR Network provides nationwide domestic debit acceptance at more than 2 million retail POS, ATM, and online outlets for nearly a third of all U.S. debit cards.
First Data serves six million merchants, the largest client base in the payments industry. The company handles 45% of all US credit and debit transactions, including prepaid gift card processing for many US brands such as Starbucks. It processes around 2,800 transactions per second and $2.2 trillion in card transactions annually, and had an 80% market share in gas and groceries in 2014. First Data's SpendTrend Report is frequently used by national news networks such as WSJ, USA Today, ESPN, The New York Times, Vox Media, and Bloomberg.
On January 16, 2019, Fiserv announced a deal to acquire First Data in an all-stock deal with equity value of $22 billion. Fiserv completed the acquisition of First Data on Monday, July 29, 2019.
History
In 1969, the Mid-America Bankcard Association (MABA) was formed in Omaha, Nebraska, as a non-profit bankcard processing cooperative. First Data Resources (FDR) was founded in Omaha, Nebraska in June 1971 by Perry "Bill" Esping, along with Mike Liddy and Jack Weekly. It started off by providing processing services to the Mid-America Bankcard Association (MABA). In 1976, First Data became the first processor of Visa and MasterCard bank-issued credit cards. In 1980, American Express Information Services Corporation (ISC) bought 80% of First Data. The remaining 20% was purchased in 5% increments each subsequent year until June 1983. First Data Corporation was incorporated on April 7, 1989.
First Data Corporation was spun off from American Express and went public in 1992. In 1995, the company merged with First Financial Management Corp. (FFMC) and was then organized into three major business units serving card issuers, merchants, and consumers. Western Union became
|
https://en.wikipedia.org/wiki/Enumerative%20combinatorics
|
Enumerative combinatorics is an area of combinatorics that deals with the number of ways that certain patterns can be formed. Two examples of this type of problem are counting combinations and counting permutations. More generally, given an infinite collection of finite sets Si indexed by the natural numbers, enumerative combinatorics seeks to describe a counting function which counts the number of objects in Sn for each n. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.
The simplest such functions are closed formulas, which can be expressed as a composition of elementary functions such as factorials, powers, and so on. For instance, as shown below, the number of different possible orderings of a deck of n cards is f(n) = n!. The problem of finding a closed formula is known as algebraic enumeration, and frequently involves deriving a recurrence relation or generating function and using this to arrive at the desired closed form.
Often, a complicated closed formula yields little insight into the behavior of the counting function as the number of counted objects grows.
In these cases, a simple asymptotic approximation may be preferable. A function g is an asymptotic approximation to f if f(n)/g(n) → 1 as n → ∞. In this case, we write f(n) ~ g(n).
Generating functions
Generating functions are used to describe families of combinatorial objects. Let F denote the family of objects and let F(x) be its generating function. Then
F(x) = Σ_{n≥0} f_n x^n
where f_n denotes the number of combinatorial objects of size n. The number of combinatorial objects of size n is therefore given by the coefficient of x^n. Some common operations on families of combinatorial objects and their effect on the generating function will now be developed.
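As a small worked illustration (using the third-party SymPy package; the particular family counted here is an assumption chosen for the example), the counting sequence can be read off from the series expansion of a generating function:

```python
import sympy as sp

x = sp.symbols("x")

# Example family: compositions of n into parts 1 and 2. Its ordinary
# generating function is 1 / (1 - x - x**2), whose coefficients are the
# Fibonacci numbers.
F = 1 / (1 - x - x ** 2)

poly = sp.Poly(sp.series(F, x, 0, 10).removeO(), x)
coeffs = poly.all_coeffs()[::-1]     # ascending order: f_0, f_1, f_2, ...
print(coeffs)                        # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```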
The exponential generating function is also sometimes used. I
|
https://en.wikipedia.org/wiki/Unconventional%20computing
|
Unconventional computing is computing by any of a wide range of new or unusual methods. It is also known as alternative computing.
The term unconventional computation was coined by Cristian S. Calude and John Casti and used at the First International Conference on Unconventional Models of Computation in 1998.
Background
The general theory of computation allows for a variety of models. Computing technology first developed using mechanical systems and then evolved into the use of electronic devices. Other fields of modern physics provide additional avenues for development.
Computational model
Computational models use computer programs to simulate and study complex systems using an algorithmic or mechanistic approach. They are commonly used to study complex nonlinear systems for which simple analytical solutions are not readily available. Experimentation with the model is done by adjusting parameters in the computer and studying the differences in the outcome. Operation theories of the model can be derived/deduced from these computational experiments. Examples of computational models include weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, and neural network models.
Mechanical computing
Historically, mechanical computers were used in industry before the advent of the transistor.
Mechanical computers retain some interest today both in research and as analogue computers. Some mechanical computers have a theoretical or didactic relevance, such as billiard-ball computers, while hydraulic ones like the MONIAC or the Water integrator were used effectively.
While some are actually simulated, others are not. No attempt is made to build a functioning computer through the mechanical collisions of billiard balls. The domino computer is another theoretically interesting mechanical computing scheme.
Analog computing
An analog computer is a type of computer that uses analog signals, which are continuous physi
|
https://en.wikipedia.org/wiki/Timarit.is
|
Timarit.is (also known as Tímarit.is, Tidarrit.fo and Aviisitoqqat.gl) is an open access digital library run by the National and University Library of Iceland which hosts digital editions of newspapers and magazines published in Iceland, Faroe Islands and Greenland as well as publications in their languages elsewhere, such as Canada which had a large influx of Icelanders in the late 19th and early 20th centuries. The project was initially sponsored by the West Nordic Council and launched its web interface under the title VESTNORD in 2002. The web interface has since undergone two major revisions, in 2003 and 2008. With the last revision a decision was made to gradually convert images from the DjVu image format to the more common PDF. Hence, part of the collection can be viewed with the DjVu plugin and part with a PDF reader.
The digital collection covers material from the 17th century to the early 21st century and offers users the ability to collect bookmarks on their free account for ease of use as well as do a text search on the majority of the collection. As of February 2009 there were more than 2.6 million images in the archive of which 2 million had been OCRed.
Initially the aim was to limit access to newspapers published before 1930 to avoid questions of copyright but shortly afterwards the project made an agreement with Morgunblaðið to scan and publish issues which are three years old. This agreement was followed with others involving both current and defunct newspapers published in the 20th century. Newspapers published after 2000 are usually sent to the library in digital format. The general rule, depending on agreements with each publisher, is to make these available 2–3 years after their initial publication.
References
External links
Timarit.is
Open-access archives
Icelandic digital libraries
Online databases
Internet properties established in 2002
Libraries established in 2002
|
https://en.wikipedia.org/wiki/Deme%20%28biology%29
|
In biology, a deme, in the strict sense, is a group of individuals that belong to the same taxonomic group. However, when biologists, and especially ecologists, use the term ‘deme’ they usually mean a gamodeme: a local group of individuals (from the same taxon) that interbreed with each other and share a gene pool. The latter definition of a deme is only applicable to sexually reproducing species, while the former is more neutral and also takes asexually reproducing species into account, such as certain plant species. In the following sections the latter (and most frequently used) definition of a deme will be used.
In evolutionary computation, a "deme" often refers to any isolated subpopulation subjected to selection as a unit rather than as individuals.
Local adaptation
A population of a species usually has multiple demes. Environments between these demes can differ. Demes could, therefore, become locally adapted to their environment. A good example of this is the Adaptive Deme Formation (ADF) hypothesis in insects. The ADF hypothesis states that herbivorous insects can become adapted to specific host plants in their local environment because local plants can have unique nutrient patches to which insects may become adapted. This hypothesis predicts that less mobile insect demes are more likely to become locally adapted than more dispersive insects. However, a meta-analysis, based on 17 studies on this subject, showed that dispersive insect demes were as likely to become locally adapted as less mobile insects. Moreover, this study found a small indication that feeding behaviour might stimulate the local adaptation of demes. Endophagous insects were more likely to become locally adapted than exophagous insects. The explanation for this could be that endophagous insects come into closer and more continuous contact with the plant's mechanical, chemical and phenological defensive mechanisms.
Speciation and demes
Speciation could occur at the level
|
https://en.wikipedia.org/wiki/KPXJ
|
KPXJ (channel 21) is a television station licensed to Minden, Louisiana, United States, serving the Shreveport area as an affiliate of The CW. The station is owned by locally based KTBS, LLC, alongside ABC affiliate KTBS-TV (channel 3). Both stations share studios on East Kings Highway on the eastern side of Shreveport, while KPXJ's transmitter is located near St. Johns Baptist Church Road (southeast of Mooringsport and Caddo Lake) in rural northern Caddo Parish.
History
Early history; as a Pax TV owned-and-operated station
The UHF channel 21 allocation was contested between multiple groups that competed for approval by the Federal Communications Commission (FCC) to be the holder of the construction permit to build and license to operate a new television station on the third commercial UHF allocation to be assigned to the Shreveport–Texarkana market (assigned to the Shreveport suburb of Minden, Louisiana). Among the prospective applicants were John E. Powley (who applied for the license on January 16, 1996), Tucson, Arizona-based Northwest Television Inc. (owned by company president William L. Yde III, who applied for the license on January 18, 1996) and five parties who each applied for individual applications on April 4 and 5, 1996: Los Angeles-based Venture Technologies Group LLC (majority owned by Lawrence Rogow, who also served as the group's president), Little Rock-based Kaleidoscope Partners (forerunner company to Equity Broadcasting), Washington, D.C.-based WinStar Broadcasting Corp. (owned by Stuart B. Rekant), Wichita, Kansas-based entrepreneur Marcia T. Turner, Columbia, South Carolina-based Universal Media (majority owned by company president Murray Michaels) and Shreveport-based Word of Life Ministries Inc.
On December 19, 1997, West Palm Beach, Florida-based Paxson Communications (now Ion Media Networks) – which was preparing to launch Pax TV, a family-oriented broadcast television network, that tapped Paxson Communications-owned affiliat
|
https://en.wikipedia.org/wiki/KSHV-TV
|
KSHV-TV (channel 45) is a television station in Shreveport, Louisiana, United States, affiliated with MyNetworkTV. It is owned by Nexstar Media Group alongside Texarkana, Texas–licensed NBC affiliate KTAL-TV (channel 6); Nexstar also provides certain services to Fox affiliate KMSS-TV (channel 33) under a shared services agreement (SSA) with Mission Broadcasting. The stations share studios on North Market Street and Deer Park Road in northeast Shreveport, while KSHV-TV's transmitter is located southeast of Mooringsport.
History
Early history
The UHF channel 45 allocation in Shreveport was contested between three groups that competed for the Federal Communications Commission (FCC)'s approval of a construction permit to build and license to operate a new television station. Word of Life Ministries Inc. – a non-stock arm of the Word of Life Center, a nondenominational church on West 70th Street/Meriwether Road (near LA 3132) in southwestern Shreveport that was managed by founding church co-pastor Sam Carr – filed the initial application on October 29, 1986. On September 3, 1987, Word of Life Ministries reached a settlement agreement with the second applicant for the license, Media Communications, Inc., which agreed to dismiss its license application. Three months later on December 9, an application by the third applicant for UHF channel 45, Shreveport-based Godfrey & Associates, was dismissed with prejudice by Joseph Chachkin, the administrative law judge appointed in its dispute over the construction permit with Word of Life, for failure to prosecute; this resulted in the FCC granting the permit to Word of Life.
The station signed on the air on April 15, 1994, as KWLB (for "Word of Life Broadcasting"), operating as an independent station. It mostly aired religious programs, family-oriented shows and cartoons. In March 1995, Lafayette-based White Knight Broadcasting (owned by media executive Sheldon Galloway) purchased the station from Word of Life Ministries for $3.
|
https://en.wikipedia.org/wiki/Hyperspectral%20imaging
|
Hyperspectral imaging collects and processes information from across the electromagnetic spectrum. The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes. There are three general types of spectral imagers: push broom scanners and the related whisk broom scanners (spatial scanning), which read images over time; band sequential scanners (spectral scanning), which acquire images of an area at different wavelengths; and snapshot hyperspectral imagers, which use a staring array to generate an image in an instant.
Whereas the human eye sees color of visible light in mostly three bands (long wavelengths - perceived as red, medium wavelengths - perceived as green, and short wavelengths - perceived as blue), spectral imaging divides the spectrum into many more bands. This technique of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths. Hyperspectral imaging measures continuous spectral bands, as opposed to multiband imaging which measures spaced spectral bands.
Engineers build hyperspectral sensors and processing systems for applications in astronomy, agriculture, molecular biology, biomedical imaging, geosciences, physics, and surveillance. Hyperspectral sensors look at objects using a vast portion of the electromagnetic spectrum. Certain objects leave unique 'fingerprints' in the electromagnetic spectrum. Known as spectral signatures, these 'fingerprints' enable identification of the materials that make up a scanned object. For example, a spectral signature for oil helps geologists find new oil fields.
Sensors
Figuratively speaking, hyperspectral sensors collect information as a set of 'images'. Each image represents a narrow wavelength range of the electromagnetic spectrum, also known as a spectral band.
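The "set of images" view maps naturally onto a three-dimensional data cube. The following Python sketch (sizes, wavelength range, and the reference signature are invented for illustration) shows how a pixel spectrum, a single band image, and a crude signature match are extracted from such a cube:

```python
import numpy as np

# Illustrative hyperspectral cube: 128 x 128 pixels x 200 spectral bands.
cube = np.random.rand(128, 128, 200)
wavelengths = np.linspace(400, 2500, 200)   # nm, visible through SWIR

pixel_spectrum = cube[64, 64, :]            # spectrum for one pixel
band_image = cube[:, :, 50]                 # one "image" = one narrow band

# Crude material match: correlate each pixel's spectrum with a made-up
# reference signature to produce a similarity map.
reference = np.exp(-((wavelengths - 1700) / 80.0) ** 2)
similarity = cube.reshape(-1, 200) @ reference
similarity_map = similarity.reshape(128, 128)
print(similarity_map.shape)                 # (128, 128)
```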
|
https://en.wikipedia.org/wiki/Spherical%20law%20of%20cosines
|
In spherical trigonometry, the law of cosines (also called the cosine rule for sides) is a theorem relating the sides and angles of spherical triangles, analogous to the ordinary law of cosines from plane trigonometry.
Given a unit sphere, a "spherical triangle" on the surface of the sphere is defined by the great circles connecting three points u, v and w on the sphere. If the lengths of these three sides are a (from u to v), b (from u to w), and c (from v to w), and the angle of the corner opposite c is C, then the (first) spherical law of cosines states:
cos c = cos a cos b + sin a sin b cos C
Since this is a unit sphere, the lengths a, b and c are simply equal to the angles (in radians) subtended by those sides from the center of the sphere. (For a non-unit sphere, the lengths are the subtended angles times the radius, and the formula still holds if a, b and c are reinterpreted as the subtended angles). As a special case, for C = π/2, the last term vanishes, and one obtains the spherical analogue of the Pythagorean theorem:
cos c = cos a cos b
If the law of cosines is used to solve for c, the necessity of inverting the cosine magnifies rounding errors when c is small. In this case, the alternative formulation of the law of haversines is preferable.
A variation on the law of cosines, the second spherical law of cosines (also called the cosine rule for angles), states:
cos C = −cos A cos B + sin A sin B cos c
where A and B are the angles of the corners opposite to sides a and b, respectively. It can be obtained from consideration of a spherical triangle dual to the given one.
Proofs
First proof
Let , and denote the unit vectors from the center of the sphere to those corners of the triangle. The angles and distances do not change if the coordinate system is rotated, so we can rotate the coordinate system so that is at the north pole and is somewhere on the prime meridian (longitude of 0). With this rotation, the spherical coordinates for are where is the angle measured from the north pole not from the equator, and the spherical coordinates for are The Cartesian coordinates for ar
|
https://en.wikipedia.org/wiki/Rotation%20number
|
In mathematics, the rotation number is an invariant of homeomorphisms of the circle.
History
It was first defined by Henri Poincaré in 1885, in relation to the precession of the perihelion of a planetary orbit. Poincaré later proved a theorem characterizing the existence of periodic orbits in terms of rationality of the rotation number.
Definition
Suppose that f is an orientation-preserving homeomorphism of the circle S¹ = R/Z. Then f may be lifted to a homeomorphism F of the real line, satisfying
F(x + m) = F(x) + m
for every real number x and every integer m.
The rotation number of f is defined in terms of the iterates of F:
ω(f) = lim_{n→∞} (Fⁿ(x) − x) / n.
Henri Poincaré proved that the limit exists and is independent of the choice of the starting point x. The lift is unique modulo integers, therefore the rotation number is a well-defined element of R/Z. Intuitively, it measures the average rotation angle along the orbits of f.
Example
If f is a rotation by 2πθ (where 0 ≤ θ < 1), so that a lift is F(x) = x + θ,
then its rotation number is θ (cf. irrational rotation).
Properties
The rotation number is invariant under topological conjugacy, and even monotone topological semiconjugacy: if f and g are two homeomorphisms of the circle and
h ∘ f = g ∘ h
for a monotone continuous map h of the circle into itself (not necessarily a homeomorphism), then f and g have the same rotation numbers. It was used by Poincaré and Arnaud Denjoy for topological classification of homeomorphisms of the circle. There are two distinct possibilities.
The rotation number of f is a rational number p/q (in lowest terms). Then f has a periodic orbit, every periodic orbit has period q, and the order of the points on each such orbit coincides with the order of the points for a rotation by p/q. Moreover, every forward orbit of f converges to a periodic orbit. The same is true for backward orbits, corresponding to iterations of the inverse of f, but the limiting periodic orbits in forward and backward directions may be different.
The rotation number of f is an irrational number θ. Then f has no periodic orbits (this follows immedi
|
https://en.wikipedia.org/wiki/Proof-carrying%20code
|
Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows).
Proof-carrying code was originally described in 1996 by George Necula and Peter Lee.
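The division of labor can be illustrated with a deliberately tiny Python toy (not Necula and Lee's actual system): the untrusted code carries a claimed bound on its memory accesses, and the host merely checks that claim against its policy instead of analyzing or sandboxing the code itself.

```python
POLICY_LIMIT = 1514   # host policy: accesses must stay below this offset

def check_proof(code, proof):
    """Cheaply verify the claimed bound covers every access and meets policy."""
    claimed_max = proof["max_offset"]
    for instr in code:
        if instr[0] == "load" and instr[1] > claimed_max:
            return False                     # the proof misses this access
    return claimed_max < POLICY_LIMIT        # and the bound satisfies the policy

untrusted_code = [("load", 12), ("load", 13), ("accept",)]
attached_proof = {"max_offset": 13}

if check_proof(untrusted_code, attached_proof):
    print("admitted: all accesses provably within the policy limit")
else:
    print("rejected")
```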
Packet filter example
The original publication on proof-carrying code in 1996 used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not an application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed.
With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing i
|
https://en.wikipedia.org/wiki/Biocybernetics
|
Biocybernetics is the application of cybernetics to biological science disciplines such as neurology and multicellular systems. Biocybernetics plays a major role in systems biology, seeking to integrate different levels of information to understand how biological systems function. The field of cybernetics itself has origins in biological disciplines such as neurophysiology. Biocybernetics is an abstract science and is a fundamental part of theoretical biology, based upon the principles of systemics. Biocybernetics is a psychological study that aims to understand how the human body functions as a biological system and performs complex mental functions like thought processing, motion, and maintaining homeostasis (PsychologyDictionary.org). Within this field, many distinct qualities allow for different distinctions within cybernetic groups, such as humans and social insects such as bees and ants. Humans work together, but they also have individual thoughts that allow them to act on their own, while worker bees follow the commands of the queen bee (Seeley, 1989). Although humans often work together, they can also separate from the group and think for themselves (Gackenbach, J. 2007). A unique example of this within the human sector of biocybernetics would be society during the colonization period, when Great Britain established its colonies in North America and Australia. Many of the traits and qualities of the mother country were inherited by the colonies, as well as niche qualities that were unique to them based on their areas, like language and personality. This is similar to vines and grasses, where the parent plant produces offshoots, spreading from the core. Once the shoots grow their roots and get separated from the mother plant, they will survive independently and be considered their own plant. Society is more closely related to plants than to animals since, like plants, there is no distinct separation between parent and offspring. The branching of society is more similar t
|
https://en.wikipedia.org/wiki/Ethnoecology
|
Ethnoecology is the scientific study of how different groups of people living in different locations understand the ecosystems around them, and their relationships with surrounding environments.
It seeks valid, reliable understanding of how we as humans have interacted with the environment and how these intricate relationships have been sustained over time.
The "ethno" (see ethnology) prefix in ethnoecology indicates a localized study of a people, and in conjunction with ecology, signifies people's understanding and experience of environments around them. Ecology is the study of the interactions between living organisms and their environment; enthnoecology applies a human focused approach to this subject. The development of the field lies in applying indigenous knowledge of botany and placing it in a global context.
History
Ethnoecology began with some of the early works of Dr. Hugh Popenoe, an agronomist and tropical soil scientist who has worked with the University of Florida, the National Science Foundation, and the National Research Council. Popenoe has also worked with Dr Harold Conklin, a cognitive anthropologist who did extensive linguistic and ethnoecological research in Southeast Asia.
In his 1954 dissertation "The Relation of the Hanunoo Culture to the Plant World", Harold Conklin coined the term ethnoecology when he described his approach as "ethnoecological". After earning his PhD, he began teaching at Columbia University while continuing his research among the Hanunoo.
In 1955, Conklin published one of his first ethnoecological studies. His "Hanunoo Color Categories" study helped scholars understand the relationship between classification systems and conceptualization of the world within cultures. In this experiment, Conklin discovered that people in various cultures recognize colors differently due to their unique classification system. Within his results he found that the Hanunoo uses two levels of colors. The first level consists of four basic
|
https://en.wikipedia.org/wiki/Worksheet
|
A worksheet, in the word's original meaning, is a sheet of paper on which one performs work. They come in many forms, most commonly associated with children's school work assignments, tax forms, and accounting or other business environments. Software is increasingly taking over the paper-based worksheet.
It can be a printed page that a child completes with a writing instrument. No other materials are needed. It is "a sheet of paper on which work schedules, working time, special instructions, etc. are recorded. A piece or scrap of paper on which problems, ideas, or the like, are set down in tentative form." In education, a worksheet may have questions for students and places to record answers.
In accounting, a worksheet is, or was, a sheet of ruled paper with rows and columns on which an accountant could record information or perform calculations. These are often called columnar pads, and are typically green-tinted.
In computing, spreadsheet software presents, on a computer monitor, a user interface that resembles one or more paper accounting worksheets. Microsoft Excel, a popular spreadsheet program, refers to a single spreadsheet (more technically, a two-dimensional matrix or array) as a worksheet, and it refers to a collection of worksheets as a workbook.
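The workbook/worksheet terminology appears directly in spreadsheet libraries; here is a minimal sketch with the third-party openpyxl package (the file name and cell contents are arbitrary examples):

```python
from openpyxl import Workbook

wb = Workbook()                    # a workbook: a collection of worksheets
ws = wb.active                     # the default worksheet (a 2-D grid of cells)
ws.title = "Budget"
ws["A1"] = "Item"
ws["B1"] = "Cost"
ws.append(["Paper", 3.50])         # rows and columns, like a ruled pad

wb.create_sheet("Notes")           # a second worksheet in the same workbook
wb.save("example.xlsx")            # hypothetical output file
```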
Education
In the classroom setting, worksheets usually refer to a loose sheet of paper with questions or exercises for students to complete and record answers. They are used, to some degree, in most subjects, and have widespread use in the math curriculum where there are two major types. The first type of math worksheet contains a collection of similar math problems or exercises. These are intended to help a student become proficient in a particular mathematical skill that was taught to them in class. They are commonly given to students as homework. The second type of math worksheet is intended to introduce new topics, and are often completed in the classroom. They are made up of a progressive set of question
|
https://en.wikipedia.org/wiki/Oracle%20RAC
|
In database computing, Oracle Real Application Clusters (RAC) — an option for the Oracle Database software produced by Oracle Corporation and introduced in 2001 with Oracle9i — provides software for clustering and high availability in Oracle database environments. Oracle Corporation includes RAC with the Enterprise Edition, provided the nodes are clustered using Oracle Clusterware.
Functionality
Oracle RAC allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, thus providing clustering.
In a non-RAC Oracle database, a single instance accesses a single database. The database consists of a collection of data files, control files, and redo logs located on disk. The instance comprises the collection of Oracle-related memory and background processes that run on a computer system.
In an Oracle RAC environment, 2 or more instances concurrently access a single database. This allows an application or user to connect to either computer and have access to a single coordinated set of data. The instances are connected with each other through an "Interconnect" which enables all the instances to be in sync in accessing the data.
Aims
The main aim of Oracle RAC is to implement a clustered database to provide performance, scalability and resilience & high availability of data at instance level.
Implementation
Oracle RAC depends on the infrastructure component Oracle Clusterware to coordinate multiple servers and their sharing of data storage.
The FAN (Fast Application Notification) technology detects down-states.
RAC administrators can use the srvctl tool to manage RAC configurations.
Cache Fusion
Prior to Oracle 9, network-clustered Oracle databases used a storage device as the data-transfer medium (meaning that one node would write a data block to disk and another node would read that data from the same disk), which had the inherent disadvantage of lackluster performance. Oracle 9i addressed this issue: RAC uses a dedicated
|
https://en.wikipedia.org/wiki/Strain%20hardening%20exponent
|
The strain hardening exponent (also called the strain hardening index), usually denoted n, is a constant often used in calculations relating to stress–strain behavior in work hardening. It occurs in the formula known as Hollomon's equation, named after John Herbert Hollomon Jr., who originally posited it as
σ = K ε^n
where σ represents the applied true stress on the material, ε is the true strain, and K is the strength coefficient.
The value of the strain hardening exponent lies between 0 and 1, with a value of 0 implying a perfectly plastic solid and a value of 1 representing a perfectly elastic solid. Most metals have an n-value between 0.10 and 0.50.
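A small numerical illustration of Hollomon's equation in Python, with made-up constants in the range typical of an annealed steel:

```python
K = 530.0   # strength coefficient, MPa (illustrative value)
n = 0.26    # strain hardening exponent (illustrative value)

def true_stress(true_strain):
    """Hollomon's equation: sigma = K * epsilon**n."""
    return K * true_strain ** n

print(true_stress(0.05))   # ~243 MPa at 5% true strain
print(true_stress(0.20))   # ~349 MPa at 20% true strain
```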
Tabulation
References
External links
More complete picture about the strain hardening exponent in the stress–strain curve on www.key-to-steel.com
Mechanical engineering
Solid mechanics
|
https://en.wikipedia.org/wiki/Code%20morphing
|
Code morphing is an approach used in obfuscating software to protect software applications from reverse engineering, analysis, modifications, and cracking. This technology protects intermediate-level code, such as that compiled from Java and .NET languages (Oxygene, C#, Visual Basic, etc.), rather than binary object code. Code morphing breaks up the protected code into several processor commands or small command snippets and replaces them with others, while maintaining the same end result. Thus the protector obfuscates the code at the intermediate level.
Code morphing is a multilevel technology containing hundreds of unique code transformation patterns. In addition this technology transforms some intermediate layer commands into virtual machine commands (like p-code). Code morphing does not protect against runtime tracing, which can reveal the execution logic of any protected code.
Unlike other code protectors, there is no concept of code decryption with this method. Protected code blocks are always in the executable state, and they are executed (interpreted) as transformed code. The original intermediate code is absent to a certain degree, but deobfuscation can still give a clear view of the original code flow.
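The following toy Python sketch illustrates only the general idea: a simple "a + b" operation is replaced by a randomly chosen equivalent form that produces the same result. Real protectors operate on intermediate-level or native instructions rather than on source text; the helper names here are invented for the example.

import random, re

# Equivalent ways of computing a + b (the last form is valid for integers).
EQUIVALENT_ADDS = [
    lambda a, b: f"({a} + {b})",
    lambda a, b: f"({a} - (-{b}))",
    lambda a, b: f"((({a}) ^ ({b})) + ((({a}) & ({b})) << 1))",
]

def morph(expr):
    # Replace every simple "x + y" occurrence with a randomly chosen equivalent form.
    return re.sub(r"(\w+)\s*\+\s*(\w+)",
                  lambda m: random.choice(EQUIVALENT_ADDS)(m.group(1), m.group(2)),
                  expr)

a, b = 13, 29
morphed = morph("a + b")
print(morphed, "=", eval(morphed))   # the printed form varies, the value is always 42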
Code morphing is also used to refer to the just-in-time compilation technology used in Transmeta processors such as the Crusoe and Efficeon to implement the x86 instruction set architecture.
Code morphing is often used in obfuscating the copy protection or other checks that a program makes to determine whether it is a valid, authentic installation, or an unauthorized copy, in order to make the removal of the copy-protection code more difficult than would otherwise be the case.
See also
Intermediate language
References
Software obfuscation
Source code
Warez
|
https://en.wikipedia.org/wiki/TUN/TAP
|
In computer networking, TUN and TAP are kernel virtual network devices. Being network devices supported entirely in software, they differ from ordinary network devices which are backed by physical network adapters.
The Universal TUN/TAP Driver originated in 2000 as a merger of the corresponding drivers in Solaris, Linux and BSD. The driver continues to be maintained as part of the Linux and FreeBSD kernels.
Design
Though both are for tunneling purposes, TUN and TAP can't be used together because they transmit and receive packets at different layers of the network stack. TUN, namely network TUNnel, simulates a network layer device and operates in layer 3 carrying IP packets. TAP, namely network TAP, simulates a link layer device and operates in layer 2 carrying Ethernet frames. TUN is used with routing. TAP can be used to create a user space network bridge.
Packets sent by an operating system via a TUN/TAP device are delivered to a user space program which attaches itself to the device. A user space program may also pass packets into a TUN/TAP device. In this case the TUN/TAP device delivers (or "injects") these packets to the operating-system network stack thus emulating their reception from an external source.
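A minimal sketch of this attach-and-exchange flow on Linux, assuming the /dev/net/tun clone device, root or CAP_NET_ADMIN privileges, and the usual TUNSETIFF ioctl (the interface name "tun0" is an arbitrary choice):

import fcntl, os, struct

TUNSETIFF = 0x400454ca   # ioctl request that attaches the file descriptor to an interface
IFF_TUN   = 0x0001       # layer-3 device carrying IP packets; IFF_TAP (0x0002) would carry Ethernet frames
IFF_NO_PI = 0x1000       # do not prepend the 4-byte packet-information header

# Open the clone device and request a TUN interface named "tun0".
tun = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

# After the interface is brought up and routes point at it, each read() returns
# one IP packet from the kernel; writing a packet back "injects" it into the stack.
packet = os.read(tun, 2048)
os.write(tun, packet)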
Applications
Virtual private networks
OpenVPN, Ethernet/IP over TCP/UDP; encrypted, compressed
ZeroTier, Ethernet/IP over TCP/UDP; encrypted, compressed, cryptographic addressing scheme
FreeLAN, open-source, free, multi-platform IPv4, IPv6 and peer-to-peer VPN software over UDP/IP.
n2n, an open source Layer 2 over Layer 3 VPN application which uses a peer-to-peer architecture for network membership and routing
Tinc, Ethernet/IPv4/IPv6 over TCP/UDP; encrypted, compressed
VTun, Ethernet/IP/serial/Unix pipe over TCP; encrypted, compressed, traffic-shaping
OpenSSH
coLinux, Ethernet/IP over TCP/UDP
Hamachi
OpenConnect
WireGuard
Tailscale
vtun
Virtual-machine networking
Bochs
coLinux
Hercules (S/390 emulator)
Open vSwitch
QEMU/KVM
User-
|
https://en.wikipedia.org/wiki/Attribute%20domain
|
In computing, the attribute domain is the set of values allowed in an attribute.
For example:
Rooms in hotel (1-300)
Age (1-99)
Married (yes or no)
Nationality (Nepalese, Indian, American, or British)
Colors (Red, Yellow, Green)
For the relational model it is a requirement that each part of a tuple be atomic. The consequence is that each value in the tuple must be of some basic type, like a string or an integer. For an elementary type to be atomic, it cannot be broken into smaller pieces. Thus, a domain is an elementary type, and an attribute domain is the domain to which a given attribute belongs: an abstraction belonging to, or characteristic of, an entity.
For example, in SQL, one can create their own domain for an attribute with the command
CREATE DOMAIN SSN_TYPE AS CHAR(9) ;
The above command says: "Create a data type SSN_TYPE that is of character type with size 9."
References
Type theory
Database theory
|
https://en.wikipedia.org/wiki/List%20of%20statistical%20software
|
Statistical software consists of specialized computer programs for analysis in statistics and econometrics.
Open-source
ADaMSoft – a generalized statistical software with data mining algorithms and methods for data management
ADMB – a software suite for non-linear statistical modeling based on C++ which uses automatic differentiation
Chronux – for neurobiological time series data
DAP – free replacement for SAS
Environment for DeveLoping KDD-Applications Supported by Index-Structures (ELKI) a software framework for developing data mining algorithms in Java
Epi Info – statistical software for epidemiology developed by Centers for Disease Control and Prevention (CDC). Apache 2 licensed
Fityk – nonlinear regression software (GUI and command line)
GNU Octave – programming language very similar to MATLAB with statistical features
gretl – gnu regression, econometrics and time-series library
intrinsic Noise Analyzer (iNA) – For analyzing intrinsic fluctuations in biochemical systems
jamovi – A free software alternative to IBM SPSS Statistics
JASP – A free software alternative to IBM SPSS Statistics with additional option for Bayesian methods
JMulTi – For econometric analysis, specialised in univariate and multivariate time series analysis
Just another Gibbs sampler (JAGS) – a program for analyzing Bayesian hierarchical models using Markov chain Monte Carlo developed by Martyn Plummer. It is similar to WinBUGS
KNIME – An open source analytics platform built with Java and Eclipse using modular data pipeline workflows
LIBSVM – C++ support vector machine libraries
mlpack – open-source library for machine learning, exploits C++ language features to provide maximum performance and flexibility while providing a simple and consistent application programming interface (API)
Mondrian – data analysis tool using interactive statistical graphics with a link to R
Neurophysiological Biomarker Toolbox – Matlab toolbox for data-mining of neurophysiological biomarkers
OpenBUGS
|
https://en.wikipedia.org/wiki/Help%20key
|
A Help key, found in the shape of a dedicated key explicitly labeled "Help", or as another key, typically one of the function keys, on a computer keyboard, is a key which, when pressed, produces information on the screen/display to aid the user in their current task, such as using a specific function in an application program.
In the case of a non-dedicated Help key, the location of the key will sometimes vary between different software packages. Most common in computer history, however, is the development of a de facto Help key location for each brand/family of computer, exemplified by the use of F1 on IBM compatible PCs.
Apple keyboards
On a full-sized Apple keyboard, the help key was labelled simply as "help", located to the left of the "home" key. Where IBM compatible PC keyboards had the "Insert" key, Apple keyboards had the help key instead. As of 2007, new Apple keyboards do not have a help key; in its place, a full-sized Apple keyboard has an "fn" key instead. Instead of a mechanical help key, the menu bars of most applications contain a Help menu as a matter of convention.
Commodore and Amiga keyboards
The Commodore 128 had a "HELP" key in the second block of top-row keys. Amiga keyboards had a "Help" key, labelled as such, above the arrow keys on the keyboard, and next to a "Del" key (where the navigation cluster is on a standard PC keyboard).
Atari keyboards
The keyboards of the Atari 16- and 32-bit computers had a "Help" key above the arrow keys on the keyboard. Atari 8-bit XL and XE series keyboards had dedicated "HELP" keys, but in the group of differently-styled system keys separated from the rest of the keyboard.
Sun Microsystems (Oracle)
Most Sun Microsystems keyboards have a dedicated "Help" key in the top left corner, to the left of the "Esc" key and above the block of 10 extra keys.
References
Computer keys
Online help
|
https://en.wikipedia.org/wiki/Lip%C3%B3t%20Fej%C3%A9r
|
Lipót Fejér (or Leopold Fejér, ; 9 February 1880 – 15 October 1959) was a Hungarian mathematician of Jewish heritage. Fejér was born Leopold Weisz, and changed to the Hungarian name Fejér around 1900.
Biography
He was born in Pécs, Austria-Hungary, into the Jewish family of Victoria Goldberger and Samuel Weiss. His maternal great-grandfather Samuel Nachod was a doctor, and his grandfather was a renowned scholar and author of a Hebrew–Hungarian dictionary. Leopold's father, Samuel Weiss, was a shopkeeper in Pécs. Leopold did not do well in primary school, so for a while his father withdrew him and taught him at home. The future scientist developed his interest in mathematics in high school thanks to his teacher Sigismund Maksay.
Fejér studied mathematics and physics at the University of Budapest and at the University of Berlin, where he was taught by Hermann Schwarz. In 1902 he earned his doctorate from University of Budapest (today Eötvös Loránd University). From 1902 to 1905 Fejér taught there and from 1905 until 1911 he taught at Franz Joseph University in Kolozsvár in Austria-Hungary (now Cluj-Napoca in Romania). In 1911 Fejér was appointed to the chair of mathematics at the University of Budapest and he held that post until his death. He was elected corresponding member (1908), member (1930) of the Hungarian Academy of Sciences.
During his period in the chair at Budapest, Fejér led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya and Pál Turán. Thanks to Fejér, Hungary developed a strong mathematical school: he educated a new generation of students who went on to become eminent scientists. As Pólya recalled, a large number of them became interested in mathematics thanks to Fejér, his fascinating personality and charisma. Fejér gave short (no more than an hour) but very entertaining lectures and often sat with students in cafés, discussing mathematical prob
|
https://en.wikipedia.org/wiki/Extension%20%28predicate%20logic%29
|
The extension of a predicate (a truth-valued function) is the set of tuples of values that, used as arguments, satisfy the predicate. Such a set of tuples is a relation.
Examples
For example, the statement "d2 is the weekday following d1" can be seen as a truth function associating to each tuple (d2, d1) the value true or false. The extension of this truth function is, by convention, the set of all such tuples associated with the value true, i.e.
{(Monday, Sunday),
(Tuesday, Monday),
(Wednesday, Tuesday),
(Thursday, Wednesday),
(Friday, Thursday),
(Saturday, Friday),
(Sunday, Saturday)}
By examining this extension we can conclude that "Tuesday is the weekday following Saturday" (for example) is false.
Using set-builder notation, the extension of the n-ary predicate P can be written as
{(x1, ..., xn) | P(x1, ..., xn)}.
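A small Python sketch of the same weekday example, computing the extension as the set of argument tuples on which the predicate is true:

days = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

def follows(d2, d1):
    # Truth-valued function for the statement "d2 is the weekday following d1".
    return days[(days.index(d1) + 1) % len(days)] == d2

# The extension is the set of tuples that satisfy the predicate.
extension = {(d2, d1) for d2 in days for d1 in days if follows(d2, d1)}
print(("Monday", "Sunday") in extension)      # True
print(("Tuesday", "Saturday") in extension)   # False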
Relationship with characteristic function
If the values 0 and 1 in the range of a characteristic function are identified with the values false and true, respectively (making the characteristic function a predicate), then for all relations R and predicates Φ the following two statements are equivalent:
Φ is the characteristic function of R
R is the extension of Φ
See also
Extensional logic
Extensional set
Extensionality
Intension
References
extension (semantics) in nLab
Predicate logic
|
https://en.wikipedia.org/wiki/Asemic%20writing
|
Asemic writing is a wordless open semantic form of writing. The word asemic means "having no specific semantic content", or "without the smallest unit of meaning". With the non-specificity of asemic writing there comes a vacuum of meaning, which is left for the reader to fill in and interpret. All of this is similar to the way one would deduce meaning from an abstract work of art. Where asemic writing distinguishes itself among traditions of abstract art is in the asemic author's use of gestural constraint, and the retention of physical characteristics of writing such as lines and symbols. Asemic writing is a hybrid art form that fuses text and image into a unity, and then sets it free to arbitrary subjective interpretations. It may be compared to free writing or writing for its own sake, instead of writing to produce verbal context. The open nature of asemic works allows for meaning to occur across linguistic understanding; an asemic text may be "read" in a similar fashion regardless of the reader's natural language. Multiple meanings for the same symbolism are another possibility for an asemic work, that is, asemic writing can be polysemantic or have zero meaning, infinite meanings, or its meaning can evolve over time. Asemic works leave for the reader to decide how to translate and explore an asemic text; in this sense, the reader becomes co-creator of the asemic work.
In 1997, visual poets Tim Gaze and Jim Leftwich first applied the word asemic to name their quasi-calligraphic writing gestures. They then began to distribute them to poetry magazines both online and in print. The authors explored sub-verbal and sub-letteral forms of writing, and textual asemia as a creative option and as an intentional practice. Since the late 1990s, asemic writing has blossomed into a worldwide literary/art movement. It has especially grown in the early part of the 21st century, though there is an acknowledgement of a long and complex history, which precedes the activities of t
|
https://en.wikipedia.org/wiki/Marcus%20du%20Sautoy
|
Marcus Peter Francis du Sautoy (; born 26 August 1965) is a British mathematician, Simonyi Professor for the Public Understanding of Science at the University of Oxford, Fellow of New College, Oxford and author of popular mathematics and popular science books. He was previously a fellow of All Souls College, Oxford, Wadham College, Oxford and served as president of the Mathematical Association, an Engineering and Physical Sciences Research Council (EPSRC) senior media fellow, and a Royal Society University Research Fellow.
In 1996, he was awarded the title of distinction of Professor of Mathematics.
Education and early life
Du Sautoy was born in London to Bernard du Sautoy, employed in the computer industry, and Jennifer ( Deason) du Sautoy, who left the Foreign Office to raise her children. He grew up in Henley-on-Thames. His grandfather, Peter du Sautoy, was chairman of the publisher Faber and Faber, and managed the estates of James Joyce and Samuel Beckett.
Du Sautoy was educated at Gillotts Comprehensive School and King James's Sixth Form College (now Henley College) and Wadham College, Oxford, where he was awarded a first class honours degree in mathematics. In 1991 he completed a doctorate in mathematics on discrete groups, analytic groups and Poincaré series, supervised by Dan Segal.
Career and research
Du Sautoy's research "uses classical tools from number theory to explore the mathematics of symmetry". Du Sautoy's academic work concerns mainly group theory and number theory.
Du Sautoy is known for his work popularising mathematics, and has been named by The Independent on Sunday as one of the UK's leading scientists. He has also served on the advisory board of
Mangahigh.com, an online maths game website. He is a regular contributor to the BBC Radio 4's In Our Time programme and has written for The Times and The Guardian. He has written numerous academic articles and books on mathematics, the most recent being an exploration of the current state
|
https://en.wikipedia.org/wiki/Clipper%20%28electronics%29
|
In electronics, a clipper is a circuit designed to prevent a signal from exceeding a predetermined reference voltage level. A clipper does not distort the remaining part of the applied waveform. Clipping circuits are used to select, for purposes of transmission, that part of a signal waveform which lies above or below the predetermined reference voltage level.
Clipping may be achieved either at one level or two levels. A clipper circuit can remove certain portions of an arbitrary waveform near the positive or negative peaks or both. Clipping changes the shape of the waveform and alters its spectral components.
A clipping circuit consists of linear elements like resistors and non-linear elements like diodes or transistors, but it does not contain energy-storage elements like capacitors.
Clipping circuits are also called slicers or amplitude selectors.
Types
Diode clipper
A simple diode clipper can be made with a diode and a resistor. This will remove either the positive or the negative half of the waveform, depending on the direction in which the diode is connected. The simple circuit clips at zero voltage (or, to be more precise, at the small forward voltage of the forward-biased diode), but the clipping voltage can be set to any desired value with the addition of a reference voltage. The diagram illustrates a positive reference voltage, but the reference can be positive or negative for both positive and negative clipping, giving four possible configurations in all.
The simplest circuit for the voltage reference is a resistor potential divider connected between the voltage rails. This can be improved by replacing the lower resistor with a zener diode with a breakdown voltage equal to the required reference voltage. The zener acts as a voltage regulator stabilising the reference voltage against supply and load variations.
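The clipping behaviour can be sketched numerically. The following Python snippet models an idealized positive-peak clipper with an assumed 0.7 V diode drop and a 2 V reference; all component values are illustrative only.

import numpy as np

t = np.linspace(0.0, 2e-3, 1000)
v_in = 5.0 * np.sin(2 * np.pi * 1e3 * t)   # 5 V peak, 1 kHz test signal
V_REF, V_F = 2.0, 0.7                      # reference voltage and diode forward drop

# Ideal behaviour: the output follows the input until it reaches V_REF + V_F.
v_out = np.minimum(v_in, V_REF + V_F)
print(v_in.max(), v_out.max())             # about 5.0 V in, 2.7 V out: the positive peak is clipped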
Zener diode
In the example circuit on the right, two zener diodes are used to clip the voltage VIN. The voltage in either direction is limited t
|
https://en.wikipedia.org/wiki/Ribotyping
|
Ribotyping is a molecular technique for bacterial identification and characterization that uses information from rRNA-based phylogenetic analyses. It is a rapid and specific method widely used in clinical diagnostics and analysis of microbial communities in food, water, and beverages.
All bacteria have ribosomal genes, but the exact sequence is unique to each species, serving as a genetic fingerprint. Therefore, sequencing the particular 16S gene and comparing it to a database would yield identification of the particular species.
Technique
Ribotyping involves the digestion of bacterial genomic DNA with specific restriction enzymes. Each restriction enzyme cuts DNA at a specific nucleotide sequence, resulting in fragments of different lengths.
Those fragments are then separated according to size by gel electrophoresis: the application of an electric field to the gel in which they are suspended causes the DNA fragments (all negatively charged due to the presence of phosphate groups) to move through a matrix towards the positively charged end of the field. Small fragments move more easily and rapidly through the matrix, travelling a greater distance from the starting position than larger fragments.
Following the separation in the gel matrix, the DNA fragments are moved onto nylon membranes and hybridized with a labelled 16S or 23S rRNA probe. This way only the fragments coding for such rRNA are visualised and can be analyzed. The pattern is then digitized and used to identify the origin of the DNA by a comparison with reference organisms in a computer database.
Conceptually, ribotyping is similar to probing restriction fragments of chromosomal DNA with cloned probes (randomly cloned probes or probes derived from a specific coding sequence such as that of a virulence factor).
See also
Genotyping
References
Genetics techniques
|
https://en.wikipedia.org/wiki/Wadley%20loop
|
The "Wadley-drift-canceling-loop", also known as a "Wadley loop", is a system of two oscillators, a frequency synthesizer, and two frequency mixers in the radio-frequency signal path. The system was designed by Dr. Trevor Wadley in the 1940s in South Africa. The circuit was first used for a stable wavemeter. (A wavemeter is used for measuring the wavelength and therefore also the frequency of a signal)
There is no regulation loop in a "Wadley-loop", which is why the term is in quotation marks. However, the circuit configuration is not known by more accurate names.
The "Wadley loop" was used in radio receivers from the 1950s to approximately 1980. The "Wadley loop" was mostly used in more expensive stationary radio receivers, but the "Wadley loop" was also used in a portable radio receiver (Barlow-Wadley XCR-30 Mark II).
Overview
In a traditional superheterodyne radio receiver, most oscillator drift and instability occur in the first frequency converter stage, because it is tunable and operating at a high frequency.
Unlike other drift-reducing techniques (such as crystal control or frequency synthesis), the Wadley Loop does not attempt to stabilize the oscillator. Instead, it cancels the drift mathematically.
Principles of operation
The Wadley loop works by:
combining the first oscillator with the received signal in a frequency mixer to translate it to an intermediate frequency that is above the receiver's tuning range,
mixing the same oscillator with a comb of harmonics from a crystal oscillator,
selecting one of the results of (2) with a band-pass filter, and
mixing this with the IF signal from (1).
Since the high-IF of part 1 drifts in the same direction and the same amount as the "synthetic oscillator" of part 3, when they are mixed in part 4, the drift terms cancel out and the result is a crystal-stable signal at a second intermediate frequency.
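A rough numeric illustration of the cancellation, with all frequencies (in MHz) chosen purely for the example and loosely modelled on a 1 MHz-step front end:

f_sig  = 7.1      # received signal
f_osc  = 52.5     # free-running first oscillator setting
drift  = 0.003    # 3 kHz of oscillator drift
f_xtal = 1.0      # crystal oscillator; its harmonics form the comb
n      = 10       # harmonic selected by the band-pass filter

f_if1       = (f_osc + drift) - f_sig        # step 1: first IF above the tuning range (drifts)
f_synthetic = (f_osc + drift) - n * f_xtal   # steps 2-3: "synthetic oscillator" (drifts identically)
f_if2       = f_if1 - f_synthetic            # step 4: the drift terms cancel
print(round(f_if2, 6))                       # 2.9 MHz, independent of the drift value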
However, the drift makes it impossible to use high-IF selectivity to reject undesired signals. Instead,
|
https://en.wikipedia.org/wiki/Push%E2%80%93pull%20converter
|
A push–pull converter is a type of DC-to-DC converter, a switching converter that uses a transformer to change the voltage of a DC power supply. The distinguishing feature of a push-pull converter is that the transformer primary is supplied with current from the input line by pairs of transistors in a symmetrical push-pull circuit. The transistors are alternately switched on and off, periodically reversing the current in the transformer. Therefore, current is drawn from the line during both halves of the switching cycle. This contrasts with buck-boost converters, in which the input current is supplied by a single transistor which is switched on and off, so current is drawn from the line during only a part of the switching cycle. During the remainder of the cycle, the output power is supplied by energy stored in inductors or capacitors in the power supply. Push–pull converters have steadier input current, create less noise on the input line, and are more efficient in higher power applications.
Circuit
(Image caption: conceptual schematic of a full-bridge converter; this is not a center-tapped or split-primary push–pull converter.)
The term push–pull is sometimes used to generally refer to any converter with bidirectional excitation of the transformer. For example, in a full-bridge converter, the switches (connected as an H-bridge) alternate the voltage across the supply side of the transformer, causing the transformer to function as it would for AC power and produce a voltage on its output side. However, push–pull more commonly refers to a two-switch topology with a split primary winding.
In any case, the output is then rectified and sent to the load. Capacitors are often included at the output to filter the switching noise.
In practice, it is necessary to allow a small interval between powering the transformer one way and powering it the other: the “switches” are usually pairs of transistors (or similar devices), and were the two transistors in the pair to switch simulta
|
https://en.wikipedia.org/wiki/Odell%20Down%20Under
|
Odell Down Under is a 1994 game for Microsoft Windows and Classic Mac OS that takes place in the Great Barrier Reef of Australia. Released by MECC, it is the sequel to Odell Lake.
History
Odell Down Under was released alongside a re-release of its companion game, Odell Lake. The game was recommended for ages 9 to adult.
The game was generally praised; School Library Journal cited the "realistic and beautiful" graphics and detailed field guide as strengths, while Booklist called it a "marvelous introduction to life in a thriving underwater community." The game was a finalist for MacUser's 1994 Editor's Choice Award for Children's Software.
Gameplay
The player takes on the role of a fish who in turn must eat, stay clean and avoid being eaten by predators to survive. There are several modes of gameplay. In Tournament mode, the player plays every fish in the game, starting as the tiny silver sprat and eventually reaching the great white shark. A shorter Challenge mode picks four random fish (from smallest to largest) instead. The player can choose to play any fish in Practice Mode.
Finally, in Create-A-Fish the player creates their own species based on various parameters such as size and agility, which also affect the appearance of the fish. The color, special ability, and nocturnal or diurnal habits are also selected. Special moves, also present for some 'real' fish, include the stingray's sting and the cuttlefish's ink squirt.
Each fish has different preferences for food, as described in the educational summary before the game starts. The game consists of nine screens, arranged in three levels from the sandy bottom to the reef's top, that various fish, including the player, move through looking for food. To survive, or to gain enough points to reach the next fish, the player's fish has to find enough food (which can include plants, crustaceans or coral as well as fish) to prevent its constantly decreasing energy bar from reaching 0 and death. The other main conce
|
https://en.wikipedia.org/wiki/Gate%20%28hydraulic%20engineering%29
|
In hydraulic engineering, a gate is a rotating or sliding structure, supported by hinges or by a rotating horizontal or vertical axis, that can be located at one end of a large pipe or canal in order to control the flow of water or another fluid from one side to the other. It is usually placed at the mouth of irrigation channels to avoid water loss, or at the end of drainage channels to prevent water from entering.
Gate Valve
One application of the gate principle, used in fields such as manufacturing and mining, is the gate valve. Fluids run through the valve to lubricate the moving parts of a machine, transmit power, close off openings to moving parts, and help carry away heat. These fluids flow through the gate valve with little resistance to flow and only a small drop in pressure. Within the gate valve, there is a gate-like disk that moves up and down perpendicular to the path of flow and seats against two seat faces to shut off flow.
The velocity of the fluid against a partly opened disk may cause vibration and chattering which will ultimately lead to damage to the seating surfaces. This is a common way that gate valves fail.
See also
Sluice
Valve
References
Hydraulic engineering
|
https://en.wikipedia.org/wiki/Whitaker%20Foundation
|
The Whitaker Foundation was based in Arlington, Virginia and was an organization that primarily supported biomedical engineering education and research, but also supported other forms of medical research. It was founded and funded by U. A. Whitaker in 1975 upon his death with additional support coming from his wife Helen Whitaker upon her death in 1982. The foundation contributed more than $700 million to various universities and medical schools. The foundation decided to spend its financial resources over a finite period, rather than creating an organization that would be around forever, in order to have the maximum impact. The Whitaker Foundation closed on June 30, 2006. The foundation helped create 30 biomedical engineering programs at various universities in the United States and helped finance the construction of 13 buildings, many of them subsequently bearing the name "Whitaker" in some form.
Whitaker International Fellows and Scholars Program
The Whitaker International Fellows and Scholars Program funded more than 400 pre-doctoral research fellows and post-doctoral scholars between 2011 and 2018 to perform biomedical research outside of the United States. The program was managed by the Institute for International Education, who also manages the Fulbright Program. In addition to traditional laboratory research, the Whitaker International Program also funded internships in scientific policy and classroom-based educational programs. The last grants were awarded in 2018, however, the program continues to pursue Concluding Initiatives that develop and promote leadership in biomedical engineering, with an international focus.
References
External links
The Whitaker Foundation
Whitaker International Fellows and Scholars Program
Biomedical engineering
Medical and health foundations in the United States
|
https://en.wikipedia.org/wiki/LCD%20Smartie
|
LCD Smartie is open-source software for Microsoft Windows which allows a character LCD to be used as an auxiliary display device for a PC. Supported devices include displays based on the Hitachi HD44780 LCD controller, the Matrix Orbital Serial/USB LCD, and Palm OS devices (when used in conjunction with PalmOrb). The program has built-in support for many system statistics (e.g. CPU load, network utilization, free disk space), downloading RSS feeds, Winamp integration and support for several other popular applications. To support less common applications, LCD Smartie uses a powerful plugin system.
The project was started as freeware by BasieP, who wrote it in Delphi. After running the software as freeware from 2001 to late 2004, BasieP passed the project on to Chris Lansley as an open-source project hosted on the SourceForge servers. Chris Lansley maintained the project for a few years, and the project now remains alive thanks to its community.
LCD Smartie is relatively mature software and development of the main executable has slowed considerably; most new features are introduced by plugins released both by the core team and by the community. The LCD Smartie forums are the primary source for support and developer discussion.
To facilitate the use of LCD Smartie on modern PCs running Windows 7 and 8, the team has started working on a USB interface for connecting LCDs to a PC that does not require any additional kernel driver and provides a complete plug-and-play experience.
External links
Official project page on SourceForge.
Official program forum
Limbo's home page with plugins for LCD Smartie.
lcdsmartie-laz An actively maintained fork
Free software
Liquid crystal displays
Pascal (programming language) software
|
https://en.wikipedia.org/wiki/Medical%20transcription
|
Medical transcription, also known as MT, is an allied health profession dealing with the process of transcribing voice-recorded medical reports that are dictated by physicians, nurses and other healthcare practitioners. Medical reports can be voice files, notes taken during a lecture, or other spoken material. These are dictated over the phone or uploaded digitally via the Internet or through smart phone apps.
History
Medical transcription as it is currently known has existed since the beginning of the 20th century when standardization of medical records and data became critical to research. At that time, medical stenographers recorded medical information, taking doctors' dictation in shorthand. With the creation of audio recording devices, it became possible for physicians and their transcribers to work asynchronously.
Over the years, transcription equipment has changed from manual typewriters, to electric typewriters, to word processors, and finally to computers. Storage methods have also changed: from plastic disks and magnetic belts to cassettes, endless loops, and digital recordings. Today, speech recognition (SR), also known as continuous speech recognition (CSR), is increasingly used, with medical transcriptionists and, in some cases, "editors" providing supplemental editorial services. Natural-language processing takes "automatic" transcription a step further, providing an interpretive function that speech recognition alone does not provide.
In the past, these medical reports consisted of very abbreviated handwritten notes that were added in the patient's file for interpretation by the primary physician responsible for the treatment. Ultimately, these handwritten notes and typed reports were consolidated into a single patient file and physically stored along with thousands of other patient records in the medical records department. Whenever the need arose to review the records of a specific patient, the patient's file would be retrieved from the filing c
|
https://en.wikipedia.org/wiki/Converse%20relation
|
In mathematics, the converse relation, or transpose, of a binary relation is the relation that occurs when the order of the elements is switched in the relation. For example, the converse of the relation 'child of' is the relation 'parent of'. In formal terms, if X and Y are sets and L is a relation from X to Y, then the converse L^T is the relation defined so that y L^T x if and only if x L y. In set-builder notation, L^T = {(y, x) : (x, y) is in L}.
The notation is analogous with that for an inverse function. Although many functions do not have an inverse, every relation does have a unique converse. The unary operation that maps a relation to the converse relation is an involution, so it induces the structure of a semigroup with involution on the binary relations on a set, or, more generally, induces a dagger category on the category of relations as detailed below. As a unary operation, taking the converse (sometimes called conversion or transposition) commutes with the order-related operations of the calculus of relations, that is it commutes with union, intersection, and complement.
Since a relation may be represented by a logical matrix, and the logical matrix of the converse relation is the transpose of the original, the converse relation is also called the transpose relation. It has also been called the opposite or dual of the original relation, the inverse of the original relation, or the reciprocal of the relation.
Other notations for the converse relation include L^T, L^(-1), and L̆.
Examples
For the usual (possibly strict or partial) order relations, the converse is the naively expected "opposite" order; for example, the converse of ≤ is ≥, and the converse of < is >.
A relation may be represented by a logical matrix; the converse relation is then represented by the transpose of that matrix.
The converse of kinship relations are named: " is a child of " has converse " is a parent of ". " is a nephew or niece of " has converse " is an uncle or aunt of ". The relation " is a sibling of " is its own converse, since it is a symmetric relation.
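A short Python sketch of both views of the converse, as the set of reversed pairs and as the transpose of the logical matrix (the example relation is arbitrary):

elements = [1, 2, 3]
R = {(1, 2), (1, 3), (2, 3)}                    # a relation on {1, 2, 3}
R_converse = {(y, x) for (x, y) in R}           # reverse every pair

M   = [[(x, y) in R for y in elements] for x in elements]            # logical matrix of R
M_T = [[M[j][i] for j in range(len(M))] for i in range(len(M))]      # its transpose

# The transpose of the matrix of R is exactly the matrix of the converse relation.
assert M_T == [[(x, y) in R_converse for y in elements] for x in elements]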
Properties
In the monoid of binary end
|
https://en.wikipedia.org/wiki/Delay%20differential%20equation
|
In mathematics, delay differential equations (DDEs) are a type of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times.
DDEs are also called time-delay systems, systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. They belong to the class of systems with the functional state, i.e. partial differential equations (PDEs) which are infinite dimensional, as opposed to ordinary differential equations (ODEs) having a finite dimensional state vector. Four points may give a possible explanation of the popularity of DDEs:
Aftereffect is an applied problem: it is well known that, together with the increasing expectations of dynamic performances, engineers need their models to behave more like the real process. Many processes include aftereffect phenomena in their inner dynamics. In addition, actuators, sensors, and communication networks that are now involved in feedback control loops introduce such delays. Finally, besides actual delays, time lags are frequently used to simplify very high order models. Then, the interest for DDEs keeps on growing in all scientific areas and, especially, in control engineering.
Delay systems are still resistant to many classical controllers: one could think that the simplest approach would consist in replacing them by some finite-dimensional approximations. Unfortunately, ignoring effects which are adequately represented by DDEs is not a general alternative: in the best situation (constant and known delays), it leads to the same degree of complexity in the control design. In worst cases (time-varying delays, for instance), it is potentially disastrous in terms of stability and oscillations.
Voluntary introduction of delays can benefit the control system.
In spite of their complexity, DDEs often appear as simple infinite-dimensional models in the very complex ar
|
https://en.wikipedia.org/wiki/Wengo
|
Wengo was at the beginning of 2004 a subsidiary of French telecom service provider Neuf Cegetel. As of February 2012, Wengo employs 80 people in the Paris headquarters, and is a subsidiary of Vivendi. Wengo is now repositioned as an online personal and consulting services marketplace.
Voice over IP
Wengo was founded in September 2004, launching its service at the beginning of 2005 based on what is now known as the "WengoPhone, classic edition". In September 2005, Wengo opened its visiophony service at the same time as Skype on a cross platform (Windows, Linux and Mac OS X). In 2006, Wengo integrated the Gaim project into its software, allowing its users to communicate via Instant Messaging with other users on the MSN, Yahoo! or Google Talk networks.
Wengo and Skype started offering free PSTN calls in 2006, which accelerated the commoditization of telephony calls. In June 2006, Wengo offered a two-month unlimited calling plan to several destinations including Belgium, Guadeloupe, India, Martinique, Poland and Vietnam. The offer was posted on many websites, including FatWallet, and attracted many new customers. Wengo later suspended many of these accounts, in the company's interests, after many people began abusing the unlimited calling system.
WengoPhone
Originally, Wengo was supporting the development of WengoPhone. WengoPhone was a free and open source (GPL) VoIP (including video conferencing, SMS and chat) softphone through which it offered PC to PSTN phone calls. This software used the free and open SIP protocol, it was developed under the name WengoPhone by the OpenWengo project. Their Firefox browser extension was the first browser-based VoIP client for OS X. The VoIP service was presented on the French market as an attempt to compete with Skype, Yahoo and other virtual network service providers. The economic model of this activity, however, was not sustainable. Wengo decided that WengoPhone was outside its core business, and in February 2008 transferred t
|
https://en.wikipedia.org/wiki/Method%20of%20averaging
|
In mathematics, more specifically in dynamical systems, the method of averaging (also called averaging theory) exploits systems containing a time-scale separation: a fast oscillation versus a slow drift. It suggests that we perform an averaging over a given amount of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting dynamics. The approximate solution is valid over a finite time inversely proportional to the parameter denoting the slow time scale, so there is a trade-off between how accurate the approximate solution is and for how long it remains close to the original solution.
More precisely, the system has the following form
dx/dt = ε f(x, t, ε),   0 < ε << 1,
of a phase space variable x. The fast oscillation is given by f versus a slow drift of dx/dt. The averaging method yields an autonomous dynamical system
dy/dt = ε (1/T) ∫_0^T f(y, s, 0) ds,
which approximates the solution curves of the original system inside a connected and compact region of the phase space and over a time of order 1/ε.
Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for . In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifold and invariant manifolds, as well as their stability in the phase space of the averaged system.
In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for , with the corresponding averaged system , in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment.
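A numerical sketch of the idea (the system chosen here is an assumption made purely for illustration, not the article's example): the perturbed equation dx/dt = ε x (1 + cos t) is compared with its averaged version dy/dt = ε y, obtained because 1 + cos t has mean value 1 over a period.

import numpy as np

eps = 0.05
dt, T = 0.001, 1.0 / eps                # validity horizon of order 1/eps
t = np.arange(0.0, T, dt)
x = np.empty_like(t); y = np.empty_like(t)
x[0] = y[0] = 1.0

for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * eps * x[k] * (1.0 + np.cos(t[k]))   # original, fast-oscillating system
    y[k + 1] = y[k] + dt * eps * y[k]                          # averaged system

print(np.max(np.abs(x - y)))            # stays of order eps over the whole interval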
The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics (see, for example in ).
First example
Consider a perturbed logistic growth
and the averaged equation
The purpose of the method o
|
https://en.wikipedia.org/wiki/Security%20testing
|
Security testing is a process intended to detect flaws in the security mechanisms of an information system and as such help enable it to protect data and maintain functionality as intended. Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements.
Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation. Actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a Security Taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient. Encryption is a common means of providing confidentiality, but it is by no means the only way of ensuring it.
Integrity
Integrity of information refers to protecting information from being modified by unauthorized parties.
A measure intended to allow the receiver to determine that the information provided by a system is correct.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication.
Testing also checks whether the correct information is transferred from one application to another.
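A minimal Python sketch of such an algorithmic check, using an HMAC appended to the message (the key and messages are placeholders):

import hashlib, hmac

key = b"shared-secret"                       # assumed to be pre-shared between the parties
message = b"transfer 100 to account 42"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # appended to the communication

def verify(msg, received_tag):
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(message, tag))                          # True: message unmodified
print(verify(b"transfer 999 to account 42", tag))    # False: modification is detected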
Authentication
This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims to be, or assuring that a computer program is a trusted one.
Authorization
The process of determining that a requester is allowed to receive a service or perform an operation.
|
https://en.wikipedia.org/wiki/Topology%20table
|
A topology table is used by routers that route traffic in a network. It consists of all routing tables inside the Autonomous System where the router is positioned. Each router using the routing protocol EIGRP maintains a topology table for each configured network protocol; all learned routes that lead to a destination are found in the topology table. EIGRP requires a reliable connection for exchanging its updates. The routing tables of all routers in an Autonomous System are the same.
Routing
Table
|
https://en.wikipedia.org/wiki/Rake%20%28software%29
|
Rake is a software task management and build automation tool created by Jim Weirich. It allows the user to specify tasks and describe dependencies as well as to group tasks in a namespace. It is similar to SCons and Make. It is written in the Ruby programming language, and the Rakefiles (the equivalent of Makefiles in Make) use Ruby syntax. Rake uses Ruby's anonymous function blocks to define various tasks, allowing the use of Ruby syntax. It has a library of common tasks: for example, functions to do common file-manipulation tasks and a library to remove compiled files (the "clean" task). Like Make, Rake can also synthesize tasks based on patterns: for example, automatically building a file compilation task based on filename patterns. Rake is now part of the standard library of Ruby from version 1.9 onward.
Example
Below is an example of a simple Rake script to build a C Hello World program.
file 'hello.o' => 'hello.c' do
sh 'cc -c -o hello.o hello.c'
end
file 'hello' => 'hello.o' do
sh 'cc -o hello hello.o'
end
Rules
When a file is named as a prerequisite but it does not have a file task defined for it, Rake will attempt to synthesize a task by looking at a list of rules supplied in the Rakefile. For example, suppose we were trying to invoke task "mycode.o" with no tasks defined for it. If the Rakefile has a rule that looks like this:
rule '.o' => '.c' do |t|
sh "cc #{t.source} -c -o #{t.name}"
end
This rule will synthesize any task that ends in ".o". It has as a prerequisite that a source file with an extension of ".c" must exist. If Rake is able to find a file named "mycode.c", it will automatically create a task that builds "mycode.o" from "mycode.c". If the file "mycode.c" does not exist, Rake will attempt to recursively synthesize a rule for it.
When a task is synthesized from a rule, the source attribute of the task is set to the matching source file. This allows users to write rules with actions that reference the source file.
Advanced rules
Any reg
|
https://en.wikipedia.org/wiki/Nonnegative%20matrix
|
In mathematics, a nonnegative matrix, written X ≥ 0, is a matrix in which all the elements are equal to or greater than zero, that is, x_ij ≥ 0 for all i and j.
A positive matrix is a matrix in which all the elements are strictly greater than zero. The set of positive matrices is a subset of all non-negative matrices. While such matrices are commonly found, the term is only occasionally used due to the possible confusion with positive-definite matrices, which are different. A matrix which is both non-negative and is positive semidefinite is called a doubly non-negative matrix.
A rectangular non-negative matrix can be approximated by a decomposition with two other non-negative matrices via non-negative matrix factorization.
Eigenvalues and eigenvectors of square positive matrices are described by the Perron–Frobenius theorem.
Properties
The trace and every row and column sum/product of a nonnegative matrix is nonnegative.
Inversion
The inverse of any non-singular M-matrix is a non-negative matrix. If the non-singular M-matrix is also symmetric then it is called a Stieltjes matrix.
The inverse of a non-negative matrix is usually not non-negative. The exception is the non-negative monomial matrices: a non-negative matrix has a non-negative inverse if and only if it is a (non-negative) monomial matrix. Note that thus the inverse of a positive matrix is not positive or even non-negative, since positive matrices are not monomial for dimension n > 1.
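A small NumPy check of the two statements above, using arbitrary example matrices:

import numpy as np

# A strictly positive matrix whose inverse nevertheless has negative entries.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.inv(A))        # [[ 1. -1.], [-1.  2.]] -- not non-negative

# A non-negative monomial matrix (one positive entry per row and column)
# does have a non-negative inverse.
P = np.array([[0.0, 3.0],
              [2.0, 0.0]])
print(np.linalg.inv(P))        # [[0. 0.5], [0.333 0.]] -- non-negative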
Specializations
There are a number of groups of matrices that form specializations of non-negative matrices, e.g. stochastic matrix; doubly stochastic matrix; symmetric non-negative matrix.
See also
Metzler matrix
Bibliography
Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. .
A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, 1979 (chapter 2),
R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 1990 (chapter 8).
Henryk Minc, N
|
https://en.wikipedia.org/wiki/Fundamental%20vector%20field
|
In the study of mathematics and especially differential geometry, fundamental vector fields are an instrument that describes the infinitesimal behaviour of a smooth Lie group action on a smooth manifold. Such vector fields find important applications in the study of Lie theory, symplectic geometry, and the study of Hamiltonian group actions.
Motivation
Important to applications in mathematics and physics is the notion of a flow on a manifold. In particular, if M is a smooth manifold and X is a smooth vector field, one is interested in finding integral curves to X. More precisely, given p in M, one is interested in curves γ_p : R → M such that
γ_p'(t) = X(γ_p(t)),   γ_p(0) = p,
for which local solutions are guaranteed by the Existence and Uniqueness Theorem of Ordinary Differential Equations. If X is furthermore a complete vector field, then the flow of X, defined as the collection of all integral curves for X, is a diffeomorphism of M. The flow φ : R × M → M given by X is in fact an action of the additive Lie group (R, +) on M.
Conversely, every smooth action φ : R × M → M defines a complete vector field X via the equation:
X_p = d/dt|_{t=0} φ(t, p).
It is then a simple result that there is a bijective correspondence between smooth actions of R on M and complete vector fields on M.
In the language of flow theory, the vector field X is called the infinitesimal generator. Intuitively, the behaviour of the flow at each point corresponds to the "direction" indicated by the vector field. It is a natural question to ask whether one may establish a similar correspondence between vector fields and more arbitrary Lie group actions on M.
Definition
Let G be a Lie group with corresponding Lie algebra 𝔤. Furthermore, let M be a smooth manifold endowed with a smooth action φ : G × M → M. Denote by φ_p : G → M the map such that φ_p(g) = φ(g, p), called the orbit map of φ corresponding to p. For ξ in 𝔤, the fundamental vector field X_ξ corresponding to ξ is given by any of the following equivalent definitions:
X_ξ(p) = d_e φ_p (ξ) = d/dt|_{t=0} φ(exp(tξ), p) = d_{(e,p)} φ (ξ, 0),
where d denotes the differential of a smooth map, exp : 𝔤 → G is the Lie group exponential, and 0 is the zero vector in the tangent space T_p M.
The map ξ ↦ X_ξ can then be shown to be a Lie algebra homomorphism.
Application
|
https://en.wikipedia.org/wiki/Generalized%20quadrangle
|
In geometry, a generalized quadrangle is an incidence structure whose main feature is the lack of any triangles (yet containing many quadrangles). A generalized quadrangle is by definition a polar space of rank two. They are the generalized n-gons with n = 4 and the near 2n-gons with n = 2. They are also precisely the partial geometries pg(s,t,α) with α = 1.
Definition
A generalized quadrangle is an incidence structure (P,B,I), with I ⊆ P × B an incidence relation, satisfying certain axioms. Elements of P are by definition the points of the generalized quadrangle, elements of B the lines. The axioms are the following:
There is an s (s ≥ 1) such that on every line there are exactly s + 1 points. There is at most one point on two distinct lines.
There is a t (t ≥ 1) such that through every point there are exactly t + 1 lines. There is at most one line through two distinct points.
For every point p not on a line L, there is a unique line M and a unique point q, such that p is on M, and q on M and L.
(s,t) are the parameters of the generalized quadrangle. The parameters are allowed to be infinite. If either s or t is one, the generalized quadrangle is called trivial. For example, the 3x3 grid with P = {1,2,3,4,5,6,7,8,9} and B = {123, 456, 789, 147, 258, 369} is a trivial GQ with s = 2 and t = 1. A generalized quadrangle with parameters (s,t) is often denoted by GQ(s,t).
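The axioms can be checked mechanically for the trivial 3x3 grid example above; a short Python sketch:

from itertools import product

P = set(range(1, 10))
B = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {1, 4, 7}, {2, 5, 8}, {3, 6, 9}]

assert all(len(line) == 3 for line in B)                      # s + 1 = 3 points on every line
assert all(sum(p in line for line in B) == 2 for p in P)      # t + 1 = 2 lines through every point

# Third axiom: for every non-incident point-line pair (p, L) there is exactly one
# line through p that meets L.
for p, L in product(P, B):
    if p not in L:
        assert sum(1 for M in B if p in M and M & L) == 1
print("GQ(2,1) axioms verified")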
The smallest non-trivial generalized quadrangle is GQ(2,2), whose representation was dubbed "the doily" by Stan Payne in 1973.
Properties
Graphs
There are two interesting graphs that can be obtained from a generalized quadrangle.
The collinearity graph having as vertices the points of a generalized quadrangle, with the collinear points connected. This graph is a strongly regular graph with parameters ((s+1)(st+1), s(t+1), s-1, t+1) where (s,t) is the order of the GQ.
The incidence graph whose vertices are the points and lines of the generalized quadrangle and two vertices are adjacent if one is a point, t
|
https://en.wikipedia.org/wiki/The%20Valley%20of%20the%20Shadow
|
The Valley of the Shadow is a digital history project about the American Civil War, launched in 1993 and hosted by the University of Virginia. It details the experiences of Confederate soldiers from Augusta County, Virginia and Union soldiers from Franklin County, Pennsylvania, United States.
Project founders William G. Thomas III and Edward L. Ayers referred to it as "an applied experiment in digital scholarship." The site contains scanned copies of four newspapers from each of the counties in addition to those of surrounding cities such as Richmond and New York: the Staunton Spectator (Staunton, Virginia; Whig), the Republican Vindicator (Staunton, Virginia; Democratic), the Franklin Repository and Transcript (Chambersburg, Pennsylvania; Republican), and the Valley Spirit (Chambersburg, Pennsylvania; Democratic). Elsa A. Nystrom and Justin A. Nystrom state about the site:
In 2022, on the 30th anniversary of the project, New American History released an updated version of the Valley of the Shadow with enhanced images and search features.
References
Further reading
Alkalimat, Abdul, The African American Experience in Cyberspace: A Resource Guide to the Best Web Sites on Black Culture and History
Serge Noiret: "La "nuova storiografia digitale" negli Stati Uniti, (1999-2004)." in Memoria e Ricerca, n.18, January–April 2005, pp.169-185, URL: .
Serge Noiret: "Y a t-il une Histoire Numérique 2.0 ?" in Les historiens et l'informatique. Un métier à réinventer, edited by Jean-Philippe Genet and Andrea Zorzi, Rome: Ecole Française de Rome, 2011.
External links
The Valley of the Shadow - original website
The Valley of the Shadow - updated website
Educational institutions in the United States with year of establishment missing
Information technology projects
Virginia in the American Civil War
Pennsylvania in the American Civil War
History websites of the United States
University of Virginia
Chambersburg, Pennsylvania
Digital history projects
Digital humanities proj
|
https://en.wikipedia.org/wiki/Grid%20dip%20oscillator
|
"Dip meter" can also refer to an influential early commercial expert system called Dipmeter Advisor; or may refer to an instrument that measures the magnetic dip angle of Earth's magnetic field, the field line angle in a vertical plane.
Grid dip oscillator (GDO), also called grid dip meter, gate dip meter, dip meter, or just dipper, is a type of electronic instrument that measures the resonant frequency of nearby unconnected radio frequency tuned circuits. It is a variable-frequency oscillator that circulates a small-amplitude signal through an exposed coil, whose electromagnetic field can interact with adjacent circuitry. The oscillator loses power when its coil is near a circuit that resonates at the same frequency. A meter on the GDO registers the amplitude drop, or "dip", hence the name.
Dip oscillators have been widely used by amateur radio operators for measuring the properties of resonant circuits, filters, and antennas. They can also be used for transmission line testing, as signal generators, and for measuring inductance and capacitance of components. Measurement with a GDO is called "dipping" a circuit.
Principle of operation
Central to the dip meter is a high-frequency variable-frequency oscillator with a calibrated tuning capacitor and matching interchangeable coils, as shown in the circuit diagram on the right. Resonance is indicated by a dip in amplitude of the signal within the GDO, by a meter on the device.
When the oscillator's exposed coil is in the vicinity of another resonant circuit, the coupled pair behaves as a low-Q transformer whose coupling is most effective when their respective resonant frequencies match. The degree of coupling affects the frequency and amplitude of oscillation in the dip meter, which is sensed in any of several ways, the simplest and most usual of which is a built-in microammeter. The distance between the coil and the tested circuit needs to be adjusted carefully so that the GDO amplitude is significantly affected
|
https://en.wikipedia.org/wiki/Design%20marker
|
In software engineering, a design marker is a technique of documenting design choices in source code using the Marker Interface pattern. Marker interfaces have traditionally been limited to those interfaces intended for explicit, runtime verification (normally via instanceof). A design marker is a marker interface used to document a design choice. In Java programs the design choice is documented in the marker interface's Javadoc documentation.
Many choices made at software design time cannot be directly expressed in today's implementation languages like C# and Java. These design choices (known by names like Design Pattern, Design Contract, Refactoring, Effective Programming Idioms, Blueprints, etc.) must be implemented via programming and naming conventions, because they go beyond the built-in functionality of production programming languages. The consequences of this limitation conspire over time to erode design investments as well as to promote a false segregation between the designer and implementer mindsets.
Two independent proposals recognize these problems and give the same basic strategies for tackling them. Until now, the budding explicit programming movement has been linked to the use of an experimental Java research tool called ELIDE. The Design Markers technique requires only standard Javadoc-like tools to garner many of the benefits of Explicit Programming.
See also
Design Patterns
Marker interface pattern
External links
Design Markers: Explicit Programming for the rest of us
Design Markers home page
Explicit Programming manifesto
Software design
|
https://en.wikipedia.org/wiki/Polar%20space
|
In mathematics, in the field of geometry, a polar space of rank n (n ≥ 3), or projective index n − 1, consists of a set P, conventionally called the set of points, together with certain subsets of P, called subspaces, that satisfy these axioms:
Every subspace is isomorphic to a projective space PG(d, K) with −1 ≤ d ≤ n − 1 and K a division ring. (That is, it is a Desarguesian projective geometry.) For each subspace the corresponding d is called its dimension.
The intersection of two subspaces is always a subspace.
For each subspace A of dimension n − 1 and each point p not in A, there is a unique subspace B of dimension n − 1 containing p and such that A ∩ B is (n − 2)-dimensional. The points in A ∩ B are exactly the points of A that are in a common subspace of dimension 1 with p.
There are at least two disjoint subspaces of dimension n − 1.
It is possible to define and study a slightly bigger class of objects using only the relationship between points and lines: a polar space is a partial linear space (P, L), such that for each point p ∈ P and each line l ∈ L, the set of points of l collinear to p is either a singleton or the whole of l.
Finite polar spaces (where P is a finite set) are also studied as combinatorial objects.
Generalized quadrangles
A polar space of rank two is a generalized quadrangle; in this case, in the latter definition, the set of points of a line l collinear with a point p is the whole of l only if p ∈ l. One recovers the former definition from the latter under the assumptions that lines have more than 2 points, points lie on more than 2 lines, and there exist a line l and a point p not on l so that p is collinear to all points of l.
Finite classical polar spaces
Let PG(n, q) be the projective space of dimension n over the finite field GF(q) and let f be a reflexive sesquilinear form or a quadratic form on the underlying vector space. The elements of the finite classical polar space associated with this form are the elements of the totally isotropic subspaces (when f is a sesquilinear form) or the totally singular subspa
|
https://en.wikipedia.org/wiki/Hyperbolic%20equilibrium%20point
|
In the study of dynamical systems, a hyperbolic equilibrium point or hyperbolic fixed point is a fixed point that does not have any center manifolds. Near a hyperbolic point the orbits of a two-dimensional, non-dissipative system resemble hyperbolas. This fails to hold in general. Strogatz notes that "hyperbolic is an unfortunate name—it sounds like it should mean 'saddle point'—but it has become standard." Several properties hold about a neighborhood of a hyperbolic point, notably
A stable manifold and an unstable manifold exist,
Shadowing occurs,
The dynamics on the invariant set can be represented via symbolic dynamics,
A natural measure can be defined,
The system is structurally stable.
Maps
If f : R^n → R^n is a C1 map and p is a fixed point, then p is said to be a hyperbolic fixed point when the Jacobian matrix Df(p) has no eigenvalues on the unit circle.
One example of a map whose only fixed point is hyperbolic is Arnold's cat map:
(x, y) → (2x + y, x + y) (mod 1).
Since the eigenvalues of its matrix [[2, 1], [1, 1]] are given by
λ₁ = (3 + √5)/2 ≈ 2.618 and λ₂ = (3 − √5)/2 ≈ 0.382,
we know that the Lyapunov exponents are:
ln λ₁ = −ln λ₂ = ln((3 + √5)/2) ≈ 0.962.
Therefore it is a saddle point.
Flows
Let be a C1 vector field with a critical point p, i.e., F(p) = 0, and let J denote the Jacobian matrix of F at p. If the matrix J has no eigenvalues with zero real parts then p is called hyperbolic. Hyperbolic fixed points may also be called hyperbolic critical points or elementary critical points.
The Hartman–Grobman theorem states that the orbit structure of a dynamical system in a neighbourhood of a hyperbolic equilibrium point is topologically equivalent to the orbit structure of the linearized dynamical system.
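A minimal numerical sketch of these definitions, assuming NumPy is available: hyperbolicity is decided purely from the eigenvalues of the Jacobian, with the unit circle as the boundary for maps and the imaginary axis for flows.

import numpy as np

def is_hyperbolic_flow(jacobian, tol=1e-9):
    """An equilibrium of a flow x' = F(x) is hyperbolic when the Jacobian
    of F there has no eigenvalue with zero real part."""
    eigvals = np.linalg.eigvals(jacobian)
    return bool(np.all(np.abs(eigvals.real) > tol))

def is_hyperbolic_map(jacobian, tol=1e-9):
    """A fixed point of a map is hyperbolic when no Jacobian eigenvalue
    lies on the unit circle."""
    eigvals = np.linalg.eigvals(jacobian)
    return bool(np.all(np.abs(np.abs(eigvals) - 1.0) > tol))

# Arnold's cat map has Jacobian [[2, 1], [1, 1]]; its eigenvalues (3 ± √5)/2
# are both off the unit circle, so the fixed point (0, 0) is hyperbolic.
print(is_hyperbolic_map(np.array([[2.0, 1.0], [1.0, 1.0]])))    # True

# A saddle of a flow: eigenvalues +1 and -1, neither has zero real part.
print(is_hyperbolic_flow(np.array([[1.0, 0.0], [0.0, -1.0]])))  # True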
Example
Consider the nonlinear system
(0, 0) is the only equilibrium point. The Jacobian matrix of the linearization at the equilibrium point is
The eigenvalues of this matrix are . For all values of α ≠ 0, the eigenvalues have non-zero real part. Thus, this equilibrium point is a hyperbolic equilibrium point. The linearized system will behave similar to the non-linear system near (0, 0). When α
|
https://en.wikipedia.org/wiki/Center%20manifold
|
In the mathematics of evolving systems, the concept of a center manifold was originally developed to determine stability of degenerate equilibria. Subsequently, the concept of center manifolds was realised to be fundamental to mathematical modelling.
Center manifolds play an important role in bifurcation theory because interesting behavior takes place on the center manifold and in multiscale mathematics because the long time dynamics of the micro-scale often are attracted to a relatively simple center manifold involving the coarse scale variables.
Informal description
Saturn's rings capture much center-manifold geometry. Dust particles in the rings are subject to tidal forces, which act characteristically to "compress and stretch". The forces compress particle orbits into the rings, stretch particles along the rings, and ignore small shifts in ring radius. The compressing direction defines the stable manifold, the stretching direction defines the unstable manifold, and the neutral direction is the center manifold.
While geometrically accurate, one major difference distinguishes Saturn's rings from a physical center manifold. Like most dynamical systems, particles in the rings are governed by second-order laws. Understanding trajectories requires modeling position and a velocity/momentum variable, to give a tangent manifold structure called phase space. Physically speaking, the stable, unstable and neutral manifolds of Saturn's ring system do not divide up the coordinate space for a particle's position; they analogously divide up phase space instead.
The center manifold typically behaves as an extended collection of saddle points. Some position-velocity pairs are driven towards the center manifold, while others are flung away from it. Small perturbations generally push them about randomly, and often push them out of the center manifold. There are, however, dramatic counterexamples to instability at the center manifold, called Lagrangian coheren
|
https://en.wikipedia.org/wiki/Homoclinic%20orbit
|
In the study of dynamical systems, a homoclinic orbit is a path through phase space which joins a saddle equilibrium point to itself. More precisely, a homoclinic orbit lies in the intersection of the stable manifold and the unstable manifold of an equilibrium. It is a heteroclinic orbit–a path between any two equilibrium points–in which the endpoints are one and the same.
Consider the continuous dynamical system described by the ordinary differential equation
dx/dt = f(x), x ∈ R^n.
Suppose there is an equilibrium at x = x₀; then a solution Φ(t) is a homoclinic orbit if
Φ(t) → x₀ as t → ±∞.
If the phase space has three or more dimensions, then it is important to consider the topology of the unstable manifold of the saddle point. The figures show two cases. First, when the stable manifold is topologically a cylinder, and secondly, when the unstable manifold is topologically a Möbius strip; in this case the homoclinic orbit is called twisted.
Discrete dynamical system
Homoclinic orbits and homoclinic points are defined in the same way for iterated functions, as the intersection of the stable set and unstable set of some fixed point or periodic point of the system.
We also have the notion of homoclinic orbit when considering discrete dynamical systems. In such a case, if is a diffeomorphism of a manifold , we say that is a homoclinic point if it has the same past and future - more specifically, if there exists a fixed (or periodic) point
such that
Properties
The existence of one homoclinic point implies the existence of an infinite number of them.
This comes from its definition: the intersection of a stable and unstable set. Both sets are invariant by definition, which means that every forward iterate of the homoclinic point lies in both the stable and the unstable set, and so is itself a homoclinic point. Under repeated iteration the orbit approaches the equilibrium point along the stable set while remaining on the unstable set, producing infinitely many distinct homoclinic points, which shows this property.
This property suggests that complicated dynamics arise by the existence of a hom
|
https://en.wikipedia.org/wiki/Heteroclinic%20orbit
|
The phase portrait of the pendulum equation x″ + sin x = 0. The highlighted curve shows the heteroclinic orbit from (x, x′) = (−π, 0) to (x, x′) = (π, 0). This orbit corresponds with the (rigid) pendulum starting upright, making one revolution through its lowest position, and ending upright again.
In mathematics, in the phase portrait of a dynamical system, a heteroclinic orbit (sometimes called a heteroclinic connection) is a path in phase space which joins two different equilibrium points. If the equilibrium points at the start and end of the orbit are the same, the orbit is a homoclinic orbit.
Consider the continuous dynamical system described by the ordinary differential equation
dx/dt = f(x), x ∈ R^n.
Suppose there are equilibria at x = x₀ and x = x₁. Then a solution φ(t) is a heteroclinic orbit from x₀ to x₁ if both limits are satisfied:
φ(t) → x₀ as t → −∞ and φ(t) → x₁ as t → +∞.
This implies that the orbit is contained in the stable manifold of x₁ and the unstable manifold of x₀.
Symbolic dynamics
By using the Markov partition, the long-time behaviour of a hyperbolic system can be studied using the techniques of symbolic dynamics. In this case, a heteroclinic orbit has a particularly simple and clear representation. Suppose that S = {1, 2, ..., M} is a finite set of M symbols. The dynamics of a point x is then represented by a bi-infinite string of symbols
σ = (..., s₋₁, s₀, s₁, ...) with sᵢ ∈ S.
A periodic point of the system is simply a recurring sequence of letters. A heteroclinic orbit is then the joining of two distinct periodic orbits. It may be written as
p^ω s q^ω
where p = (p₁, p₂, ..., p_k) is a sequence of symbols of length k (of course, pᵢ ∈ S), and q = (q₁, q₂, ..., q_m) is another sequence of symbols of length m (likewise, qᵢ ∈ S). The notation p^ω simply denotes the repetition of p an infinite number of times. Thus, a heteroclinic orbit can be understood as the transition from one periodic orbit to another. By contrast, a homoclinic orbit can be written as
p^ω s p^ω
with the intermediate sequence s being non-empty, and, of course, not being p, as otherwise the orbit would simply be p^ω.
See also
Heteroclinic co
|
https://en.wikipedia.org/wiki/Peixoto%27s%20theorem
|
In the theory of dynamical systems, Peixoto's theorem, proved by Maurício Peixoto, states that among all smooth flows on surfaces, i.e. compact two-dimensional manifolds, structurally stable systems may be characterized by the following properties:
The set of non-wandering points consists only of periodic orbits and fixed points.
The set of fixed points is finite and consists only of hyperbolic equilibrium points.
Finiteness of attracting or repelling periodic orbits.
Absence of saddle-to-saddle connections.
Moreover, they form an open set in the space of all flows endowed with C1 topology.
See also
Andronov–Pontryagin criterion
References
Jacob Palis, W. de Melo, Geometric Theory of Dynamical Systems. Springer-Verlag, 1982
Stability theory
Theorems in dynamical systems
|
https://en.wikipedia.org/wiki/Diode%20modelling
|
In electronics, diode modelling refers to the mathematical models used to approximate the actual behaviour of real diodes to enable calculations and circuit analysis. A diode's I-V curve is nonlinear.
A very accurate, but complicated, physical model composes the I-V curve from three exponentials with a slightly different steepness (i.e. ideality factor), which correspond to different recombination mechanisms in the device; at very large and very tiny currents the curve can be continued by linear segments (i.e. resistive behaviour).
In a relatively good approximation a diode is modelled by the single-exponential Shockley diode law. This nonlinearity still complicates calculations in circuits involving diodes, so even simpler models are often used.
This article discusses the modelling of p-n junction diodes, but the techniques may be generalized to other solid state diodes.
Large-signal modelling
Shockley diode model
The Shockley diode equation relates the diode current I of a p-n junction diode to the diode voltage V_D. This relationship is the diode I-V characteristic:
I = I_S (e^(V_D/(n·V_T)) − 1),
where I_S is the saturation current or scale current of the diode (the magnitude of the current that flows for negative V_D in excess of a few V_T, typically 10^−12 A). The scale current is proportional to the cross-sectional area of the diode. Continuing with the symbols: V_T is the thermal voltage (kT/q, about 26 mV at normal temperatures), and n is known as the diode ideality factor (for silicon diodes n is approximately 1 to 2).
When V_D ≫ n·V_T the formula can be simplified to:
I ≈ I_S · e^(V_D/(n·V_T)).
This expression is, however, only an approximation of a more complex I-V characteristic. Its applicability is particularly limited in case of ultrashallow junctions, for which better analytical models exist.
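A small numeric sketch of the law above, with assumed representative parameter values (I_S = 10^−12 A, n = 1.5, V_T ≈ 25.85 mV), also showing the simplified large-bias form:

import math

def shockley_current(v_d, i_s=1e-12, n=1.5, v_t=0.02585):
    """Shockley diode law: I = I_S * (exp(V_D / (n * V_T)) - 1).
    I_S, n and V_T here are assumed, representative values."""
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

def shockley_current_approx(v_d, i_s=1e-12, n=1.5, v_t=0.02585):
    """Simplified form, valid when V_D >> n*V_T so the -1 term is negligible."""
    return i_s * math.exp(v_d / (n * v_t))

for v in (0.4, 0.6, 0.7):
    exact, approx = shockley_current(v), shockley_current_approx(v)
    print(f"V_D = {v:.2f} V: I = {exact:.3e} A (approx {approx:.3e} A)")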
Diode-resistor circuit example
To illustrate the complications in using this law, consider the problem of finding the voltage across the diode in Figure 1.
Because the current flowing through the diode is the same as the current thro
|
https://en.wikipedia.org/wiki/El%20Nombre
|
El Nombre is a children's educational programme about an anthropomorphic Mexican gerbil character, originally from a series of educational sketches on Numbertime, the BBC schools programme about mathematics. He was also the only character to appear in all Numbertime episodes. His voice was provided by Steve Steen, while the other characters' voices were provided by Sophie Aldred, Kate Robbins, and (from 1999) former Blue Peter host Janet Ellis. For the ninth (and final) series of Numbertime in 2001, Michael Fenton-Stevens also provided voices of certain other characters in the El Nombre sketches.
The character's name means "The Name" in Spanish, not "The Number", which would be "El Número", but does mean "The Number" in Catalan.
Setting
El Nombre is set in the fictional town of Santa Flamingo (originally known as Santo Flamingo), home of Little Juan, his Mama, Pedro Gonzales, Juanita Conchita, Maria Consuela Tequila Chiquita, Little Pepita Consuela Tequila Chiquita, Tanto the tarantula, Señor Gelato the ice-cream seller, Leonardo de Sombrero the pizza delivery boy, Señor Calculo the bank manager, Señor Manuel the greengrocer, Miss Constanza Bonanza the school teacher, Señora Fedora the balloon seller and mayor, Señor Loco the steam engine driver, Señor Chipito the carpenter and the local bandit Don Fandango (although it was not actually given a name until the fifth series of Numbertime premiered in January 1998); whenever he was needed, El Nombre swung into action to solve the townspeople's simple mathematical problems, usually talking in rhyme. His character was a parody of the fictional hero Zorro, wearing a similar black cowl mask and huge sombrero, appearing unexpectedly to save the townsfolk from injustice, and generally swinging around on his bullwhip – however, unlike Zorro, he was often quite inept (in fact, on one occasion, Tanto tipped a bucket of water onto him after he made him reenact the Incy Wincy Spider rhyme).
When El Nombre first appeared on Nu
|
https://en.wikipedia.org/wiki/Gain%20compression
|
Gain compression is a reduction in differential or slope gain caused by nonlinearity of the transfer function of the amplifying device. This nonlinearity may be caused by heat due to power dissipation or by overdriving the active device beyond its linear region. It is a large-signal phenomenon of circuits.
Relevance
Gain compression is relevant in any system with a wide dynamic range, such as audio or RF. It is more common in tube circuits than transistor circuits, due to topology differences, possibly causing the differences in audio performance called "valve sound". The front-end RF amps of radio receivers are particularly susceptible to this phenomenon when overloaded by a strong unwanted signal.
Audio effects
A tube radio or tube amplifier will increase in volume to a point, and then as the input signal extends beyond the linear range of the device, the effective gain is reduced, altering the shape of the waveform. The effect is also present in transistor circuits. The extent of the effect depends on the topology of the amplifier.
Differences between clipping and compression
Clipping, as a form of signal compression, differs from the operation of the typical studio audio level compressor, in which gain compression is not instantaneous (delayed in time via attack and release settings).
Clipping destroys any audio information which is over a certain threshold. Compression and limiting, by contrast, change the shape of the entire waveform, not just the part of the waveform above the threshold. This is why it is possible to limit and compress with very high ratios without causing distortion.
Limiting or clipping
Gain is a linear operation. Gain compression is not linear and, as such, its effect is one of distortion, due to the nonlinearity of the transfer characteristic which also causes a loss of 'slope' or 'differential' gain. So the output is less than expected using the small signal gain of the amplifier.
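As an illustration of this loss of slope gain (using a tanh curve merely as a stand-in for a real device's transfer characteristic; the gain and saturation values are assumed), a short sketch shows the output falling below the small-signal-gain prediction as the drive level rises:

import math

SMALL_SIGNAL_GAIN = 10.0   # assumed linear (small-signal) voltage gain
V_SAT = 1.0                # assumed output saturation level of the stage

def amplifier(v_in):
    """Toy compressive transfer characteristic: linear for small inputs,
    flattening toward V_SAT as the stage is overdriven (tanh is only a
    convenient stand-in for a real device curve)."""
    return V_SAT * math.tanh(SMALL_SIGNAL_GAIN * v_in / V_SAT)

for v_in in (0.001, 0.01, 0.05, 0.1, 0.2):
    v_out = amplifier(v_in)
    ideal = SMALL_SIGNAL_GAIN * v_in
    gain_db = 20 * math.log10(v_out / v_in)
    print(f"v_in={v_in:.3f} V  v_out={v_out:.3f} V  "
          f"(linear prediction {ideal:.3f} V, gain {gain_db:.2f} dB)")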
In clipping, the signal is abruptly limited to a cert
|
https://en.wikipedia.org/wiki/Extinction%20vortex
|
Extinction vortices are a class of models through which conservation biologists, geneticists and ecologists can understand the dynamics of and categorize extinctions in the context of their causes. This model shows the events that ultimately lead small populations to become increasingly vulnerable as they spiral toward extinction. Developed by M. E. Gilpin and M. E. Soulé in 1986, there are currently four classes of extinction vortices. The first two (R and D) deal with environmental factors that have an effect on the ecosystem or community level, such as disturbance, pollution, habitat loss etc., whereas the second two (F and A) deal with genetic factors, such as inbreeding depression, outbreeding depression, genetic drift etc.
Types of vortices
R Vortex: The R vortex is initiated when there is a disturbance which facilitates a lowering of population size (N) and a corresponding increase in variability (Var(r)). This event can make populations vulnerable to additional disturbances which will lead to further decreases in population size (N) and further increases in variability (Var(r)). A prime example of this would be the disruption of sex ratios in a population away from the species optimum.
D Vortex: The D vortex is initiated when population size (N) decreases and variability (Var(r)) increases such that the spatial distribution (D) of the population is increased and the population becomes "patchy" or fragmented. Within these fragments, local extinction rates increase which, through positive feedback, further increases D.
F Vortex: The F vortex is initiated by a decrease in population size (N) which leads to a decrease in heterozygosity, and therefore a decrease in genetic diversity. Decreased population size makes the effects of genetic drift more prominent, resulting in increased risk of inbreeding depression and an increase in population genetic load, which over time will result in extinction.
A Vortex: The A vortex is a result of an increase in the impact
|
https://en.wikipedia.org/wiki/FLAG-tag
|
FLAG-tag, or FLAG octapeptide, or FLAG epitope, is a peptide protein tag that can be added to a protein using recombinant DNA technology, having the sequence DYKDDDDK (where D=aspartic acid, Y=tyrosine, and K=lysine). It is one of the most specific tags and it is an artificial antigen to which specific, high affinity monoclonal antibodies have been developed and hence can be used for protein purification by affinity chromatography and also can be used for locating proteins within living cells. FLAG-tag has been used to separate recombinant, overexpressed protein from wild-type protein expressed by the host organism. FLAG-tag can also be used in the isolation of protein complexes with multiple subunits, because FLAG-tag's mild purification procedure tends not to disrupt such complexes. FLAG-tag-based purification has been used to obtain proteins of sufficient purity and quality to carry out 3D structure determination by x-ray crystallography.
A FLAG-tag can be used in many different assays that require recognition by an antibody. If there is no antibody against a given protein, adding a FLAG-tag to a protein allows the protein to be studied with an antibody against the FLAG-tag sequence. Examples are cellular localization studies by immunofluorescence, immunoprecipitation or detection by SDS PAGE protein electrophoresis and Western blotting.
The peptide sequence of the FLAG-tag from the N-terminus to the C-terminus is: DYKDDDDK (1012 Da). Additionally, FLAG-tags may be used in tandem, commonly the 3xFLAG peptide: DYKDHD-G-DYKDHD-I-DYKDDDDK (with the final tag encoding an enterokinase cleavage site). FLAG-tag can be fused to the C-terminus or the N-terminus of a protein, or inserted within a protein. Some commercially available antibodies (e.g., M1/4E11) recognize the epitope only when FLAG-tag is present at the N-terminus. However, other available antibodies (e.g., M2) are position-insensitive. The tyrosine residue in the FLAG-tag can be sulfated when expressed on
|
https://en.wikipedia.org/wiki/Iddq%20testing
|
Iddq testing is a method for testing CMOS integrated circuits for the presence of manufacturing faults. It relies on measuring the supply current (Idd) in the quiescent state (when the circuit is not switching and inputs are held at static values). The current consumed in this state is commonly called Iddq for Idd (quiescent), and hence the name.
Iddq testing uses the principle that in a correctly operating quiescent CMOS digital circuit, there is no static current path between the power supply and ground, except for a small amount of leakage. Many common semiconductor manufacturing faults will cause the current to increase by orders of magnitude, which can be easily detected. This has the advantage of checking the chip for many possible faults with one measurement. Another advantage is that it may catch faults that are not found by conventional stuck-at fault test vectors.
Iddq testing is somewhat more complex than just measuring the supply current. If a line is shorted to Vdd, for example, it will still draw no extra current if the gate driving the signal is attempting to set it to '1'. However, a different input that attempts to set the signal to 0 will show a large increase in quiescent current, signalling a bad part. Typical Iddq tests may use 20 or so inputs. Note that Iddq test inputs require only controllability, and not observability. This is because the observability is through the shared power supply connection.
Advantages and disadvantages
Iddq testing has many advantages:
It is a simple and direct test that can identify physical defects.
The area and design time overhead are very low.
Test generation is fast.
Test application time is fast since the vector sets are small.
It catches some defects that other tests, particularly stuck-at logic tests, do not.
Drawback:
Compared to scan chain testing, Iddq testing is time consuming, and thus more expensive, as it relies on current measurements that take much more time than reading digital pins i
|
https://en.wikipedia.org/wiki/Boolean%20domain
|
In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpretations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually written as {0, 1}, or
The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The initial object in the category of bounded lattices is a Boolean domain.
In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true. However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that can take these values can also take any other numerical values.
Generalizations
The Boolean domain {0, 1} can be replaced by the unit interval [0, 1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law to be 1 − (1 − x)(1 − y).
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
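A minimal sketch of these generalized operations on the unit interval (plain Python, nothing library-specific):

def f_not(x):          # negation: 1 - x
    return 1.0 - x

def f_and(x, y):       # conjunction: multiplication
    return x * y

def f_or(x, y):        # disjunction, via De Morgan: 1 - (1 - x)(1 - y)
    return 1.0 - (1.0 - x) * (1.0 - y)

# On the endpoints {0, 1} these reduce to the ordinary Boolean operations...
assert f_not(0) == 1 and f_and(1, 1) == 1 and f_or(0, 1) == 1
# ...and in between they behave like probabilities of independent events.
print(f_and(0.5, 0.5), f_or(0.5, 0.5))   # 0.25 0.75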
See also
Boolean-valued function
GF(2)
References
Further reading
|
https://en.wikipedia.org/wiki/Portable%20object%20%28computing%29
|
In distributed programming, a portable object is an object which can be accessed through a normal method call while possibly residing in memory on another computer. It is portable in the sense that it moves from machine to machine, irrespective of operating system or computer architecture. This mobility is the end goal of many remote procedure call systems.
The advantage of portable objects is that they are easy to use and very expressive, allowing programmers to be completely unaware that objects reside in other locations. Detractors cite this as a fault, as naïve programmers will not expect network-related errors or the unbounded nondeterminism associated with large networks.
See also
CORBA Common Object Request Broker Architecture, cross-language cross-platform object model
Portable Object Adapter part of the CORBA standard
D-Bus current open cross-language cross-platform Freedesktop.org Object Model
Bonobo deprecated GNOME cross-language Object Model
DCOP deprecated KDE interprocess and software componentry communication system
KParts KDE component framework
XPCOM Mozilla applications cross-platform Component Object Model
COM Microsoft Windows only cross-language Object Model
DCOM Distributed COM, extension making COM able to work in networks
Common Language Infrastructure current .NET cross-language cross-platform Object Model
IBM System Object Model SOM, a component system from IBM used in OS/2
Java Beans
Java Remote Method Invocation (Java RMI)
Internet Communications Engine
Language binding
Foreign function interface
Calling convention
Name mangling
Application programming interface - API
Application Binary Interface - ABI
Comparison of application virtual machines
SWIG open source automatic interfaces bindings generator from many languages to many languages
Distributed computing architecture
Object (computer science)
|
https://en.wikipedia.org/wiki/Static%20cast
|
In the C++ programming language, static_cast is an operator that performs an explicit type conversion.
Syntax
static_cast<type> (object);
The type parameter must be a data type to which object can be converted via a known method, whether it be a built-in conversion or a user-defined one. The type can be a reference or an enumerator.
All types of conversions that are well-defined and allowed by the compiler are performed using static_cast.
The static_cast<> operator can be used for operations such as:
converting a pointer to a base class into a pointer to a derived class, provided the base class is not a virtual base of the derived class (downcasting);
converting numeric data types such as enums to ints or floats.
Although static_cast conversions are checked at compile time to prevent obvious incompatibilities, no run-time type checking is performed that would prevent a cast between incompatible data types, such as pointers. A static_cast from a pointer to a class B to a pointer to a derived class D is ill-formed if B is an inaccessible or ambiguous base of D. A static_cast from a pointer of a virtual base class (or a base class of a virtual base class) to a pointer of a derived class is ill-formed.
See also
dynamic cast
reinterpret_cast
const_cast
References
C++
Type theory
Articles with underscores in the title
|
https://en.wikipedia.org/wiki/Illegal%20opcode
|
An illegal opcode, also called an unimplemented operation, unintended opcode or undocumented instruction, is an instruction to a CPU that is not mentioned in any official documentation released by the CPU's designer or manufacturer, which nevertheless has an effect. Illegal opcodes were common on older CPUs designed during the 1970s, such as the MOS Technology 6502, Intel 8086, and the Zilog Z80. On these older processors, many exist as a side effect of the wiring of transistors in the CPU, and usually combine functions of the CPU that were not intended to be combined. On old and modern processors, there are also instructions intentionally included in the processor by the manufacturer, but that are not documented in any official specification.
The effect of many illegal opcodes, on many processors, is just a trap to an error handler. However, some processors that trap for most illegal opcodes do not do so for some illegal opcodes, and some other processors do not check for illegal opcodes, and, instead, perform an undocumented operation.
Overview
While most accidental illegal instructions have useless or even highly undesirable effects (such as crashing the computer), some can have useful functions in certain situations. Such instructions were sometimes exploited in computer games of the 1970s and 1980s to speed up certain time-critical sections. Another common use was in the ongoing battle between copy protection implementations and cracking. Here, they were a form of security through obscurity, and their secrecy usually did not last very long.
A danger associated with the use of illegal instructions was that, given the fact that the manufacturer does not guarantee their existence and function, they might disappear or behave differently with any change of the CPU internals or any new revision of the CPU, rendering programs that use them incompatible with the newer revisions. For example, a number of older Apple II games did not work correctly on the newer Apple
|
https://en.wikipedia.org/wiki/Join%20and%20meet
|
In mathematics, specifically order theory, the join of a subset S of a partially ordered set P is the supremum (least upper bound) of S, denoted ⋁S, and similarly, the meet of S is the infimum (greatest lower bound), denoted ⋀S. In general, the join and meet of a subset of a partially ordered set need not exist. Join and meet are dual to one another with respect to order inversion.
A partially ordered set in which all pairs have a join is a join-semilattice. Dually, a partially ordered set in which all pairs have a meet is a meet-semilattice. A partially ordered set that is both a join-semilattice and a meet-semilattice is a lattice. A lattice in which every subset, not just every pair, possesses a meet and a join is a complete lattice. It is also possible to define a partial lattice, in which not all pairs have a meet or join but the operations (when defined) satisfy certain axioms.
The join/meet of a subset of a totally ordered set is simply the maximal/minimal element of that subset, if such an element exists.
If a subset of a partially ordered set is also an (upward) directed set, then its join (if it exists) is called a directed join or directed supremum. Dually, if is a downward directed set, then its meet (if it exists) is a directed meet or directed infimum.
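A concrete, if informal, illustration: the positive integers partially ordered by divisibility form a lattice in which the meet of two numbers is their greatest common divisor and the join is their least common multiple. A short sketch:

from math import gcd

def meet(a: int, b: int) -> int:
    """Meet in the divisibility order on positive integers: the greatest
    common lower bound, i.e. the gcd."""
    return gcd(a, b)

def join(a: int, b: int) -> int:
    """Join in the divisibility order: the least common upper bound, i.e. the lcm."""
    return a * b // gcd(a, b)

print(meet(12, 18), join(12, 18))   # 6 36
# gcd(12, 18) = 6 divides both numbers, and lcm(12, 18) = 36 is divisible by
# both; every common multiple of 12 and 18 is a multiple of 36, so these
# really are the meet and join of 12 and 18 in this order.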
Definitions
Partial order approach
Let A be a set with a partial order ≤, and let x, y ∈ A. An element z of A is called the meet (or greatest lower bound or infimum) of x and y, and is denoted by x ∧ y, if the following two conditions are satisfied:
z ≤ x and z ≤ y (that is, z is a lower bound of x and y).
For any w ∈ A, if w ≤ x and w ≤ y, then w ≤ z (that is, z is greater than or equal to any other lower bound of x and y).
The meet need not exist, either since the pair has no lower bound at all, or since none of the lower bounds is greater than all the others. However, if there is a meet of x and y, then it is unique, since if both z and z′ are greatest lower bounds of x and y, then z ≤ z′ and z′ ≤ z, and thus z = z′. If not all pairs of elements from A have a meet, then the meet can still be seen as a partial binary operation on A.
If the meet does exist the
|
https://en.wikipedia.org/wiki/Workcell
|
A workcell is an arrangement of resources in a manufacturing environment to improve the quality, speed and cost of the process. Workcells are designed to improve these by improving process flow and eliminating waste. They are based on the principles of Lean Manufacturing as described in The Machine That Changed the World by Womack, Jones and Roos.
History
Classical manufacturing management approaches dictate that costs be lowered by breaking the process into steps, and ensuring that each of these steps minimizes cost and maximizes efficiency. This discrete approach has resulted in machines placed apart from each other to maximize the efficiency and throughput of each machine. The traditional accounting for machine capitalization is based on the number of parts produced, and this approach reinforces the idea of lowering the cost of each machine (by having them produce as many parts as possible.) Increasing the number of parts (WIP) adds waste in areas such as Inventory and Transportation.
Large amounts of excess Inventory often now accumulate between the machines in the process for reasons to do with 'unbalanced' line capacities and batch processing. In addition, the parts must now be transported between the machines. An increase in the number of machines involved also will reduce each worker's multi-skilling proficiency (since that would need them to learn how to operate multiple machines, and they too will need to move between those machines.)
Lean Manufacturing focuses on optimizing the end-to-end process as a whole. This enables a focus in the process on creating a finished product at the lowest cost (instead of lowering the cost of each step.) A common approach to achieving this is known as the workcell. Machines involved in building a product are placed next to each other to minimize transportation of both parts and people (an L-shaped desk with upper shelves is a good office example, which enables many types of office equipment to be within the reach of
|
https://en.wikipedia.org/wiki/Mass-to-charge%20ratio
|
The mass-to-charge ratio (m/Q) is a physical quantity relating the mass (quantity of matter) and the electric charge of a given particle, expressed in units of kilograms per coulomb (kg/C). It is most widely used in the electrodynamics of charged particles, e.g. in electron optics and ion optics.
It appears in the scientific fields of electron microscopy, cathode ray tubes, accelerator physics, nuclear physics, Auger electron spectroscopy, cosmology and mass spectrometry. The importance of the mass-to-charge ratio, according to classical electrodynamics, is that two particles with the same mass-to-charge ratio move in the same path in a vacuum, when subjected to the same electric and magnetic fields.
Some disciplines use the charge-to-mass ratio (Q/m) instead, which is the multiplicative inverse of the mass-to-charge ratio. The CODATA recommended value for an electron is Q/m = −1.75882001076×10^11 C/kg.
Origin
When charged particles move in electric and magnetic fields the following two laws apply:
Lorentz force law: F = Q(E + v × B);
Newton's second law of motion: F = ma = m(dv/dt),
where F is the force applied to the ion, m is the mass of the particle, a is the acceleration, Q is the electric charge, E is the electric field, and v × B is the cross product of the ion's velocity and the magnetic flux density.
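These two laws, combined just below, imply that for given fields and initial conditions the trajectory depends only on m/Q. A rough numerical sketch (the field strengths, masses, charges and time step are assumed values, and simple Euler stepping is used for brevity) illustrates this: two particles with different m and Q but the same m/Q, started identically, trace the same path.

import numpy as np

E = np.array([0.0, 1.0e3, 0.0])     # assumed uniform electric field, V/m
B = np.array([0.0, 0.0, 0.1])       # assumed uniform magnetic field, T

def trajectory(m, q, steps=2000, dt=1e-9):
    """Integrate m dv/dt = q (E + v x B) with simple Euler steps."""
    r = np.zeros(3)
    v = np.array([1.0e4, 0.0, 0.0])  # same initial velocity for both particles
    path = []
    for _ in range(steps):
        a = (q / m) * (E + np.cross(v, B))
        v = v + a * dt
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)

p1 = trajectory(m=1.0e-26, q=1.6e-19)   # some ion
p2 = trajectory(m=2.0e-26, q=3.2e-19)   # twice the mass and charge: same m/Q
print(np.max(np.abs(p1 - p2)))           # ~0: the paths coincide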
This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data in a mass spectrum, it is common to use the dimensionless m/z, which denotes the dimensionless quantity formed by dividing the mass number of the ion by its charge number.
Combining the two previous equations yields:
(m/Q) a = E + v × B.
This differential equation is the classic equation of motion of a charged particle in a vacuum. Together with the particle's initial conditions, it determines the particle's motion in space and tim
|
https://en.wikipedia.org/wiki/Provider-independent%20address%20space
|
A provider-independent address space (PI) is a block of IP addresses assigned by a regional Internet registry (RIR) directly to an end-user organization. The user must contract with a local Internet registry (LIR) through an Internet service provider to obtain routing of the address block within the Internet.
Provider-independent addresses offer end-users the opportunity to change service providers without renumbering of their networks and to use multiple access providers in a multi-homed configuration. However, provider-independent blocks may increase the burden on global routers, as the opportunity for efficient route aggregation through Classless Inter-Domain Routing (CIDR) may not exist.
IPv4 assignments
One of the RIRs is RIPE NCC. The RIPE NCC can no longer assign IPv4 Provider Independent (PI) address space, as it is now allocating from the last /8 of IPv4 address space that it holds. IPv4 address space from this last /8 is allocated according to section 5.1 of "IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region". IPv4 Provider-aggregatable (PA) address space can only be allocated to RIPE NCC members.
IPv6 assignments
In April 2009 RIPE accepted a policy proposal of January 2006 to assign provider-independent IPv6 prefixes. Assignments are taken from the address range 2001:678::/29 and have a minimum size of a /48 prefix.
See also
Multihoming
References
Network addressing
IP addresses
|
https://en.wikipedia.org/wiki/The%20Lexicon%20of%20Comicana
|
The Lexicon of Comicana is a 1980 book by the American cartoonist Mort Walker. It was intended as a tongue-in-cheek look at the devices used by comics cartoonists. In it, Walker invented an international set of symbols called symbolia after researching cartoons around the world (described by the term comicana). In 1964, Walker had written an article called "Let's Get Down to Grawlixes", a satirical piece for the National Cartoonists Society. He used terms such as grawlixes for his own amusement, but they soon began to catch on and acquired an unexpected validity. The Lexicon was written in response to this.
The names he invented for them sometimes appear in dictionaries, and serve as convenient terminology occasionally used by cartoonists and critics. A 2001 gallery showing of comic- and street-influenced art in San Francisco, for example, was called "Plewds! Squeans! and Spurls!"
Examples
Agitrons: wiggly lines around a shaking object or character.
Blurgits, swalloops: curved lines preceding or trailing after a character's moving limbs.
Briffits (💨): clouds of dust that hang in the wake of a swiftly departing character or object.
Dites, hites and vites: straight lines drawn across flat, clear and reflective surfaces, such as windows and mirrors. The first letter indicates direction: diagonal, horizontal and vertical respectively. Hites may also be used trailing after something moving with great speed.
Emanata: lines drawn around the head to indicate shock or surprise.
Grawlixes (#, $, *, @): typographical symbols standing in for profanities, appearing in dialogue balloons in place of actual dialogue.
Indotherm (♨): wavy, rising lines used to represent steam or heat.
Lucaflect: a shiny spot on a surface of something, depicted as a four-paned window shape.
Plewds (💦): flying sweat droplets that appear around a character's head when working hard, stressed, etc.
Quimps (🪐): A special example of the grawlix, a symbol resembling the planet Saturn.
Solrads: radiating li
|
https://en.wikipedia.org/wiki/Comparison%20of%20parser%20generators
|
This is a list of notable lexer generators and parser generators for various language classes.
Regular languages
Regular languages are a category of languages (sometimes termed Chomsky Type 3) which can be matched by a state machine (more specifically, by a deterministic finite automaton or a nondeterministic finite automaton) constructed from a regular expression. In particular, a regular language can match constructs like "A follows B", "Either A or B", "A, followed by zero or more instances of B", but cannot match constructs which require consistency between non-adjacent elements, such as "some instances of A followed by the same number of instances of B", and also cannot express the concept of recursive "nesting" ("every A is eventually followed by a matching B"). A classic example of a problem which a regular grammar cannot handle is the question of whether a given string contains correctly-nested parentheses. (This is typically handled by a Chomsky Type 2 grammar, also termed a context-free grammar.)
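A quick illustration of both claims in Python: a regular expression handles "A followed by zero or more instances of B", while correctly-nested parentheses need at least a counter, the minimal extra memory that a context-free (stack-based) parser provides.

import re

# "A followed by zero or more instances of B": easily a regular language.
pattern = re.compile(r"^AB*$")
print(bool(pattern.match("ABBB")), bool(pattern.match("BA")))   # True False

# Correct nesting of parentheses is not regular; the usual workaround is a
# counter, equivalent to the one-symbol stack a context-free parser would use.
def balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # a closing paren with nothing open
                return False
    return depth == 0

print(balanced("(()())"), balanced("(()"))   # True False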
Deterministic context-free languages
Context-free languages are a category of languages (sometimes termed Chomsky Type 2) which can be matched by a sequence of replacement rules, each of which essentially maps each non-terminal element to a sequence of terminal elements and/or other nonterminal elements. Grammars of this type can match anything that can be matched by a regular grammar, and furthermore, can handle the concept of recursive "nesting" ("every A is eventually followed by a matching B"), such as the question of whether a given string contains correctly-nested parentheses. The rules of Context-free grammars are purely local, however, and therefore cannot handle questions that require non-local analysis such as "Does a declaration exist for every variable that is used in a function?". To do so technically would require a more sophisticated grammar, like a Chomsky Type 1 grammar, also termed a context-sensitive grammar. However, parser generators for c
|
https://en.wikipedia.org/wiki/GeneRIF
|
A GeneRIF or Gene Reference Into Function is a short (255 characters or fewer) statement about the function of a gene. GeneRIFs provide a simple mechanism for allowing scientists to add to the functional annotation of genes described in the Entrez Gene database. In practice, function is construed quite broadly. For example, there are GeneRIFs that discuss the role of a gene in a disease, GeneRIFs that point the viewer towards a review article about the gene, and GeneRIFs that discuss the structure of a gene. However, the stated intent is for GeneRIFs to be about gene function. Currently over half a million GeneRIFs have been created for genes from almost 1000 different species.
GeneRIFs are always associated with specific entries in the Entrez Gene database. Each GeneRIF has a pointer to the PubMed ID (a type of document identifier) of a scientific publication that provides evidence for the statement made by the GeneRIF. GeneRIFs are often extracted directly from the document that is identified by the PubMed ID, very frequently from its title or from its final sentence.
GeneRIFs are usually produced by NCBI indexers, but anyone may submit a GeneRIF.
To be processed, a valid Gene ID must exist for the specific gene, or the Gene staff must have assigned an overall Gene ID to the species. The latter case is implemented via records in Gene with the symbol NEWENTRY. Once the Gene ID is identified, only three types of information are required to complete a submission:
a concise phrase describing a function or functions (less than 255 characters in length, preferably more than a restatement of the title of the paper);
a published paper describing that function, implemented by supplying the PubMed ID of a citation in PubMed;
a valid e-mail address (which will remain confidential).
Example
Here are some GeneRIFs taken from Entrez Gene for GeneID 7157, the human gene TP53.
The PubMed document identifiers have been omitted from the examples. Note the wide variab
|
https://en.wikipedia.org/wiki/International%20Aerial%20Robotics%20Competition
|
The International Aerial Robotics Competition (IARC) began in 1991 on the campus of the Georgia Institute of Technology and is the longest running university-based robotics competition in the world. Since 1991, collegiate teams with the backing of industry and government have fielded autonomous flying robots in an attempt to perform missions requiring robotic behaviors never before exhibited by a flying machine. In 1990, the term “aerial robotics” was coined by competition creator Robert Michelson to describe a new class of small highly intelligent flying machines. The successive years of competition saw these aerial robots grow in their capabilities from vehicles that could at first barely maintain themselves in the air, to the most recent automatons which are self-stable, self-navigating, and able to interact with their environment—especially objects on the ground.
The primary goal of the competition has been to provide a reason for the state of the art in aerial robotics to move forward. Challenges set before the international collegiate community have been geared towards producing advances in the state of the art at an increasingly aggressive pace. From 1991 through 2009, a total of six missions have been proposed. Each of them involved fully autonomous robotic behavior that was undemonstrated at the time and impossible for any robotic system fielded anywhere in the world, even by the most sophisticated military robots belonging to the super powers.
In October 2013 a new seventh mission was proposed. As with previous missions, Mission 7 involves totally autonomous flying robots, but it is the first IARC mission to involve interaction with multiple ground robots and even simultaneous competition between two aerial robots working against each other and against the clock to influence the behavior and trajectory of up to ten autonomous ground robots.
In 2016, the International Aerial Robotics Competition and its creator were officially recognized
|
https://en.wikipedia.org/wiki/Experience%20modifier
|
In the insurance industry in the United States, an experience modifier or experience modification is an adjustment of an employer's premium for worker's compensation coverage based on the losses the insurer has experienced from that employer. An experience modifier of 1 would be applied for an employer that had demonstrated the actuarially expected performance. Poorer loss experience leads to a modifier greater than 1, and better experience to a modifier less than 1. The loss experience used in determining the modifier typically comprises three years of data, excluding the most recently completed year. For instance, if a policy expired on January 1, 2018, the period reflected by the experience modifier would run from January 1, 2014 to January 1, 2017.
Methods of calculation
Experience modifiers are normally recalculated for an employer annually by using experience ratings. The rating is a method used by insurers to determine pricing of premiums for different groups or individuals based on the group or individual's history of claims. The experience rating approach uses an individual's or group’s historic data as a proxy for future risk, and insurers adjust and set insurance premiums and plans accordingly. Each year, a newer year's data is added to the three year window of experience used in the calculation, and the oldest year from the prior calculation is dropped off. The other two years worth of data in the rating window are also updated on an annual basis.
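As a toy sketch of the rolling window only (this is not the NCCI formula, which splits primary and excess losses and applies weighting and ballast values; all numbers below are made up), a modifier can be pictured as actual losses over expected losses for the three-year window that skips the most recent year:

def experience_window(expiration_year):
    """Three policy years used for the mod, skipping the year just ended.
    E.g. a policy expiring January 1, 2018 uses 2014, 2015 and 2016 data."""
    return [expiration_year - 4, expiration_year - 3, expiration_year - 2]

def simple_mod(actual_by_year, expected_by_year, expiration_year):
    """Toy modifier: total actual losses over total expected losses for the
    window. (The real NCCI calculation is considerably more involved; this
    only illustrates the rolling-window idea described above.)"""
    years = experience_window(expiration_year)
    actual = sum(actual_by_year.get(y, 0.0) for y in years)
    expected = sum(expected_by_year.get(y, 0.0) for y in years)
    return actual / expected

actual = {2014: 40_000, 2015: 55_000, 2016: 30_000, 2017: 90_000}
expected = {y: 50_000 for y in range(2014, 2018)}
print(experience_window(2018))                         # [2014, 2015, 2016]
print(round(simple_mod(actual, expected, 2018), 2))    # 0.83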
Experience modifiers are calculated by organizations known as "rating bureaus" and rely on information reported by insurance companies. The rating bureau used by most states is the NCCI, the National Council on Compensation Insurance. But a number of states have independent rating bureaus: California, Michigan, Delaware, and Pennsylvania have stand-alone rating bureaus that do not integrate data with NCCI. Other states such as Wisconsin, Texas, New York, New Jersey, Indiana, and North Carolina, maintain their own rati
|
https://en.wikipedia.org/wiki/Knaster%E2%80%93Kuratowski%E2%80%93Mazurkiewicz%20lemma
|
The Knaster–Kuratowski–Mazurkiewicz lemma is a basic result in mathematical fixed-point theory published in 1929 by Knaster, Kuratowski and Mazurkiewicz.
The KKM lemma can be proved from Sperner's lemma and can be used to prove the Brouwer fixed-point theorem.
Statement
Let Δ_{n−1} be an (n − 1)-dimensional simplex with n vertices labeled as 1, ..., n.
A KKM covering is defined as a set C_1, ..., C_n of closed sets such that for any I ⊆ {1, ..., n}, the convex hull of the vertices corresponding to I is covered by the union of the sets C_i with i ∈ I.
The KKM lemma says that in every KKM covering, the common intersection of all n sets is nonempty, i.e.:
C_1 ∩ C_2 ∩ ... ∩ C_n ≠ ∅.
Example
When n = 3, the KKM lemma considers the simplex Δ_2, which is a triangle whose vertices can be labeled 1, 2 and 3. We are given three closed sets C_1, C_2, C_3 such that:
C_1 covers vertex 1, C_2 covers vertex 2, C_3 covers vertex 3.
The edge 12 (from vertex 1 to vertex 2) is covered by the sets C_1 and C_2, the edge 23 is covered by the sets C_2 and C_3, the edge 31 is covered by the sets C_3 and C_1.
The union of all three sets covers the entire triangle.
The KKM lemma states that the sets C_1, C_2 and C_3 have at least one point in common.
The lemma is illustrated by the picture on the right, in which set #1 is blue, set #2 is red and set #3 is green. The KKM requirements are satisfied, since:
Each vertex is covered by a unique color.
Each edge is covered by the two colors of its two vertices.
The triangle is covered by all three colors.
The KKM lemma states that there is a point covered by all three colors simultaneously; such a point is clearly visible in the picture.
Note that it is important that all sets are closed, i.e., contain their boundary. If, for example, the red set is not closed, then it is possible that the central point is contained only in the blue and green sets, and then the intersection of all three sets may be empty.
Equivalent results
Generalizations
Rainbow KKM lemma (Gale)
David Gale proved the following generalization of the KKM lemma. Suppose that, instead of one KKM covering, we have n different KKM coverings: . T
|
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematics%20Olympiad
|
Hong Kong Mathematics Olympiad (HKMO) is a Mathematics Competition held in Hong Kong every year, jointly organized by The Education University of Hong Kong and the Education Bureau. At present, more than 250 secondary schools send teams of 4-6 students of or below Form 5 to enter the competition. It is made up of a Heat Event and a Final Event, which both forbid the use of calculators and calculation-assisting equipment (e.g. printed mathematical tables). Though it bears the term Mathematics Olympiad, it has no relationship with the International Mathematical Olympiad.
History
The predecessor of HKMO is the Inter-school Mathematics Olympiad initiated by the Mathematics Society of Northcote College of Education in 1974, which had attracted 20 secondary schools to participate. Since 1983, the competition is jointly conducted by the Mathematics Department of Northcote College of Education and the Mathematics Section of the Advisory Inspectorate Division of the Education Department. Also in 1983, the competition is formally renamed as Hong Kong Mathematics Olympiad.
Format and Scoring in the Heat Event
The Heat Event is usually held in four venues, for contestants from schools on Hong Kong Island, and in Kowloon, New Territories East and New Territories West respectively. It comprises an individual event and a group event. Each team sends 4 contestants among 4-6 team members for each event.
For the individual event, 1 mark and 2 marks will be given to each correct answer in Part A and Part B respectively. The maximum score for a team should be 80.
For the group event, 2 marks will be given to each correct answer. The maximum score for a team should be 20.
For the geometric construction event, the maximum score for a team should be 20 (all working, including construction work, must be clearly shown).
In other words, a contesting school may earn 120 marks at most in the Heat Event. The top 50 may enter the Final Event.
Format and Scoring in the Final Event
The Fina
|
https://en.wikipedia.org/wiki/Mesh%20Computers
|
PC Peripherals Ltd, trading as MESH Computers, is a private computer company based in London, England. As well as being a manufacturer of personal computers, the company sells peripherals and components through their website.
History
MESH was founded in 1987. During its first 20 years of business, MESH Computers could only be purchased directly from the manufacturer; however, in November 2006, MESH began to sell through major retailers like Comet Group.
MESH Computers has recently opened up a number of new routes to market, including resellers in the UK like Ebuyer.
In 2009, Mesh announced the MESH Cute home theatre PC, in a variety of colours for the living room.
The BBC created a series of programmes to teach school children about computer technology and advanced production techniques in a modern factory setting and MESH was filmed as one of the examples, alongside Rolls-Royce and Coca-Cola.
MESH was the last of the major UK PC manufacturers that still created custom-built PCs for end users. At its peak, the mainstream market was full of local brands like Evesham Technology, Granville Technology (Tiny/Time), Elonex, Opus, Cube Enterprises, MJN and Dan; most of them shut down in the Great Recession.
Viglen and RM Plc continued to operate, but specialise in education systems.
MESH computers appeared on Watchdog having been accused of having inadequate customer support and services.
In the summer of 2010, MESH Computers was voted PC Manufacturer of the Year by both Computer Shopper magazine and the Expert Reviews web site.
MESH reviews have been mixed.
Administration
On 31 May 2011 it was announced that MESH Computers had gone into administration under the law firm MacIntyre Hudson, and that key assets had been bought by components firm PC Peripherals, owned by Reza Jafari.
In February 2012, the owner of MESH (and its largest creditor) at the time it went into administration, Mehdi ("Max") Sherafati, was appointed a director of PC Peripherals, effectively rega
|
https://en.wikipedia.org/wiki/Biogeomorphology
|
Biogeomorphology and ecogeomorphology are the study of interactions between organisms and the development of landforms, and are thus fields of study within geomorphology and ichnology. Organisms affect geomorphic processes in a variety of ways. For example, trees can reduce landslide potential where their roots penetrate to underlying rock, plants and their litter inhibit soil erosion, biochemicals produced by plants accelerate the chemical weathering of bedrock and regolith, and marine animals cause the bioerosion of coral. The study of the interactions between marine biota and coastal landform processes is called coastal biogeomorphology.
Phytogeomorphology is an aspect of biogeomorphology that deals with the narrower subject of how terrain affects plant growth. In recent years a large number of articles have appeared in the literature dealing with how terrain attributes affect crop growth and yield in farm fields, and while they don't use the term phytogeomorphology the dependencies are the same. Precision agriculture models where crop variability is at least partially defined by terrain attributes can be considered as phytogeomorphological precision agriculture.
Overview
Biogeomorphology is a multidisciplinary focus of geomorphology that takes research approaches from both geomorphology and ecology. It is a subdiscipline of geomorphology. Biogeomorphology can be synthesized into two distinct approaches:
1. The influences that geomorphology plays on the biodiversity and distribution of flora and fauna.
2. The influences that biotic factors have on the way landforms are developed.
There has been much work on these approaches such as; the effect that parent material has on the distribution of plants, the increase of precipitation due to an influx of transpiration, the stability of a hillslope due to the abundance of vegetation or, the increase of sedimentation due to a beaver dam. Biogeomorphology shows the axiomatic relationship between certain land form
|
https://en.wikipedia.org/wiki/Association%20of%20Environmental%20Professionals
|
The Association of Environmental Professionals (AEP) is a California-based non-profit organization of interdisciplinary professionals including environmental science, resource management, environmental planning and other professions contributing to this field. AEP is the first organization of its kind in the USA, and its influence and model have spawned numerous other regional organizations throughout the United States, as well as the separate National Association of Environmental Professionals (NAEP). From inception in the mid-1970s the organization has been closely linked with the upkeep of the California Environmental Quality Act (CEQA), California being one of the first states to adopt a comprehensive law to govern the environmental review of public policy and project review.
History, organization and governance
AEP was founded in the State of California in 1974 and held its first organization-wide meeting of members in Palo Alto, California, on the Stanford University campus. At that meeting the first directors and officers were elected and by-laws adopted. Since then the board of directors has met quarterly to establish governance, coordinate legislative liaison and plan the annual meeting. There are nine AEP chapters, covering the California geographical regions of Channel Counties, Inland Empire, Los Angeles County, Monterey Bay-Silicon Valley, Orange County, San Diego, San Francisco Bay Area, Superior, and Central.
Publications and member activities
AEP publishes a quarterly magazine, the Environmental Monitor, which contains technical articles, legislative updates and other information useful to its members. As mentioned above, the organization has developed a skilled legislative advocacy program, notable for its pursuit of clarity of language, efficient functioning of environmental review and ethical goals for its profession. AEP also provides annual recognition awards for excellence in various categories, provides members with informat
|
https://en.wikipedia.org/wiki/Terraforming%20in%20popular%20culture
|
Terraforming is well represented in contemporary literature, usually in the form of science fiction, as well as in popular culture. While many stories involving interstellar travel feature planets already suited to habitation by humans and supporting their own indigenous life, some authors prefer to address the unlikeliness of such a concept by instead detailing the means by which humans have converted inhospitable worlds to ones capable of supporting life through artificial means.
History of use
Author Jack Williamson is credited with inventing and popularizing the term "terraform". In July 1942, under the pseudonym Will Stewart, Williamson published a science fiction novella entitled "Collision Orbit" in Astounding Science-Fiction magazine. The series of stories to which it belonged was later published as two novels, Seetee Shock (1949) and Seetee Ship (1951). American geographer Richard Cathcart successfully lobbied for formal recognition of the verb "to terraform", and it was first included in the fourth edition of the Shorter Oxford English Dictionary in 1993.
The concept of terraforming in popular culture predates Williamson's work; for example, the idea of turning the Moon into a habitable environment with an atmosphere was already present in La Journée d'un Parisien au XXIe siècle ("A Day of a Parisian in the 21st Century", 1910). Perhaps even predating the concept of terraforming is that of xenoforming – a process in which aliens change the Earth to suit their own needs – already suggested in H. G. Wells's classic The War of the Worlds (1898).
Literature
Terraforming of fictional planets in literature
Terraforming is one of the basic concepts on which Frank Herbert's Dune novels (1965-1985) are built: the Fremen's obsession with converting the desert world Arrakis to earthlike conditions supplies the fugitive Paul Atreides with a ready-made army of followers (in later books, the focus shifts to those trying to "arrakisform" earthlike planets to support the giant sand
|
https://en.wikipedia.org/wiki/Microsoft%20Speech%20API
|
The Speech Application Programming Interface or SAPI is an API developed by Microsoft to allow the use of speech recognition and speech synthesis within Windows applications. To date, a number of versions of the API have been released, which have shipped either as part of a Speech SDK or as part of the Windows OS itself. Applications that use SAPI include Microsoft Office, Microsoft Agent and Microsoft Speech Server.
In general, all versions of the API have been designed such that a software developer can write an application to perform speech recognition and synthesis by using a standard set of interfaces, accessible from a variety of programming languages. In addition, it is possible for a third-party company to produce its own speech recognition and text-to-speech engines or adapt existing engines to work with SAPI. In principle, as long as these engines conform to the defined interfaces, they can be used instead of the Microsoft-supplied engines.
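As a hedged illustration of how those shared interfaces look from application code, here is a minimal sketch driving the SAPI 5 automation objects from Python via COM. It assumes a Windows machine with a SAPI 5 voice installed and the pywin32 package; "SAPI.SpVoice" is the standard SAPI 5 automation ProgID, and the remaining names are only illustrative.

```python
# Minimal sketch: using the SAPI 5 automation layer through COM from Python.
# Requires Windows and pywin32 (pip install pywin32).
import win32com.client

# Create the text-to-speech voice object and speak a string synchronously.
voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Speak("Hello from the Speech API")

# The same interface exposes the installed synthesis engines ("voices"),
# any of which (Microsoft-supplied or third-party) could be selected here.
voices = voice.GetVoices()
for i in range(voices.Count):
    print(voices.Item(i).GetDescription())
```

Because these are the same COM interfaces consumed by C++ or .NET applications, the sketch is meant to show the shape of the API rather than a production pattern.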
In general, the Speech API is a freely redistributable component which can be shipped with any Windows application that wishes to use speech technology. Many versions (although not all) of the speech recognition and synthesis engines are also freely redistributable.
There have been two main 'families' of the Microsoft Speech API. SAPI versions 1 through 4 are all similar to each other, with extra features in each newer version. SAPI 5, however, was a completely new interface, released in 2000. Since then several sub-versions of this API have been released.
Basic architecture
The Speech API can be viewed as an interface or piece of middleware which sits between applications and speech engines (recognition and synthesis). In SAPI versions 1 to 4, applications could directly communicate with engines. The API included an abstract interface definition which applications and engines conformed to. Applications could also use simplified higher-level objects rather than directly call methods on the engines.
In SAPI 5 howeve
|
https://en.wikipedia.org/wiki/Microsoft%20Speech%20Server
|
The Microsoft Speech Server is a product from Microsoft designed to allow the authoring and deployment of IVR applications incorporating Speech Recognition, Speech Synthesis and DTMF.
The first version of the server was released in 2004 as Microsoft Speech Server 2004 and supported applications developed for U.S. English-speaking users. A later version, Speech Server 2004 R2, released in 2005, added support for North American Spanish and Canadian French as well as additional features and fixes.
In August 2006, Microsoft announced that Speech Server 2007, originally slated to be released in May 2007, had been merged with the Microsoft Office Live Communications Server product line to create Microsoft Office Communications Server.
The Speech Server 2007 components of Office Communications Server are also available separately in the free Speech Server 2007 Developers Edition.
See also
Microsoft Office Communications Server
Speech Recognition
Speech Synthesis
Interactive voice response
External links
Free Speech Server 2007 Developers Edition
Microsoft Speech Server homepage
Speech Server
Voice technology
Speech processing software
|
https://en.wikipedia.org/wiki/Frobenius%20theorem%20%28real%20division%20algebras%29
|
In mathematics, more specifically in abstract algebra, the Frobenius theorem, proved by Ferdinand Georg Frobenius in 1877, characterizes the finite-dimensional associative division algebras over the real numbers. According to the theorem, every such algebra is isomorphic to one of the following:
$\mathbb{R}$ (the real numbers)
$\mathbb{C}$ (the complex numbers)
$\mathbb{H}$ (the quaternions).
These algebras have real dimension 1, 2, and 4, respectively. Of these three algebras, $\mathbb{R}$ and $\mathbb{C}$ are commutative, but $\mathbb{H}$ is not.
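Since the statement hinges on these three algebras being genuine division algebras, here is a brief illustrative reminder (standard facts, sketched in LaTeX, not part of the proof below) of why the quaternions qualify and why they are the noncommutative case:

```latex
% Every nonzero quaternion is invertible, so \mathbb{H} is a division algebra.
% For q = a + bi + cj + dk, let \bar{q} = a - bi - cj - dk and |q|^2 = a^2 + b^2 + c^2 + d^2. Then
q\,\bar{q} = \bar{q}\,q = |q|^2
  \quad\Longrightarrow\quad
  q^{-1} = \frac{\bar{q}}{|q|^2} \qquad (q \neq 0).
% Noncommutativity is already visible on the units: ij = k, while ji = -k.
```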
Proof
The main ingredients for the following proof are the Cayley–Hamilton theorem and the fundamental theorem of algebra.
Introducing some notation
Let $D$ be the division algebra in question.
Let $n$ be the dimension of $D$.
We identify the real multiples of $1$ with $\mathbb{R}$.
When we write $a \le 0$ for an element $a$ of $D$, we imply that $a$ is contained in $\mathbb{R}$.
We can consider $D$ as a finite-dimensional $\mathbb{R}$-vector space. Any element $d$ of $D$ defines an endomorphism of $D$ by left-multiplication; we identify $d$ with that endomorphism. Therefore, we can speak about the trace of $d$, and its characteristic and minimal polynomials.
For any $z$ in $\mathbb{C}$ define the following real quadratic polynomial:
$$Q(z; x) = x^2 - 2\operatorname{Re}(z)\,x + |z|^2 = (x - z)(x - \bar z).$$
Note that if $z \in \mathbb{C} \setminus \mathbb{R}$ then $Q(z; x)$ is irreducible over $\mathbb{R}$.
The claim
The key to the argument is the following
Claim. The set $V$ of all elements $a$ of $D$ such that $a^2 \le 0$ is a vector subspace of $D$ of dimension $n - 1$. Moreover $D = \mathbb{R} \oplus V$ as $\mathbb{R}$-vector spaces, which implies that $V$ generates $D$ as an algebra.
Proof of Claim: Let $m$ be the dimension of $D$ as an $\mathbb{R}$-vector space, and pick $a$ in $D$ with characteristic polynomial $p(x)$. By the fundamental theorem of algebra, we can write
$$p(x) = (x - t_1)\cdots(x - t_r)(x - z_1)(x - \bar z_1)\cdots(x - z_s)(x - \bar z_s), \qquad t_i \in \mathbb{R},\ z_j \in \mathbb{C} \setminus \mathbb{R}.$$
We can rewrite $p(x)$ in terms of the polynomials $Q(z; x)$:
$$p(x) = (x - t_1)\cdots(x - t_r)\,Q(z_1; x)\cdots Q(z_s; x).$$
Since $z_j \in \mathbb{C} \setminus \mathbb{R}$, the polynomials $Q(z_j; x)$ are all irreducible over $\mathbb{R}$. By the Cayley–Hamilton theorem, $p(a) = 0$, and because $D$ is a division algebra, it follows that either $a - t_i = 0$ for some $i$ or that $Q(z_j; a) = 0$ for some $j$. The first case implies that $a$ is real. In the second case, it follows that $Q(z_j; x)$ is the minimal polynomial of $a$. Because $p(x)$ has the same complex roots as the minimal polynomial and because it is real it follows that
$$p(x) = Q(z_j; x)^k = \left(x^2 - 2\operatorname{Re}(z_j)\,x + |z_j|^2\right)^k$$
for some $k$.
Since $p(x)$ is t
|
https://en.wikipedia.org/wiki/Noise%20%28electronics%29
|
In electronics, noise is an unwanted disturbance in an electrical signal.
Noise generated by electronic devices varies greatly as it is produced by several different effects.
In particular, noise is inherent in physics and central to thermodynamics. Any conductor with electrical resistance will generate thermal noise inherently. The final elimination of thermal noise in electronics can only be achieved cryogenically, and even then quantum noise would remain inherent.
Electronic noise is a common component of noise in signal processing.
In communication systems, noise is an error or undesired random disturbance of a useful information signal in a communication channel. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference, for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted systematic alteration of the signal waveform by the communication equipment, for example in signal-to-noise and distortion ratio (SINAD) and total harmonic distortion plus noise (THD+N) measures.
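For orientation, the ratio measures named above are simple power ratios expressed in decibels; a tiny sketch with made-up power figures:

```python
import math

def ratio_db(wanted_power_w: float, unwanted_power_w: float) -> float:
    """Power ratio in decibels, e.g. SNR = 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(wanted_power_w / unwanted_power_w)

# Illustrative figures: 1 mW of signal against 1 uW of noise gives an SNR of 30 dB.
print(ratio_db(1e-3, 1e-6))  # 30.0
```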
While noise is generally unwanted, it can serve a useful purpose in some applications, such as random number generation or dither.
Noise types
Different types of noise are generated by different devices and different processes. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise, which needs a steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise.
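To give a feel for the magnitudes involved, here is a small numeric sketch of the thermal-noise floor described in the next subsection, using the standard Johnson–Nyquist relation v_rms = sqrt(4 k_B T R Δf); the resistance, temperature, and bandwidth chosen are purely illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm: float, temperature_k: float, bandwidth_hz: float) -> float:
    """RMS open-circuit thermal (Johnson-Nyquist) noise voltage of a resistor."""
    return math.sqrt(4.0 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

# A 1 kOhm resistor at room temperature (300 K), measured over a 10 kHz bandwidth,
# contributes roughly 0.4 microvolts RMS of noise.
print(thermal_noise_vrms(1_000, 300, 10_000))  # ~4.1e-07 V
```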
Thermal noise
Johnson–Nyquist noise (more often called thermal noise) is unavoidable, and is generated by the random thermal motion of charge carriers (usually electrons) inside an electrical conductor, which
|
https://en.wikipedia.org/wiki/River%20engineering
|
River engineering is a discipline of civil engineering which studies human intervention in the course, characteristics, or flow of a river with the intention of producing some defined benefit. People have intervened in the natural course and behaviour of rivers since before recorded history—to manage the water resources, to protect against flooding, or to make passage along or across rivers easier. Since the Yuan Dynasty and Ancient Roman times, rivers have been used as a source of hydropower. From the late 20th century, the practice of river engineering has responded to environmental concerns broader than immediate human benefit. Some river engineering projects have focused exclusively on the restoration or protection of natural characteristics and habitats.
Hydromodification encompasses the systematic response to alterations to riverine and non-riverine water bodies such as coastal waters (estuaries and bays) and lakes. The U.S. Environmental Protection Agency (EPA) has defined hydromodification as the "alteration of the hydrologic characteristics of coastal and non-coastal waters, which in turn could cause degradation of water resources." River engineering has often resulted in unintended systematic responses, such as reduced habitat for fish and wildlife, and alterations of water temperature and sediment transport patterns.
Beginning in the late 20th century, the river engineering discipline has focused more on repairing hydromodified degradations and on accounting for potential systematic responses to planned alterations by considering fluvial geomorphology. Fluvial geomorphology is the study of how rivers change their form over time. It draws on a number of sciences, including open channel hydraulics, sediment transport, hydrology, physical geology, and riparian ecology. River engineering practitioners attempt to understand fluvial geomorphology, implement a physical alteration, and maintain public safety.
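As one small, concrete piece of the open channel hydraulics mentioned above, here is a hedged sketch of Manning's formula for mean flow velocity in SI units; the channel geometry and roughness coefficient below are purely illustrative:

```python
import math

def manning_velocity(n: float, hydraulic_radius_m: float, slope: float) -> float:
    """Mean velocity in m/s from Manning's formula, V = (1/n) * R^(2/3) * S^(1/2), SI units."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative rectangular channel: 5 m wide, 1 m deep, bed slope 0.001, roughness n = 0.03.
area = 5.0 * 1.0                  # flow area, m^2
wetted_perimeter = 5.0 + 2 * 1.0  # bed plus both banks, m
radius = area / wetted_perimeter  # hydraulic radius, m

velocity = manning_velocity(0.03, radius, 0.001)
print(f"velocity ~ {velocity:.2f} m/s, discharge ~ {velocity * area:.1f} m^3/s")  # ~0.84 m/s, ~4.2 m^3/s
```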
Characteristics of r
|
https://en.wikipedia.org/wiki/List%20of%20online%20music%20databases
|
Below is a table of online music databases that are largely free of charge. Many of the sites provide a specialized service or focus on a particular music genre. Some of these operate as an online music store or purchase referral service in some capacity. Among the sites that have information on the largest number of entities are those sites that focus on discographies of composing and performing artists.
Performance rights organisations (PROs) typically maintain their own databases for the country they represent, in accordance with CISAC, to help domestic artists collect royalties. Information available on these portals includes songwriting credits, publishing percentage splits, and alternate titles for different distribution channels. These are among the most accurate and authoritative databases because they involve direct communication between artists, record labels, distributors, legal teams, publishers and the global governing body that regulates PROs. Many countries that observe copyright have such an organisation; there are currently 119 CISAC members, many of them not-for-profit. The databases are typically known as 'repertory searches' or 'searching works'; some require an account, while others are open to the public free of charge, including the USA's ASCAP Songview and Ace services, Canada's SOCAN, South Korea's KOMCA, France's SACEM, and Israel's ACUM.
General databases
Music genre specific
Specialized areas
Printed music (sheets) databases
Metadata providers and distributors
Non-functioning databases
See also
Automatic content recognition
Comparison of digital music stores
List of music sharing websites
Comparison of music streaming services
Comparison of online music lockers
List of music software
List of Internet radio stations
List of online digital musical document libraries
Streaming media
Virtual Library of Musicology
References
Music
Databases
|
https://en.wikipedia.org/wiki/Disturbance%20%28ecology%29
|
In ecology, a disturbance is a temporary change in environmental conditions that causes a pronounced change in an ecosystem. Disturbances often act quickly and with great effect, altering the physical structure or arrangement of biotic and abiotic elements. A disturbance can also occur over a long period of time and can impact the biodiversity within an ecosystem.
Major ecological disturbances may include fires, flooding, storms, insect outbreaks and trampling. Earthquakes, various types of volcanic eruptions, tsunami, firestorms, impact events, climate change, and the devastating effects of human impact on the environment (anthropogenic disturbances) such as clearcutting, forest clearing and the introduction of invasive species can be considered major disturbances.
Not only can invasive species have a profound effect on an ecosystem; naturally occurring species can also cause disturbance through their behavior. Disturbance forces can have profound immediate effects on ecosystems and can, accordingly, greatly alter the natural community's population size or species richness. Because of these impacts on populations, disturbance helps determine future shifts in dominance, with various species successively becoming dominant as their life-history characteristics, and associated life-forms, are expressed over time.
Definition and types
The scale of disturbance ranges from events as small as a single tree falling, to as large as a mass extinction. Many natural ecosystems experience periodic disturbance that may broadly fall into a cyclical pattern. Ecosystems that form under these conditions are often maintained by regular disturbance. Wetland ecosystems, for example, can be maintained by the movement of water through them and by periodic fires. Different types of disturbance events occur in different habitats and climates with different weather conditions. Natural fire disturbances for example occur more often in areas with a higher incidence of lightning and flam
|
https://en.wikipedia.org/wiki/Extended%20boot%20record
|
An extended boot record (EBR), or extended partition boot record (EPBR), is a descriptor for a logical partition under the common DOS disk drive partitioning system. In that system, when one (and only one) partition record entry in the master boot record (MBR) is designated an extended partition, then that partition can be subdivided into a number of logical partitions. The actual structure of that extended partition is described by one or more EBRs, which are located inside the extended partition. The first (and sometimes only) EBR will always be located on the very first sector of the extended partition.
Unlike primary partitions, which are all described by a single partition table within the MBR, and thus limited in number, each EBR precedes the logical partition it describes. If another logical partition follows, then the first EBR will contain an entry pointing to the next EBR; thus, multiple EBRs form a linked list. This means the number of logical drives that can be formed within an extended partition is limited only by the amount of available disk space in the given extended partition.
While in Windows versions up to XP logical partitions within the extended partition were aligned following conventions called "drive geometry" or "CHS", since Windows Vista they are aligned to a 1-MiB boundary. Due to this difference in alignment, the Logical Disk Manager of XP (Disk Management) may delete these extended partitions without warning.
EBR structure and values
EBRs have essentially the same structure as the MBR, except that only the first two entries of the partition table are supposed to be used, besides having the mandatory boot record signature (or magic number) of 0xAA55 at the end of the sector. This 2-byte signature appears in a disk editor as 0x55 first and 0xAA last, because IBM-compatible PCs store hexadecimal words in little-endian order (see table below).
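A minimal parsing sketch may make that layout concrete. It follows the conventional 16-byte partition-entry format (status, CHS start, type, CHS end, starting LBA, sector count) beginning at offset 446; the function and variable names are illustrative rather than taken from any particular tool:

```python
import struct

SECTOR_SIZE = 512
BOOT_SIGNATURE = 0xAA55  # stored as the bytes 0x55, 0xAA at offsets 510-511

def parse_ebr(sector: bytes):
    """Parse the two partition-table entries of an EBR (same layout as an MBR sector)."""
    if len(sector) < SECTOR_SIZE:
        raise ValueError("need a full 512-byte sector")
    # The 2-byte signature is read as a little-endian word.
    (signature,) = struct.unpack_from("<H", sector, 510)
    if signature != BOOT_SIGNATURE:
        raise ValueError("missing 0x55 0xAA boot-record signature")
    entries = []
    for i in range(2):  # an EBR is only supposed to use the first two entries
        status, chs_start, ptype, chs_end, lba_start, num_sectors = struct.unpack_from(
            "<B3sB3sII", sector, 446 + 16 * i
        )
        entries.append({
            "status": status,
            "type": ptype,           # an extended type (e.g. 0x05) here marks a link to the next EBR
            "lba_start": lba_start,  # relative, not absolute, for EBR entries
            "num_sectors": num_sectors,
        })
    return entries
```

Walking the chain of logical partitions would mean repeating this on the sector named by the second (link) entry until that entry is all zeros.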
Structures
The IBM Boot Manager (included with OS/2 operating systems and some early versions of Partition Mag
|