source | text
---|---
https://en.wikipedia.org/wiki/Bill%20of%20quantities
|
A bill of quantities is a document used in tendering in the construction industry in which materials, parts, and labor (and their costs) are itemized. It also (ideally) details the terms and conditions of the construction or repair contract and itemizes all work to enable a contractor to price the work for which he or she is bidding. The quantities may be measured in number, area, volume, weight or time. Preparing a bill of quantities requires that the design is complete and a specification has been prepared.
The bill of quantities is issued to tenderers for them to prepare a price for carrying out the construction work. The bill of quantities assists tenderers in the calculation of construction costs for their tender, and, as it means all tendering contractors will be pricing the same quantities (rather than taking-off quantities from the drawings and specifications themselves), it also provides a fair and accurate system for tendering.
Creation
Bills of quantities are prepared by quantity surveyors and building estimators, and "Indeed the bill of quantities was the raison d'être for the development of quantity surveying as a separate profession."
Historically, the practice of estimating building costs in this way arose from non-contractual measurements, taken off drawings to assist tenderers in quoting lump-sum prices.
There are different styles of bills of quantities, mainly the elemental bill of quantities and trade bills.
Contingency sum
A contingency sum is an item found within a bill of quantities.
The item refers to unforeseeable costs likely to be incurred during the contract.
There are two types of contingency sum. The first refers to a specific item, e.g., "additional alterations to services when installing said shower unit", where an item for alterations to existing services is not contained within the bill of quantities but some work is envisaged.
The second type of sum is where money can be allocated to any item, within the bill of quantities, in
|
https://en.wikipedia.org/wiki/%CE%9BProlog
|
λProlog, also written lambda Prolog, is a logic programming language featuring polymorphic typing, modular programming, and higher-order programming. These extensions to Prolog are derived from the higher-order hereditary Harrop formulas used to justify the foundations of λProlog. Higher-order quantification, simply typed λ-terms, and higher-order unification give λProlog the basic support needed to capture the λ-tree syntax approach to higher-order abstract syntax, an approach to representing syntax that maps object-level bindings to programming language bindings. Programmers in λProlog need not deal with bound variable names: instead various declarative devices are available to deal with binder scopes and their instantiations.
History
Since 1986, λProlog has received numerous implementations. As of 2023, the language and its implementations are still actively being developed.
The Abella theorem prover has been designed to provide an interactive environment for proving theorems about the declarative core of λProlog.
See also
Curry's paradox#Lambda calculus — about inconsistency problems caused by combining (propositional) logic and untyped lambda calculus
References
Tutorials and texts
Dale Miller and Gopalan Nadathur have written the book Programming with higher-order logic, published by Cambridge University Press in June 2012.
Amy Felty wrote a 1997 tutorial on lambda Prolog and its applications to theorem proving.
John Hannan has written a tutorial on Program Analysis in lambda Prolog for the 1998 PLILP Conference.
Olivier Ridoux has written Lambda-Prolog de A à Z... ou presque (163 pages, French). It is available as PostScript, PDF, and html.
External links
λProlog homepage
Entry at the Software Preservation Group.
Implementations
The Teyjus λProlog compiler is currently the oldest implementation still being maintained. This compiler project is led by Gopalan Nadathur and various of his colleagues and students.
ELPI: an Embeddable
|
https://en.wikipedia.org/wiki/Storage%20tank
|
Storage tanks are containers that hold liquids, compressed gases (gas tanks; or, in the USA, "pressure vessels", which are not typically labeled or regulated as storage tanks) or media used for the short- or long-term storage of heat or cold. The term can be used for reservoirs (artificial lakes and ponds), and for manufactured containers. The usage of the word tank for reservoirs is uncommon in American English but is moderately common in British English. In other countries, the term tends to refer only to artificial containers.
In the USA, storage tanks operate under no (or very little) pressure, distinguishing them from pressure vessels. Storage tanks are often cylindrical in shape, perpendicular to the ground with flat bottoms, and a fixed frangible or floating roof. There are usually many environmental regulations applied to the design and operation of storage tanks, often depending on the nature of the fluid contained within. Above-ground storage tanks (ASTs) differ from underground storage tanks (USTs) in the kinds of regulations that are applied. Above ground storage tanks can be used to hold materials such as petroleum, waste matter, water, chemicals, and other hazardous materials, all while meeting strict industry standards and regulations.
Reservoirs can be covered, in which case they may be called covered or underground storage tanks or reservoirs. Covered water tanks are common in urban areas.
Storage tanks are available in many shapes: vertical and horizontal cylindrical; open top and closed top; flat bottom, cone bottom, slope bottom and dish bottom. Large tanks tend to be vertical cylinders, or to have rounded corners where the vertical side wall transitions to the bottom profile, so that they can better withstand the hydrostatic pressure of the contained liquid. Most container tanks for handling liquids during transportation are designed to handle varying degrees of pressure.
In order for volume measurements from the tank to be used, it shall have a capaci
|
https://en.wikipedia.org/wiki/Stamp%20mill
|
A stamp mill (or stamp battery or stamping mill) is a type of mill machine that crushes material by pounding rather than grinding, either for further processing or for extraction of metallic ores. Breaking material down is a type of unit operation.
Description
A stamp mill consists of a set of heavy steel (iron-shod wood in some cases) stamps, loosely held vertically in a frame, in which the stamps can slide up and down. They are lifted by cams on a horizontal rotating shaft. As the cam moves from under the stamp, the stamp falls onto the ore below, crushing the rock, and the lifting process is repeated at the next pass of the cam.
Each frame-and-stamp set is sometimes called a "battery" or, confusingly, a "stamp", and mills are sometimes categorised by how many stamps they have, e.g. a "10 stamp mill" has 10 sets. They usually are arranged linearly, but when a mill is enlarged, a new line of them may be constructed rather than extending the line. Abandoned mill sites (as documented by industrial archaeologists) will usually have linear rows of foundation sets as their most prominent visible feature as the overall apparatus can exceed 20 feet in height, requiring large foundations. Stamps are usually arranged in sets of five.
Some ore processing applications used large quantities of water, so some stamp mills are located near natural or artificial bodies of water. For example, the Redridge Steel Dam was built to supply stamp mills with process water. The California Stamp made its major debut at the 1894 San Francisco midsummer fair. It was the first type that generated electricity, powered by a wood-fed steam boiler. Steam started the wheels and belts turning; a generator that was also steam driven supplied the electricity for overhead lighting. This was a big plus for mining companies, enabling more production time.
History
The main components for water-powered stamp mills – water wheels, cams, and hammers – were known in the Hellenistic era in the Eastern
|
https://en.wikipedia.org/wiki/DOD-STD-2167A
|
DOD-STD-2167A (Department of Defense Standard 2167A), titled "Defense Systems Software Development", was a United States defense standard, published on February 29, 1988, which updated the less well-known DOD-STD-2167 published on 4 June 1985. This document established "uniform requirements for the software development that are applicable throughout the system life cycle." This revision was written to allow the contractor more flexibility and was a significant reorganization and reduction of the previous revision; e.g., where the previous revision prescribed pages of design and coding standards, this revision only gave one page of general requirements for the contractor's coding standards; while DOD-STD-2167 listed 11 quality factors to be addressed for each software component in the SRS, DOD-STD-2167A only tasked the contractor to address relevant quality factors in the SRS. Like DOD-STD-2167, it was designed to be used with DOD-STD-2168, "Defense System Software Quality Program".
On December 5, 1994 it was superseded by MIL-STD-498, which merged DOD-STD-2167A, DOD-STD-7935A, and DOD-STD-2168 into a single document, and addressed some vendor criticisms.
Criticism
One criticism of the standard was that it was biased toward the Waterfall Model. Although the document states "the contractor is responsible for selecting software development methods (for example, rapid prototyping)", it also required "formal reviews and audits" that seemed to lock the vendor into designing and documenting the system before any implementation began.
Another criticism was the focus on design documents, to the exclusion of Computer-Aided Software Engineering (CASE) tools being used in the industry. Vendors would often use the CASE tools to design the software, then write several standards-required documents to describe the CASE-formatted data. This created problems matching design documents to the actual product.
Predecessors
DOD-STD-2167 and DOD-STD-2168 (often mistakenly referred to a
|
https://en.wikipedia.org/wiki/Frege%27s%20theorem
|
In metalogic and metamathematics, Frege's theorem is a metatheorem that states that the Peano axioms of arithmetic can be derived in second-order logic from Hume's principle. It was first proven, informally, by Gottlob Frege in his 1884 Die Grundlagen der Arithmetik (The Foundations of Arithmetic) and proven more formally in his 1893 Grundgesetze der Arithmetik I (Basic Laws of Arithmetic I). The theorem was re-discovered by Crispin Wright in the early 1980s and has since been the focus of significant work. It is at the core of the philosophy of mathematics known as neo-logicism (at least of the Scottish School variety).
Overview
In The Foundations of Arithmetic (1884), and later, in Basic Laws of Arithmetic (vol. 1, 1893; vol. 2, 1903), Frege attempted to derive all of the laws of arithmetic from axioms he asserted as logical (see logicism). Most of these axioms were carried over from his Begriffsschrift; the one truly new principle was one he called the Basic Law V (now known as the axiom schema of unrestricted comprehension): the "value-range" of the function f(x) is the same as the "value-range" of the function g(x) if and only if ∀x[f(x) = g(x)]. However, not only did Basic Law V fail to be a logical proposition, but the resulting system proved to be inconsistent, because it was subject to Russell's paradox.
The inconsistency in Frege's Grundgesetze overshadowed Frege's achievement: according to Edward Zalta, the Grundgesetze "contains all the essential steps of a valid proof (in second-order logic) of the fundamental propositions of arithmetic from a single consistent principle." This achievement has become known as Frege's theorem.
Frege's theorem in propositional logic
In propositional logic, Frege's theorem refers to this tautology:
(P → (Q→R)) → ((P→Q) → (P→R))
The theorem already holds in one of the weakest logics imaginable, the constructive implicational calculus. The proof under the Brouwer–Heyting–Kolmogorov interpretation reads f ↦ g ↦ x ↦ f x (g x).
In words: given a proof f of P → (Q → R), a proof g of P → Q, and a proof x of P, the term f x proves Q → R and g x proves Q, so f x (g x) proves R.
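A machine-checkable rendering of this proof term, written here as a short Lean sketch (illustrative, not taken from the article):

-- Frege's theorem in propositional logic: the proof term takes
-- f : P → (Q → R), g : P → Q, and x : P, and returns f x (g x) : R.
theorem frege (P Q R : Prop) : (P → (Q → R)) → ((P → Q) → (P → R)) :=
  fun f g x => f x (g x)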
|
https://en.wikipedia.org/wiki/Slipform%20stonemasonry
|
Slipform stonemasonry is a method for making a reinforced concrete wall with stone facing in which stones and mortar are built up in courses within reusable slipforms. It is a cross between traditional mortared stone wall and a veneered stone wall. Short forms, up to 60 cm high, are placed on both sides of the wall to serve as a guide for the stone work. The stones are placed inside the forms with the good faces against the form work. Concrete is poured in behind the rocks. Rebar is added for strength, to make a wall that is approximately half reinforced concrete and half stonework. The wall can be faced with stone on one side or both sides. After the concrete sets enough to hold the wall together, the forms are "slipped" up to pour the next level. With slipforms it is easy for a novice to build free-standing stone walls.
History
Slipform stonemasonry was developed by New York architect Ernest Flagg in 1920. Flagg built a vertical framework as tall as the wall, then inserted 2x6 or 2x8 planks as forms to guide the stonework. When the masonry work reached the top of a plank, Flagg inserted another one, adding more planks until he reached the top of the wall. Helen and Scott Nearing modified the technique in Vermont in the 1930s, using slipforms that were slipped up the wall.
Gallery
Notes
The diagram of the slipform wall section is completely misleading without showing the 2nd form.
External links
Slipform Stone Masonry
Stonemasonry
Construction
Types of wall
|
https://en.wikipedia.org/wiki/Synchronization%20%28computer%20science%29
|
In computer science, synchronization is the task of coordinating multiple processes to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions.
Motivation
The need for synchronization does not arise merely in multi-processor systems but with any kind of concurrent processes, even in single-processor systems. Some of the main needs for synchronization are listed below:
Forks and Joins: When a job arrives at a fork point, it is split into N sub-jobs which are then serviced by N tasks. After being serviced, each sub-job waits until all other sub-jobs are done processing. Then, they are joined again and leave the system. Thus, parallel programming requires synchronization, as all the parallel processes wait for several other processes to occur.
Producer-Consumer: In a producer-consumer relationship, the consumer process is dependent on the producer process until the necessary data has been produced.
Exclusive-use resources: When multiple processes are dependent on a resource and need to access it at the same time, the operating system needs to ensure that only one process accesses it at a given point in time. This reduces concurrency.
Requirements
Thread synchronization is defined as a mechanism which ensures that two or more concurrent processes or threads do not simultaneously execute some particular program segment known as a critical section. Processes' access to a critical section is controlled by using synchronization techniques. When one thread starts executing the critical section (a serialized segment of the program), the other thread should wait until the first thread finishes. If proper synchronization techniques are not applied, it may cause a race condition, where the values of variables may be unpredictable and vary depending on the timings of context switches of the processes or threads.
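As a minimal illustration (not from the article), the following C++ sketch protects a critical section with a mutex so that concurrent increments of a shared counter cannot race:

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int counter = 0;          // shared data
std::mutex counter_mutex; // guards the critical section below

void worker() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex); // enter the critical section
        ++counter;                                       // serialized update
    }                                                    // lock released here
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& t : threads) t.join();
    std::cout << counter << '\n'; // always 400000; without the mutex this would be a race
}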
For example, suppose that there are three processes, namely 1, 2, and 3. All three of them are concu
|
https://en.wikipedia.org/wiki/Global%20Information%20Grid-Bandwidth%20Expansion
|
The Global Information Grid Bandwidth Expansion (GIG-BE) Program was a major United States Department of Defense (DOD) net-centric transformational initiative executed by DISA. Part of the Global Information Grid project, GIG-BE created a ubiquitous "bandwidth-available" environment to improve national security intelligence, surveillance and reconnaissance, information assurance, as well as command and control. Through GIG-BE, DISA leveraged DOD's existing end-to-end information transport capabilities, significantly expanding capacity and reliability to select Joint Staff-approved locations worldwide. GIG-BE achieved Full Operational Capability (FOC) on December 20, 2005.
Scope
This program provided increased bandwidth and diverse physical access to approximately 87 critical sites in the continental United States (CONUS), US Pacific Command (PACOM) and US European Command (EUCOM). These locations are interconnected via an expanded GIG core.
Capabilities and services
GIG-BE provides a secure, robust, optical terrestrial network that delivers very high-speed classified and unclassified Internet Protocol (IP) services to key operating locations worldwide. The Assistant Secretary of Defense for Networks and Information Integration's (ASD/NII) vision is a "color to every base," physically diverse network access, optical mesh upgrades for the backbone network, and regional/MAN upgrades, where needed. "A color to every base" implies that every site has an OC-192 (10 gigabits per second) of usable IP dedicated to that site.
Implementation
After extensive component integration and operational testing, implementation began in the middle of the 2004 fiscal year and extended through calendar year 2005. The initial implementation concentrated on six sites used during the proof of Initial Operational Capability (IOC), achieved on September 30, 2004. The GIG-BE Program Office conducted detailed site surveys at all of the approximately 87 Joint Staff-approved locations and
|
https://en.wikipedia.org/wiki/Pharyngula
|
The pharyngula is a stage in the embryonic development of vertebrates. At this stage, the embryos of all vertebrates are similar, having developed features typical of vertebrates, such as the beginning of a spinal cord. Named by William Ballard, the pharyngula stage follows the blastula, gastrula and neurula stages.
Morphological similarity in vertebrate embryos
At the pharyngula stage, all vertebrate embryos show remarkable similarities, i.e., it is a "phylotypic stage" of the sub-phylum, containing the following features:
notochord
dorsal hollow nerve cord
post-anal tail, and
a series of paired branchial grooves.
The branchial grooves are matched on the inside by a series of paired gill pouches. In fish, the pouches and grooves eventually meet and form the gill slits, which allow water to pass from the pharynx over the gills and out the body.
In the other vertebrates, the grooves and pouches disappear. In humans, the chief trace of their existence is the eustachian tube and auditory canal which (interrupted only by the eardrum) connect the pharynx with the outside of the head.
The existence of a common pharyngula stage for vertebrates was first proposed by German biologist Ernst Haeckel (1834–1919) in 1874.
The hourglass model
The observation of the conservation of animal morphology during the embryonic phylotypic period, where there is maximal similarity between the species within each animal phylum, has led to the proposition that embryogenesis diverges more extensively in the early and late stages than the middle stage, and is known as the hourglass model. Comparative genomic studies suggest that the phylotypic stage is the maximally conserved stage during embryogenesis.
See also
Evolutionary developmental biology
Embryogenesis
Embryo drawing
Recapitulation theory
References
Evolutionary biology
Embryology
|
https://en.wikipedia.org/wiki/Levenshtein%20coding
|
Levenshtein coding, developed by Vladimir Levenshtein, is a universal code for encoding the non-negative integers.
Encoding
The code of zero is "0"; to code a positive number:
Initialize the step count variable C to 1.
Write the binary representation of the number without the leading "1" to the beginning of the code.
Let M be the number of bits written in step 2.
If M is not 0, increment C, repeat from step 2 with M as the new number.
Write C "1" bits and a "0" to the beginning of the code.
The code begins:
0 → 0
1 → 10
2 → 110 0
3 → 110 1
4 → 1110 0 00
5 → 1110 0 01
6 → 1110 0 10
7 → 1110 0 11
8 → 1110 1 000
To decode a Levenshtein-coded integer:
Count the number of "1" bits until a "0" is encountered.
If the count is zero, the value is zero, otherwise
Start with a variable N, set it to a value of 1 and repeat count minus 1 times:
Read N bits, prepend "1", assign the resulting value to N
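A self-contained C++ sketch of these decoding steps, reading from a string of '0'/'1' characters (the function and parameter names are illustrative, not taken from the article's own listing):

#include <cstddef>
#include <cstdint>
#include <string>

// Decode one Levenshtein-coded integer starting at position pos;
// pos is advanced past the code word.
std::uint64_t levenshteinDecode(const std::string& bits, std::size_t& pos)
{
    // Count the number of "1" bits until a "0" is encountered.
    int count = 0;
    while (bits[pos] == '1') { ++count; ++pos; }
    ++pos;                      // skip the terminating "0"
    if (count == 0) return 0;   // the code "0" stands for zero

    // Start with N = 1 and repeat (count - 1) times:
    // read N bits, prepend an implicit leading "1", and let the result be the new N.
    std::uint64_t n = 1;
    for (int step = 0; step < count - 1; ++step) {
        std::uint64_t value = 1;                 // the implicit leading "1"
        for (std::uint64_t i = 0; i < n; ++i)
            value = (value << 1) | (bits[pos++] == '1');
        n = value;
    }
    return n;
}

For example, decoding the bit string "1110001" with this routine yields 5, matching the codes listed above.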
The Levenshtein code of a positive integer is always one bit longer than the Elias omega code of that integer. However, there is a Levenshtein code for zero, whereas Elias omega coding would require the numbers to be shifted so that a zero is represented by the code for one instead.
Example code
Encoding
// Writes the Levenshtein code of every integer read from `source` to `dest`.
// IntReader, BitWriter and BitStack are helper classes assumed by this listing.
void levenshteinEncode(char* source, char* dest)
{
    IntReader intreader(source);
    BitWriter bitwriter(dest);
    while (intreader.hasLeft())
    {
        int num = intreader.getInt();
        if (num == 0)
            bitwriter.outputBit(0);               // zero is encoded as the single bit "0"
        else
        {
            int c = 0;                            // step count C
            BitStack bits;                        // payload bits, deepest level pushed last
            do {
                int m = 0;
                for (int temp = num; temp > 1; temp >>= 1) // calculate floor(log2(num))
                    ++m;
                for (int i = 0; i < m; ++i)       // binary representation without the leading "1"
                    bits.pushBit((num >> i) & 1);
                num = m;                          // repeat with the bit count as the new number
                ++c;
            } while (num > 0);
            for (int i = 0; i < c; ++i)           // prefix: C "1" bits ...
                bitwriter.outputBit(1);
            bitwriter.outputBit(0);               // ... terminated by a "0"
            while (bits.length() > 0)             // payload: deepest level first, original bits last
                bitwriter.outputBit(bits.popBit());
        }
    }
}
Decoding
void levensh
|
https://en.wikipedia.org/wiki/Test%20management
|
Test management most commonly refers to the activity of managing a testing process. A test management tool is software used to manage tests (automated or manual) that have been previously specified by a test procedure. It is often associated with automation software. Test management tools often include requirement and/or specification management modules that allow automatic generation of the requirement test matrix (RTM), which is one of the main metrics to indicate functional coverage of a system under test (SUT).
Creating test definitions in a database
A test definition includes the test plan and its association with product requirements and specifications. Optionally, relationships can be defined between tests so that precedence can be established.
For example, if test A is a parent of test B and test A fails, then it may be useless to perform test B.
Tests should also be associated with priorities.
Every change on a test must be versioned so that the QA team has a comprehensive view of the history of the test.
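A minimal C++ sketch of this parent/child precedence rule (the type and field names are illustrative, not taken from any particular test management tool):

#include <cstddef>
#include <optional>
#include <string>
#include <vector>

enum class Status { NotRun, Passed, Failed };

struct TestCase {
    std::string name;
    int priority = 0;                  // tests should also carry priorities
    std::optional<std::size_t> parent; // index of the parent test, if any
    Status status = Status::NotRun;
};

// A child test is only worth running once its parent has passed.
bool shouldRun(const std::vector<TestCase>& tests, std::size_t i) {
    const TestCase& t = tests[i];
    if (!t.parent) return true;
    return tests[*t.parent].status == Status::Passed;
}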
Preparing test campaigns
This includes building some bundles of test cases and executing them (or scheduling their execution).
Execution can be either manual or automatic.
Manual execution
The user will have to perform all the test steps manually and inform the system of the result.
Some test management tools include a framework to interface the user with the test plan to facilitate this task. There are several ways to run tests. The simplest way to run a test is to run a test case. The test case can be associated with other test artifacts such as test plans, test scripts, test environments, test case execution records, and test suites.
Automatic execution
There are numerous ways of implementing automated tests.
Automatic execution requires the test management tool to be compatible with the tests themselves.
To do so, test management tools may propose proprietary automation frameworks or APIs to interface with third-party or proprietary automated tests.
Gener
|
https://en.wikipedia.org/wiki/Energy%20Institute
|
The Energy Institute (EI) is a professional organization for engineers and other professionals in energy-related fields. The EI was formed in 2003 by the merger of the Institute of Petroleum (dating back to 1913) and the Institute of Energy (dating back to 1925). It has an international membership of about 20,000 people and 200 companies. Its main office is at 61 New Cavendish Street, London. EI is a registered charity with a Royal Charter.
In the United Kingdom, EI has the authority to establish professional registration for the titles of Chartered Engineer, Incorporated Engineer, and Engineering Technician, as a licensed member institution of the Engineering Council. It is also licensed by the Society for the Environment to award Chartered Environmentalist status.
Formation
In 2003 the Institute of Petroleum and the Institute of Energy merged to form the Energy Institute. The offices of the Institute of Petroleum became the offices of the combined organization, and the offices of the Institute of Energy in London were closed.
History of The Institute of Petroleum
The Institute of Petroleum was formed in 1913 as The Institution of Petroleum Technologists (IPT). The first president was Boverton Redwood, and the second president was John Cadman; the IPT named its Cadman Award in his honour. In 1938 the organization expanded to cover the oil and gas industry and was renamed The Institute of Petroleum (IP).
History of the Institute of Energy
The Institution of Fuel Economy Engineers was founded in 1925, and the Institution of Fuel Technology in 1926. The two merged in 1927 as the Institute of Fuel (IoF). The first president after the merger was Alfred Mond. The IoF named the Melchett Award in his honour. In 1979 the organization became the Institute of Energy (IoE).
Structure
The EI is a registered charity, formed to promote the science of energy and fuels in all applications for the public benefit. It is governed by its council, which has a president and vice-p
|
https://en.wikipedia.org/wiki/Compact%20excavator
|
A compact or mini excavator is a tracked or wheeled vehicle with an approximate operating weight from 0.7 to 8.5 tonnes. It generally includes a standard backfill blade and features independent boom swing.
Hydraulic excavators are somewhat different from other construction equipment in that all movement and functions of the machine are accomplished through the transfer of hydraulic fluid. The compact excavator's work group and blade are activated by hydraulic fluid acting upon hydraulic cylinders. The excavator's slew (rotation) and travel functions are also activated by hydraulic fluid powering hydraulic motors.
History
The compact excavator has its origins with Akio Takeuchi, who founded the Takeuchi Manufacturing company in 1963. It was created as an improvement to overcome issues that other manufacturers typically ignored, and was first introduced in 1971 when Akio was asked to create a smaller excavator that could work specifically on house foundations. This excavator was more compact and versatile, and able to outperform its larger counterparts.
Structure
Most compact hydraulic excavators have three distinct assemblies: house, undercarriage and workgroup.
House
The house structure contains the operator's compartment, engine compartment, hydraulic pump and distribution components. The house structure is attached to the top of the undercarriage via a swing bearing. The house, along with the workgroup, is able to rotate or slew upon the undercarriage without limit due to a hydraulic distribution valve which supplies oil to the undercarriage components.
Slew
Slewing refers to rotating the excavator's house assembly. Unlike a conventional backhoe, the operator can slew the entire house and workgroup upon the undercarriage for spoil placement.
Undercarriage
The undercarriage consists of rubber or steel tracks, drive sprockets, rollers, idlers and associated components/structures. The undercarriage supports the house structure and the wor
|
https://en.wikipedia.org/wiki/Dnsmasq
|
dnsmasq is free software providing Domain Name System (DNS) caching, a Dynamic Host Configuration Protocol (DHCP) server, router advertisement and network boot features, intended for small computer networks.
dnsmasq has low requirements for system resources, can run on Linux, BSDs, Android and macOS, and is included in most Linux distributions. Consequently, it "is present in a lot of home routers and certain Internet of Things gadgets" and is included in Android.
Details
dnsmasq is a lightweight, easy to configure DNS forwarder, designed to provide DNS (and optionally DHCP and TFTP) services to a small-scale network. It can serve the names of local machines which are not in the global DNS.
dnsmasq's DHCP server supports static and dynamic DHCP leases, multiple networks and IP address ranges. The DHCP server integrates with the DNS server and allows local machines with DHCP-allocated addresses to appear in the DNS. dnsmasq caches DNS records, reducing the load on upstream nameservers and improving performance, and can be configured to automatically pick up the addresses of its upstream servers.
dnsmasq accepts DNS queries and either answers them from a small, local cache or forwards them to a real, recursive DNS server. It loads the contents of /etc/hosts, so that local host names which do not appear in the global DNS can be resolved. This also means that records added to the local /etc/hosts file in the form "0.0.0.0 annoyingsite.com" can be used to prevent references to "annoyingsite.com" from being resolved by the browser. Combined with ad-blocking site-list providers, this can quickly evolve into a local ad blocker. If done on a router, one can efficiently remove advertising content for an entire household or company.
dnsmasq supports modern Internet standards such as IPv6 and DNSSEC, network booting with support for BOOTP, PXE and TFTP and also Lua scripting.
Some Internet service-providers rewrite the NXDOMAIN (domain does not exist) responses
|
https://en.wikipedia.org/wiki/Barrier%20%28computer%20science%29
|
In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier.
Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier.
In concurrent computing, a barrier may be in a raised or lowered state. The term latch is sometimes used to refer to a barrier that starts in the raised state and cannot be re-raised once it is in the lowered state. The term count-down latch is sometimes used to refer to a latch that is automatically lowered once a pre-determined number of threads/processes have arrived.
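As a concrete illustration (not from the article), C++20 ships a reusable barrier, std::barrier, and a single-use count-down latch, std::latch; a minimal sketch of the former:

#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 4;
    std::barrier sync_point(kThreads);   // reusable barrier for 4 threads

    std::vector<std::jthread> workers;
    for (int id = 0; id < kThreads; ++id) {
        workers.emplace_back([id, &sync_point] {
            std::printf("thread %d finished phase 1\n", id);
            sync_point.arrive_and_wait();  // no thread starts phase 2 early
            std::printf("thread %d starting phase 2\n", id);
        });
    }
}   // the jthreads join automatically on destruction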
Implementation
The basic barrier has two main variables: one records the pass/stop state of the barrier, and the other keeps the total number of threads that have entered the barrier. The barrier state is initialized to "stop" by the first threads coming into the barrier. Whenever a thread enters, it checks the number of threads already in the barrier; only if it is the last one does the thread set the barrier state to "pass", so that all the threads can get out of the barrier. Otherwise, the incoming thread is trapped in the barrier and keeps testing whether the barrier state has changed from "stop" to "pass", and it gets out only when the state changes to "pass". The following C++ code demonstrates this procedure.
struct barrier_type
{
// how many processors have entered the barrier
// initialize to 0
int arrive_counter;
// how many processors have exi
|
https://en.wikipedia.org/wiki/Military%20Cryptanalytics
|
Military Cryptanalytics (or MILCRYP as it is sometimes known) is a revision by Lambros D. Callimahos of the series of books written by William F. Friedman under the title Military Cryptanalysis. It may also contain contributions by other cryptanalysts. It was a training manual for National Security Agency and military cryptanalysts. It was published for government use between 1957 and 1977, though parts I and II were written in 1956 and 1959.
Callimahos on the work
From the Introduction in Part I, Volume I, by Callimahos:
"This text represents an extensive expansion and revision, both in scope and content, of the earlier work entitled 'Military Cryptanalysis, Part I' by William F. Friedman. This expansion and revision was necessitated by the considerable advancement made in the art since the publication of the previous text."
Callimahos referred to parts III–VI at the end of the first volume:
"...Part III will deal with varieties of aperiodic substitution systems, elementary cipher devices and cryptomechanisms, and will embrace a detailed treatment of cryptomathematics and diagnostic tests in cryptanalysis; Part IV will treat transposition and fractioning systems, and combined substitution-transposition systems; Part V will treat the reconstruction of codes, and the solution of enciphered code systems, and Part VI will treat the solution of representative machine cipher systems."
However, parts IV–VI were never completed.
Declassification
Both Military Cryptanalytics and Military Cryptanalysis have been subjects of Mandatory Declassification Review (MDR) requests, including one by John Gilmore in 1992-1993 and two by Charles Varga in 2004 and 2016.
All four parts of Military Cryptanalysis and the first two parts of the Military Cryptanalytics series have been declassified. The third part of Military Cryptanalytics was declassified in part in December 2020 and published by GovernmentAttic.org in 2021. In 1984 NSA released copies of Military Cryptanalytics parts I
|
https://en.wikipedia.org/wiki/Universe%20%28Unix%29
|
In some versions of the Unix operating system, the term universe was used to denote some variant of the working environment. During the late 1980s, most commercial Unix variants were derived from either System V or BSD. Most versions provided both BSD and System V universes and allowed the user to switch between them. Each universe, typically implemented by separate directory trees or separate filesystems, usually included different versions of commands, libraries, man pages, and header files. While such a facility offered the ability to develop applications portable across both System V and BSD variants, the requirements in disk space and maintenance (separate configuration files, twice the work in patching systems) gave them a problematic reputation. Systems that offered this facility included Harris/Concurrent's CX/UX, Convex's Convex/OS, Apollo's Domain/OS (version 10 only), Pyramid's DC/OSx (dropped in SVR4-based version 2), Concurrent's Masscomp/RTU, MIPS Computer Systems' RISC/os, Sequent's DYNIX/ptx and Siemens' SINIX.
Some versions of System V Release 4 retain a system similar to the dual-universe concept, keeping BSD commands (which behave differently from classic System V commands), BSD header files and BSD library files in separate directories. Such a directory layout can also be found in NeXTSTEP and OPENSTEP, as well as Solaris.
External links
Sven Mascheck, DYNIX 3.2.0 and SINIX V5.20 Universes
Unix
|
https://en.wikipedia.org/wiki/Dundee%20Society
|
The Dundee Society was a society of graduates of CA-400, a National Security Agency course in cryptology devised by Lambros D. Callimahos, which included the Zendian Problem (a practical exercise in traffic analysis and cryptanalysis). The class was held once a year, and new members were inducted into the Society upon completion of the class. The Society was founded in the mid-1950s and continued on after Callimahos' retirement from NSA in 1976. The last CA-400 class was held at NSA in 1979, formally closing the society's membership rolls.
The society took its name from an empty jar of Dundee Marmalade that Callimahos kept on his desk for use as a pencil caddy. Callimahos came up with the society's name while trying to schedule a luncheon for former CA-400 students at the Ft. Meade Officers' Club; being unable to use either the course name or the underlying government agency's name for security reasons, he spotted the ceramic Dundee jar and decided to use "The Dundee Society" as the cover name for the luncheon reservation. CA-400 students were presented with ceramic Dundee Marmalade jars at the close of the course as part of the induction ceremony into the Dundee Society. When Dundee switched from ceramic to glass jars, Callimahos would still present graduates with ceramic Dundee jars, but the jars were then collected back up for use in next year's induction ceremony, and members were "encouraged" to seek out Dundee jars for their own collections if they wished to have a permanent token of induction.
See also
American Cryptogram Association
National Cryptologic School
References
National Security Agency
Cryptography organizations
Cryptologic education
Clubs and societies in the United States
|
https://en.wikipedia.org/wiki/Zendian%20problem
|
The Zendian problem was an exercise in communication intelligence operations (mainly traffic analysis and cryptanalysis) devised by Lambros D. Callimahos as part of an advanced course, CA-400, that Callimahos taught to National Security Agency cryptanalysts starting in the 1950s.
Content
The scenario involves 375 radio messages said to have been intercepted on December 23 by the US Army contingent of a United Nations force landed on the fictional island of Zendia in the Pacific Ocean.
A typical intercept looks like this:
XYR DE OWN 4235KCS 230620T USM-99/00091
9516 8123 0605 7932 8423 5095 8444 6831
JAAAJ EUEBD OETDN GXAWR SUTEU EIWEN YUENN ODEUH RROMM EELGE
AEGID TESRR RASEB ENORS RNOMM EAYTU NEONT ESFRS NTCRO QCEET
OCORE IITLP OHSRG SSELY TCCSV SOTIU GNTIV EVOMN TMPAA CIRCS
ENREN OTSOI ENREI EKEIO PFRNT CDOGE NYFPE TESNI EACEA ISTEM
SOFEA TROSE EQOAO OSCER HTTAA LUOUY LSAIE TSERR ESEPA PHVDN
HNNTI IARTX LASLD URATT OPPLO AITMW OTIAS TNHIR DCOUT NMFCA
SREEE USSDS DHOAH REEXI PROUT NTTHD JAAAJ EUEBD
For each message, the first line is provided by the intercept operator, giving call signs, frequency, time, and reference number. The rest of the message is a transcript of the Morse code transmission.
At the beginning of the intercepted message there is a header which consists of 8 four-digit groups. Initially, the meaning of the numeric header is not known; the meanings of various components of this header (such as a serial number assigned by the transmitting organization's message center) can be worked out through traffic analysis.
The rest of the message consists of "indicators" and ciphertext; the first group is evidently a "discriminant" indicating the cryptosystem used, and (depending on the cryptosystem) some or all of the second group may contain a message-specific keying element such as initial rotor settings. The first two groups are repeated at the end of the message, which allows correction of garbled indicators. Th
|
https://en.wikipedia.org/wiki/MTR%20%28software%29
|
My traceroute, originally named Matt's traceroute (MTR), is a computer program that combines the functions of the traceroute and ping programs in one network diagnostic tool.
MTR probes routers on the route path by limiting the number of hops individual packets may traverse, and listening for the responses announcing their expiry. It will regularly repeat this process, usually once per second, and keep track of the response times of the hops along the path.
History
The original Matt's traceroute program was written by Matt Kimball in 1997. Roger Wolff took over maintaining MTR (renamed My traceroute) in October 1998.
Fundamentals
MTR is licensed under the terms of the GNU General Public License (GPL) and works under modern Unix-like operating systems. It normally works under the text console, but it also has an optional GTK+-based graphical user interface (GUI).
MTR relies on Internet Control Message Protocol (ICMP) Time Exceeded (type 11, code 0) packets coming back from routers, or ICMP Echo Reply packets when the packets have hit their destination host. MTR also has a User Datagram Protocol (UDP) mode (invoked with "-u" on the command line or pressing the "u" key in the curses interface) that sends UDP packets, with the time to live (TTL) field in the IP header increasing by one for each probe sent, toward the destination host. When the UDP mode is used, MTR relies on ICMP port unreachable packets (type 3, code 3) when the destination is reached.
MTR also supports IPv6 and works in a similar manner but instead relies on ICMPv6 messages.
The tool is often used for network troubleshooting. By showing a list of routers traversed, and the average round-trip time as well as packet loss to each router, it allows users to identify links between two given routers responsible for certain fractions of the overall latency or packet loss through the network. This can help identify network overuse problems.
Examples
This example shows MTR running on Linux tracing a route from the
|
https://en.wikipedia.org/wiki/STED%20microscopy
|
Stimulated emission depletion (STED) microscopy is one of the techniques that make up super-resolution microscopy. It creates super-resolution images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point, and thus enhancing the achievable resolution for a given system. It was developed by Stefan W. Hell and Jan Wichmann in 1994, and was first experimentally demonstrated by Hell and Thomas Klar in 1999. Hell was awarded the Nobel Prize in Chemistry in 2014 for its development. In 1986, V.A. Okhonin (Institute of Biophysics, USSR Academy of Sciences, Siberian Branch, Krasnoyarsk) had patented the STED idea. This patent was unknown to Hell and Wichmann in 1994.
STED microscopy is one of several types of super resolution microscopy techniques that have recently been developed to bypass the diffraction limit of light microscopy to increase resolution. STED is a deterministic functional technique that exploits the non-linear response of fluorophores commonly used to label biological samples in order to achieve an improvement in resolution, that is to say STED allows for images to be taken at resolutions below the diffraction limit. This differs from the stochastic functional techniques such as Photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) as these methods use mathematical models to reconstruct a sub diffraction limit from many sets of diffraction limited images.
Background
In traditional microscopy, the resolution that can be obtained is limited by the diffraction of light. Ernst Abbe developed an equation to describe this limit. The equation is:
D = λ / (2 NA) = λ / (2 n sin α),
where D is the diffraction limit, λ is the wavelength of the light, and NA is the numerical aperture, or the refractive index of the medium multiplied by the sine of the angle of incidence. n describes the refractive index of the specimen, α measures the solid half‐angle from which light is gathered by an objective, λ is
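For a sense of scale (illustrative values, not from the article), green light at λ = 532 nm imaged through an oil-immersion objective with NA = 1.4 gives a diffraction limit of roughly
\[
D = \frac{\lambda}{2\,\mathrm{NA}} = \frac{532\ \text{nm}}{2 \times 1.4} \approx 190\ \text{nm}.
\]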
|
https://en.wikipedia.org/wiki/189%20%28number%29
|
189 (one hundred [and] eighty-nine) is the natural number following 188 and preceding 190.
In mathematics
189 is a centered cube number and a heptagonal number.
The centered cube numbers are the sums of two consecutive cubes, and 189 can be written as a sum of two cubes in two ways: 189 = 4³ + 5³ and 189 = 6³ + (−3)³. The smallest number that can be written as the sum of two positive cubes in two ways is 1729.
There are 189 zeros among the decimal digits of the positive integers with at most three digits.
The largest prime number that can be represented in 256-bit arithmetic is the "ultra-useful prime" 2²⁵⁶ − 189, used in quasi-Monte Carlo methods and in some cryptographic systems.
See also
The year AD 189 or 189 BC
List of highways numbered 189
References
Integers
|
https://en.wikipedia.org/wiki/Little%20b%20%28programming%20language%29
|
Little b is a domain-specific programming language, more specifically, a modeling language, designed to build modular mathematical models of biological systems. It was designed and authored by Aneil Mallavarapu. Little b is being developed in the Virtual Cell Program at Harvard Medical School, headed by mathematician Jeremy Gunawardena.
This language is based on Lisp and is meant to allow modular programming to model biological systems. It allows more flexibility, facilitating the rapid change that is required to accurately capture complex biological systems.
The language draws on techniques from artificial intelligence and symbolic mathematics, and provides syntactic conveniences derived from object-oriented languages. The language was originally denoted with a lowercase b (distinguishing it from B, the predecessor to the widely used C programming language), but the name was eventually changed to "little b" to avoid confusion and to pay homage to Smalltalk.
References
Krieger K. "Life in Silico: A Different Kind of Intelligent Design". Science. 312(5771):189–190.
https://arstechnica.com/uncategorized/2008/07/little-b-project-creates-biology-specific-programming-system/
https://www.computerworld.com/article/2551598/big-things-from-little-b.html
External links
Biology enters 'The Matrix' through new computer language EurekAlert article
Programming languages
Dynamic programming languages
Dynamically typed programming languages
Object-oriented programming languages
Lisp (programming language)
Specification languages
Cross-platform free software
Programming languages created in 2004
|
https://en.wikipedia.org/wiki/Bol%20loop
|
In mathematics and abstract algebra, a Bol loop is an algebraic structure generalizing the notion of group. Bol loops are named for the Dutch mathematician Gerrit Bol, who introduced them in 1937.
A loop, L, is said to be a left Bol loop if it satisfies the identity
a(b(ac)) = (a(ba))c, for every a, b, c in L,
while L is said to be a right Bol loop if it satisfies
((ca)b)a = c((ab)a), for every a, b, c in L.
These identities can be seen as weakened forms of associativity, or a strengthened form of (left or right) alternativity.
A loop is both left Bol and right Bol if and only if it is a Moufang loop. Alternatively, a right or left Bol loop is Moufang if and only if it satisfies the flexible identity a(ba) = (ab)a . Different authors use the term "Bol loop" to refer to either a left Bol or a right Bol loop.
Properties
The left (right) Bol identity directly implies the left (right) alternative property, as can be shown by setting b to the identity.
It also implies the left (right) inverse property, as can be seen by setting b to the left (right) inverse of a, and using loop division to cancel the superfluous factor of a. As a result, Bol loops have two-sided inverses.
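Spelling out these two substitutions for the left Bol identity (a sketch; a^λ denotes the left inverse of a, so a^λ a = e):
\[
\begin{aligned}
b = e:\quad & a(e(ac)) = (a(ea))c \;\Longrightarrow\; a(ac) = (aa)c && \text{(left alternative law)}\\
b = a^{\lambda}:\quad & a\bigl(a^{\lambda}(ac)\bigr) = \bigl(a(a^{\lambda}a)\bigr)c = ac \;\Longrightarrow\; a^{\lambda}(ac) = c && \text{(left inverse property)}
\end{aligned}
\]
The right Bol case is symmetric, using the right inverse and the identity ((ca)b)a = c((ab)a).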
Bol loops are also power-associative.
Bruck loops
A Bol loop where the aforementioned two-sided inverse satisfies the automorphic inverse property, (ab)−1 = a−1 b−1 for all a,b in L, is known as a (left or right) Bruck loop or K-loop (named for the American mathematician Richard Bruck). The example in the following section is a Bruck loop.
Bruck loops have applications in special relativity; see Ungar (2002). Left Bruck loops are equivalent to Ungar's (2002) gyrocommutative gyrogroups, even though the two structures are defined differently.
Example
Let L denote the set of n x n positive definite, Hermitian matrices over the complex numbers. It is generally not true that the matrix product AB of matrices A, B in L is Hermitian, let alone positive definite. However, there exists a unique P in L and a unique unitary matrix U such that A
|
https://en.wikipedia.org/wiki/Phase%20boundary
|
In thermal equilibrium, each phase (i.e. liquid, solid etc.) of physical matter comes to an end at a transitional point, or spatial interface, called a phase boundary, due to the immiscibility of the matter with the matter on the other side of the boundary. This immiscibility is due to at least one difference between the two substances' corresponding physical properties. The behavior of phase boundaries has been a developing subject of interest and an active research field, called interface science, in physics and mathematics for almost two centuries, due partly to phase boundaries naturally arising in many physical processes, such as the capillarity effect, the growth of grain boundaries, the physics of binary alloys, and the formation of snowflakes.
One of the oldest problems in the area dates back to Lamé and Clapeyron, who studied the freezing of the ground. Their goal was to determine the thickness of the solid crust generated by the cooling of a liquid at constant temperature filling the half-space. In 1889, Stefan, while working on the freezing of the ground, developed these ideas further and formulated the two-phase model which came to be known as the Stefan problem.
The proof of the existence and uniqueness of a solution to the Stefan problem was developed in many stages. The general existence and uniqueness of solutions was established by Shoshana Kamin.
References
Phase transitions
Applied mathematics
|
https://en.wikipedia.org/wiki/Comparison%20of%20iPod%20file%20managers
|
This is a list of iPod file managers, i.e., software that permits the transfer of media content between an iPod and a computer or vice versa.
iTunes is the official iPod management software, but third parties have created alternatives to work around restrictions in iTunes; for example, iTunes restricts transferring content from an iPod to a computer.
General
Media organization and transfer features
iPod syncing and maintenance features
iPhone & iPod Touch compatibility
See also
iPod
iTunes
iPhone
References
iPod Managers
ITunes
|
https://en.wikipedia.org/wiki/Scrum%20%28software%20development%29
|
Scrum is an agile project management system commonly used in software development and other industries.
Scrum prescribes that teams break work into goals to be completed within time-boxed iterations, called sprints. Each sprint is no longer than one month and commonly lasts two weeks. The scrum team assesses progress in time-boxed, stand-up meetings of up to 15 minutes, called daily scrums. At the end of the sprint, the team holds two further meetings: a sprint review to demonstrate the work for stakeholders and solicit feedback, and an internal sprint retrospective.
Scrum's approach to product development involves bringing decision-making authority to an operational level. Unlike a sequential approach to product development, scrum is an iterative and incremental framework for product development. Scrum allows for continuous feedback and flexibility, requiring teams to self-organize by encouraging physical co-location or close online collaboration, and mandating frequent communication among all team members. The flexible and semi-unplanned approach of scrum is based in part on the notion of requirements volatility, that stakeholders will change their requirements as the project evolves.
History
The use of the term scrum in software development came from a 1986 Harvard Business Review paper titled "The New New Product Development Game" by Hirotaka Takeuchi and Ikujiro Nonaka. Based on case studies from manufacturing firms in the automotive, photocopier, and printer industries, the authors outlined a new approach to product development for increased speed and flexibility. They called this the rugby approach, as the process involves a single cross-functional team operating across multiple overlapping phases, in which the team "tries to go the distance as a unit, passing the ball back and forth". The authors later developed scrum in their book, The Knowledge-Creating Company.
In the early 1990s, Ken Schwaber used what would become scrum at his company, Adva
|
https://en.wikipedia.org/wiki/Jordan%27s%20lemma
|
In complex analysis, Jordan's lemma is a result frequently used in conjunction with the residue theorem to evaluate contour integrals and improper integrals. The lemma is named after the French mathematician Camille Jordan.
Statement
Consider a complex-valued, continuous function f, defined on a semicircular contour
C_R = {R e^(iθ) : θ ∈ [0, π]}
of positive radius R lying in the upper half-plane, centered at the origin. If the function is of the form
f(z) = e^(iaz) g(z), z ∈ C_R,
with a positive parameter a, then Jordan's lemma states the following upper bound for the contour integral:
|∫_{C_R} f(z) dz| ≤ (π/a) max_{θ ∈ [0, π]} |g(R e^(iθ))|,
with equality when g vanishes everywhere, in which case both sides are identically zero. An analogous statement for a semicircular contour in the lower half-plane holds when a < 0.
Remarks
If f is continuous on the semicircular contour C_R for all large R and
M_R := max_{θ ∈ [0, π]} |g(R e^(iθ))| → 0 as R → ∞,   (∗)
then by Jordan's lemma
lim_{R → ∞} ∫_{C_R} f(z) dz = 0.
For the case a = 0, see the estimation lemma.
Compared to the estimation lemma, the upper bound in Jordan's lemma does not explicitly depend on the length of the contour C_R.
Application of Jordan's lemma
Jordan's lemma yields a simple way to calculate the integral along the real axis of functions f(z) = e^(iaz) g(z) holomorphic on the upper half-plane and continuous on the closed upper half-plane, except possibly at a finite number of non-real points z_1, z_2, …, z_n. Consider the closed contour C, which is the concatenation of the paths C_1 (the semicircular arc C_R) and C_2 (the segment of the real axis from −R to R) shown in the picture. By definition,
∮_C f(z) dz = ∫_{C_1} f(z) dz + ∫_{C_2} f(z) dz.
Since on C_2 the variable z is real, the second integral is real:
∫_{C_2} f(z) dz = ∫_{−R}^{R} f(x) dx.
The left-hand side may be computed using the residue theorem to get, for all R larger than the maximum of |z_1|, |z_2|, …, |z_n|,
∮_C f(z) dz = 2πi Σ_{k=1}^{n} Res(f, z_k),
where Res(f, z_k) denotes the residue of f at the singularity z_k. Hence, if f satisfies condition (∗), then taking the limit as R tends to infinity, the contour integral over C_1 vanishes by Jordan's lemma and we get the value of the improper integral
∫_{−∞}^{∞} f(x) dx = 2πi Σ_{k=1}^{n} Res(f, z_k).
Example
The function
f(z) = e^(iz) / (1 + z²)
satisfies the condition of Jordan's lemma with a = 1 for all R > 1. Note that, for R > 1,
M_R = max_{θ ∈ [0, π]} |1 / (1 + R² e^(2iθ))| ≤ 1 / (R² − 1),
hence (∗) holds. Since the only singularity of f in the upper half-plane is at z = i, the above application yields
∫_{−∞}^{∞} e^(ix) / (1 + x²) dx = 2πi Res(f, i).
Si
|
https://en.wikipedia.org/wiki/Croatian%20National%20Corpus
|
The Croatian National Corpus (Hrvatski nacionalni korpus, HNK) is the biggest and most important corpus of Croatian. Its compilation started in 1998 at the Institute of Linguistics of the Faculty of Humanities and Social Sciences, University of Zagreb, following the ideas of Marko Tadić. The theoretical foundations and the expression of the need for a general-purpose, representative and multi-million corpus of Croatian started to appear even earlier. The Croatian National Corpus is compiled from selected texts written in Croatian covering all fields, topics, genres and styles: from literary and scientific texts to textbooks, newspapers, user groups and chat rooms.
The initial composition was divided in two constituents:
30-million corpus of contemporary Croatian (30m) where samples from texts from 1990 on were included. The criteria for inclusion of text samples were: written by native speakers, different fields, genres and topics. Translated text or poetry were excluded.
Croatian Electronic Text Archive (HETA) where the complete text were included, particularly serial publications (volumes, series, editions etc.) which would imbalance the 30m if they were inserted there.
Since 2004, with the adoption of the concept of a 3rd-generation corpus, the two-constituent structure has been abandoned in favor of several subcorpora and a larger size. Since 2005, HNK has contained 105 million tokens and is composed of a number of different subcorpora, which can be searched individually or all together as a whole corpus. In 2004, HNK also migrated to a new server platform, namely the Manatee/Bonito server-client architecture. For searching HNK (today still with free test access), the free client program Bonito is needed. The author of this corpus manager is Pavel Rychlý from the Natural Language Processing Laboratory of the Faculty of Informatics, Masaryk University in Brno, Czech Republic. Its interface features complex and more elaborate queries over the corpus, different types of statistical results, total or partial
|
https://en.wikipedia.org/wiki/Blue-listed
|
Blue-listed species are species that belong to the Blue List, which includes any indigenous species or subspecies (taxa) considered to be vulnerable in its locale, in order to provide early warning to federal and regional governments. Vulnerable taxa are of special concern because of characteristics that make them particularly sensitive to human activities or natural events. Blue-listed taxa are at risk, but are not extirpated, endangered or threatened.
History
The concept of a Blue List was derived in 1971 by Robert Arbib from the National Audubon Society in his article, "Announcing-- The Blue List: an 'early warning' system for birds". The article stated that the list was made up of species that appear to be locally common in North America but are undergoing non-cyclic declines. Starting in 1971, it was used to list vulnerable bird species throughout North America. Unlike the US Fish and Wildlife Endangered Species List, the Blue List was made to identify patterns of population losses for regional bird populations before they could be listed as endangered. Every decade after its release, the list is revisited and revised by regional editors, and species get "nominated" to be added to the list. Since then, species that are part of the Blue List have been referred to as Blue-listed species.
Status Ranks
Initially, in order to identify the types of risk that each Blue-listed species faces, the Blue List defined various categories for Blue-listed species based on the following letter codes:
"A" : the species population is "greatly down in numbers"
"B" : the species population is "down in numbers"
"C" : the species population is experiencing no change
"D" : the species population is "up in numbers"
"E" : the species population is "greatly up in numbers"
Using this metric, regional editors were able to report on species along with their status ranks in order to identify the population trends each species is facing. Later on, t
|
https://en.wikipedia.org/wiki/Black-ray%20goby
|
Stonogobiops nematodes, the filament-finned prawn-goby, the antenna goby, the high-fin goby, the red-banded goby, the high-fin red-banded goby, the striped goby, the barber-pole goby, or the black-ray goby, is a species of marine goby native to the Indian Ocean and western Pacific Ocean from the Seychelles to the Philippines and Bali.
Physical features
Adult fish can grow up to in length, with the striking pointed dorsal fin becoming more raised and pronounced in adulthood. This elongated fin is the most obvious distinguishing feature between the black-ray goby and its close cousin, the yellow snout goby (S. xanthorhinica). The fish are coloured with four diagonal brown stripes across a white body, and a distinctive yellow head.
Distinguishing males from females of this species is practically impossible for anyone but a specialist.
Natural environment
This goby inhabits sandy or sand-rubble bottoms adjacent to reefs at depths of from . It is one of several species that form commensal relationships with Randall's pistol shrimp (Alpheus randalli).
Behaviour in the wild
This species shares a burrow with its shrimp partner. The goby has much better eyesight than the shrimp, and, as such, acts as the watchman for both of them, keeping an eye out for danger. The shrimp spends the day digging a burrow in the sand in which both live. Burrows usually measure up to one inch in diameter, and can reach up to four feet in length. The two animals maintain continuous contact, with the shrimp placing one of its antennae permanently on the goby's tail. When danger threatens, the goby will make continuous flicks of its tail, warning the shrimp there is a predator nearby, and the shrimp will remain safely in the burrow. If the danger reaches a certain level, the goby will dart into the burrow after the shrimp.
At night, the goby will go into the burrow, and the shrimp will collapse the entrance to close
|
https://en.wikipedia.org/wiki/Globule%20%28CDN%29
|
Globule was an open-source collaborative content delivery network developed at the Vrije Universiteit in Amsterdam since 2006. It is implemented as a third-party module for the Apache HTTP Server that allows any given server to replicate its documents to other Globule servers. This can improve the site's performance, keep the site available to its clients even if some servers are down, and to a certain extent help it resist flash crowds and the Slashdot effect. The project is discontinued and is no longer maintained.
Globule takes care of maintaining consistency between the replicas, monitoring the servers, and automatically redirecting clients to one of the available replicas. Globule also supports the replication of PHP documents accessing MySQL databases. It runs on Unix and Windows systems.
See also
Codeen
References
External links
A paper describing Globule's architecture as a collaborative content delivery network
Distributed data storage
Apache httpd modules
|
https://en.wikipedia.org/wiki/Process
|
A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic.
Things called a process include:
Business and management
Business process, activities that produce a specific service or product for customers
Business process modeling, activity of representing processes of an enterprise in order to deliver improvements
Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured.
Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management
Process area, related processes within an area which together satisfy an important goal for improvements within that area
Process costing, a cost allocation procedure of managerial accounting
Process management (project management), a systematic series of activities directed towards planning, monitoring the performance and causing an end result in engineering activities, business process, manufacturing processes or project management
Process-based management, a management approach that views a business as a collection of processes
Law
Due process, the concept that governments must respect the rule of law
Legal process, the proceedings and records of a legal case
Service of process, the procedure of giving official notice of a legal proceeding
Science and technology
The general concept of the scientific process, see scientific method
Process theory, the scientific study of processes
Industrial processes, which consist of the purposeful sequencing of tasks that combine resources to produce a desired output
Biology and psychology
Process (anatomy), a projection or outgrowth of tissue from a larger body
Biological process, a process of a living organism
Cognitive process, such as attention, memory, language use, reasoning, and problem solving
Mental process, a function or processes of the mind
Neuronal process, also neurite
|
https://en.wikipedia.org/wiki/UPDM
|
The Unified Profile for DoDAF/MODAF (UPDM) is the product of an Object Management Group (OMG) initiative to develop a modeling standard that supports both the USA Department of Defense Architecture Framework (DoDAF) and the UK Ministry of Defence Architecture Framework (MODAF). The current UPDM, the Unified Profile for DoDAF and MODAF, was based on earlier work with the same acronym and a slightly different name, the UML Profile for DoDAF and MODAF.
History
The UPDM initiative began in 2005, when the OMG issued a Request for Proposal. This request was based on the then current versions of DoDAF (1.0) and MODAF (1.1). While the specification submission development was underway, significant changes were made to the DoDAF and MODAF. Therefore, although a UPDM 1.0 beta 1 specification was adopted by the OMG in 2007, and UPDM 1.0 beta 2 was submitted by an OMG Finalization Task Force in 2008, UPDM 1.0 beta 2 has not been endorsed by the US Department of Defense or the UK Ministry of Defence (MOD).
The UPDM 1.0 specification, the result of additional work by many members of the original submission teams, is architecturally aligned with DoDAF 1.5 and MODAF 1.2. This version of the specification has been endorsed by both the US DoD and the UK MOD.
The UPDM 2.0 specification was released in January 2013, and UPDM 2.1 was released in August 2013.
Motivation for unified profile for DoDAF/MODAF
DoDAF v1.5 Volume II includes guidance for representing DoDAF architecture products using UML. MODAF also provides similar guidance, and its meta-model is specified as a UML profile abstract syntax (i.e. extensions of UML 2.1 metaclasses). MODAF differs from DoDAF however, so the MODAF Meta-Model is not suitable for use in DoDAF tools. Differences in vendor implementations have resulted in interoperability issues between tools and additional training requirements for users. Also, the current DoDAF UML implementation guidance is based on a previous version of UML (UML v1.x), a
|
https://en.wikipedia.org/wiki/Voodoo3
|
Voodoo3 was a series of computer gaming video cards manufactured and designed by 3dfx Interactive. It was the successor to the company's high-end Voodoo2 line and was based heavily upon the older Voodoo Banshee product. Voodoo3 was announced at COMDEX '98 and arrived on store shelves in early 1999. The Voodoo3 line was the first product manufactured by the combined STB Systems and 3dfx.
History
The 'Avenger' graphics core was originally conceived immediately after Banshee. Due to mismanagement at 3dfx, the next-generation 'Rampage' project suffered delays which would prove to be fatal to the entire company.
Avenger was pushed to the forefront as it offered a quicker time to market than the already delayed Rampage. Avenger was no more than the Banshee core with a second texture mapping unit (TMU) added - the same TMU which Banshee lost compared to Voodoo2. Avenger was thus merely a Voodoo2 with an integrated 128-bit 2D video accelerator and twice the clock speed.
Architecture and performance
Much was made of Voodoo3 (christened 'Avenger') and its 16-bit color rendering limitation. This was in fact quite complex, as Voodoo3 operated to full 32-bit precision (8 bits per channel, 16.7M colours) in its texture mappers and pixel pipeline as opposed to previous products from 3dfx and other vendors, which had only worked in 16-bit precision.
To save framebuffer space, the Voodoo3's rendering output was dithered to 16 bit. This offered better quality than running in pure 16-bit mode. However, a controversy arose over what happened next.
The Voodoo3's RAMDAC, which took the rendered frame from the framebuffer and generated the display image, performed a 2x2 box or 4x1 line filter on the dithered image to almost reconstruct the original 24-bit color render. 3dfx claimed this to be '22-bit' equivalent quality. As such, Voodoo3's framebuffer was not representative of the final output, and therefore, screenshots did not accurately portray Voodoo3's display
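To illustrate the general idea only (this is a toy single-channel analogue, not 3dfx's actual hardware algorithm), the following Python sketch quantizes an 8-bit value to 5 bits with a 2x2 ordered dither and then applies a 2x2 box filter; the filtered average lands close to the original value, which is the effect the '22-bit' claim refers to.

```python
# Toy single-channel analogue of "dither to 16-bit, then box-filter on scan-out".
# Illustration of the principle only, not 3dfx's hardware implementation.
BAYER = [[0.00, 0.50],
         [0.75, 0.25]]            # 2x2 ordered-dither thresholds

def to_5bit_dithered(value8, x, y):
    """Quantize an 8-bit channel (0-255) to a 5-bit level, spreading the
    rounding error spatially via the position-dependent dither threshold."""
    level = value8 / 255.0 * 31.0
    return min(31, int(level + BAYER[y % 2][x % 2]))

def to_8bit(level5):
    """Expand a 5-bit level back to the 8-bit range, as a RAMDAC would."""
    return round(level5 * 255.0 / 31.0)

original = 103                    # a flat 8-bit channel value
tile = [[to_8bit(to_5bit_dithered(original, x, y)) for x in range(2)]
        for y in range(2)]
print(tile)                       # [[99, 107], [107, 99]] stored in the framebuffer
print(sum(sum(row) for row in tile) / 4)   # the 2x2 box filter recovers ~103
```

A screenshot tool reading the framebuffer would see only the coarse dithered values in the tile, while the displayed, filtered output is much closer to the original, which is the discrepancy described above.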
|
https://en.wikipedia.org/wiki/Topological%20half-exact%20functor
|
In mathematics, a topological half-exact functor F is a functor from a fixed topological category (for example CW complexes or pointed spaces) to an abelian category (most frequently in applications, the category of abelian groups or the category of modules over a fixed ring) that has the following property: for each sequence of spaces, of the form:
X → Y → C(f)
where C(f) denotes a mapping cone, the sequence:
F(X) → F(Y) → F(C(f))
is exact. If F is a contravariant functor, it is half-exact if for each sequence of spaces as above,
the sequence F(C(f)) → F(Y) → F(X) is exact.
Homology is an example of a half-exact functor, and
cohomology (and generalized cohomology theories) are examples of contravariant half-exact functors.
If B is any fibrant topological space, the (representable) functor F(X)=[X,B] is half-exact.
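As a concrete illustration not drawn from the article itself (a standard textbook example), apply reduced first homology to the degree-2 self-map of the circle, whose mapping cone is the real projective plane:

```latex
% Half-exactness of \tilde H_1 on the cofibre sequence of the degree-2 map
% f\colon S^1 \to S^1, whose mapping cone C(f) is \mathbb{RP}^2.
\[
  \tilde H_1(S^1) \xrightarrow{\; f_* = \cdot 2 \;} \tilde H_1(S^1)
  \longrightarrow \tilde H_1(C(f)),
  \qquad\text{i.e.}\qquad
  \mathbb{Z} \xrightarrow{\;\cdot 2\;} \mathbb{Z} \longrightarrow \mathbb{Z}/2\mathbb{Z},
\]
% which is exact at the middle term: the image of multiplication by 2 equals
% the kernel of the quotient map \mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}.
```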
Homotopy theory
Homological algebra
|
https://en.wikipedia.org/wiki/EEMBC
|
EEMBC, the Embedded Microprocessor Benchmark Consortium, is a non-profit, member-funded organization formed in 1997, focused on the creation of standard benchmarks for the hardware and software used in embedded systems. The goal of its members is to make EEMBC benchmarks an industry standard for evaluating the capabilities of embedded processors, compilers, and the associated embedded system implementations, according to objective, clearly defined, application-based criteria. EEMBC members may contribute to the development of benchmarks, vote at various stages before public distribution, and accelerate testing of their platforms through early access to benchmarks and associated specifications.
Most Popular Benchmark Working Groups
In chronological order of development:
AutoBench 1.1 - single-threaded code for automotive, industrial, and general-purpose applications
Networking - single-threaded code associated with moving packets in networking applications.
MultiBench - multi-threaded code for testing scalability of multicore processors.
CoreMark - measures the performance of central processing units (CPU) used in embedded systems
BXBench - system benchmark measuring the web browsing user-experience, from the click/touch on a URL to final page rendered on the screen, and is not limited to measuring only JavaScript execution.
AndEBench-Pro - system benchmark providing a standardized, industry-accepted method of evaluating Android platform performance. It's available for free download in Google Play.
FPMark - multi-threaded code for both single- and double-precision floating-point workloads, as well as small, medium, and large data sets.
ULPMark - energy-measuring benchmark for ultra-low power microcontrollers; benchmarks include ULPMark-Core (with a focus on microcontroller core activity and sleep modes) and ULPMark-Peripheral (with a focus on microcontroller peripheral activity such as Analog-to-digital converter, Serial Peripheral Interface Bus, Real-time
|
https://en.wikipedia.org/wiki/Necrobiosis
|
Necrobiosis is the physiological death of a cell, and can be caused by conditions such as basophilia, erythema, or a tumor. It is identified both with and without necrosis.
Necrobiotic disorders are characterized by presence of necrobiotic granuloma on histopathology. Necrobiotic granuloma is described as aggregation of histiocytes around a central area of altered collagen and elastic fibers. Such a granuloma is typically arranged in a palisaded pattern.
It is associated with necrobiosis lipoidica and granuloma annulare.
Necrobiosis differs from apoptosis, which kills a damaged cell to protect the body from harm.
References
External links
Cellular processes
|
https://en.wikipedia.org/wiki/Ambiguity%20aversion
|
In decision theory and economics, ambiguity aversion (also known as uncertainty aversion) is a preference for known risks over unknown risks. An ambiguity-averse individual would rather choose an alternative where the probability distribution of the outcomes is known over one where the probabilities are unknown. This behavior was first introduced through the Ellsberg paradox (people prefer to bet on the outcome of an urn with 50 red and 50 black balls rather than to bet on one with 100 total balls but for which the number of black or red balls is unknown).
There are two categories of imperfectly predictable events between which choices must be made: risky and ambiguous events (also known as Knightian uncertainty). Risky events have a known probability distribution over outcomes while in ambiguous events the probability distribution is not known. The reaction is behavioral and still being formalized. Ambiguity aversion can be used to explain incomplete contracts, volatility in stock markets, and selective abstention in elections (Ghirardato & Marinacci, 2001).
The concept is expressed in the English proverb: "Better the devil you know than the devil you don't."
Difference from risk aversion
The distinction between ambiguity aversion and risk aversion is important but subtle. Risk aversion comes from a situation where a probability can be assigned to each possible outcome of a situation and it is defined by the preference between a risky alternative and its expected value. Ambiguity aversion applies to a situation when the probabilities of outcomes are unknown (Epstein 1999) and it is defined through the preference between risky and ambiguous alternatives, after controlling for preferences over risk.
Using the traditional two-urn Ellsberg choice, urn A contains 50 red balls and 50 blue balls while urn B contains 100 total balls (either red or blue) but the number of each is unknown. An individual who prefers a certain payoff strictly smaller than $10 over a bet t
|
https://en.wikipedia.org/wiki/SegaSoft
|
SegaSoft, originally headquartered in Redwood City, California and later San Francisco, was a joint venture by Sega and CSK (Sega's majority stockholder at the time), created in 1995 to develop and publish games for the PC and Sega Saturn, primarily in the North American market.
SegaSoft was responsible for, among other things, the Heat.net multiplayer game system and publishing the last few titles made by Rocket Science Games.
History
In 1996, SegaSoft announced that they would be publishing games for all viable platforms, not just Saturn and PC. This, however, never came to fruition, as in January 1997 SegaSoft restructured to focus on the PC and online gaming.
SegaSoft disbanded in 2000 with staff layoffs. Many of them were reassigned to Sega.com, a new company established to handle Sega's online presence in the United States.
Published games
Incomplete List
10Six
Alien Race
Bug Too!
Cosmopolitan Virtual Makeover
Cosmopolitan Virtual Makeover 2
Da Bomb
Emperor of the Fading Suns
Essence Virtual Makeover
Fatal Abyss
Flesh Feast
Golf: The Ultimate Collection
Lose Your Marbles
Grossology
Mr. Bones
Net Fighter
Obsidian
Plane Crazy
Puzzle Castle
Rocket Jockey
Science Fiction: The Ultimate Collection
Scud: The Disposable Assassin
Scud: Industrial Evolution
The Space Bar
Three Dirty Dwarves
Trampoline-Fractured Fairy Tales: A Frog Prince
Vigilance
Cancelled games
G.I. Ant
Heat Warz
Ragged Earth
Sacred Pools
Skies
Heat.net
Heat.net, stylized HEAT.NET, was an online PC gaming system produced by SegaSoft and launched in 1997 during Bernie Stolar's tenure as SEGA of America president. Heat.net hosted both Sega-published first- and second-party games, as well as popular third-party games of the era, such as Quake II and Baldur's Gate. Much like Kali, it also allowed users to play any IPX network-compatible game, regardless of whether or not it was designed for the Internet. Each supported game had its own chat lobby and game creation options
|
https://en.wikipedia.org/wiki/Platform-independent%20GUI%20library
|
A PIGUI (Platform Independent Graphical User Interface) package is a software library that a programmer uses to produce GUI code for multiple computer platforms. The package presents subroutines and/or objects (along with a programming approach) which are independent of the GUIs that the programmer is targeting. For software to qualify as PIGUI it must support several GUIs under at least two different operating systems (e.g. just supporting OPEN LOOK and X11 on two Unix boxes doesn't count). The package does not necessarily provide any additional portability features. Native look and feel is a desirable feature, but is not essential for PIGUIs.
Considerations
Using a PIGUI has limitations: the PIGUI deals only with the GUI aspects of the program, so the programmer is responsible for other portability issues; most PIGUIs slow the execution of the resulting code; and programmers are largely limited to the feature set provided by the PIGUI.
Dependence on a PIGUI can lead to project difficulties: fewer people know how to code any specific PIGUI than a platform-specific GUI, limiting the number of people who can give advanced help, and if the vendor goes out of business there may be no further support, including support for future OS enhancements (availability of source code can ease, but not eliminate, this problem). Also, bugs in any package, including the PIGUI, filter down to production code.
Alternative approaches
Web browsers offer a convenient alternative for many applications. Web browsers utilize HTML as a presentation layer for applications hosted on a central server, and web browsers are available for pretty much every platform. However, some applications do not lend themselves well to the web paradigm, requiring a local application with GUI capabilities. Where such applications must support multiple platforms, PIGUI can be more appropriate.
Instead of using a PIGUI, developers could partition their applications into GUI and non-GUI objects, and imp
|
https://en.wikipedia.org/wiki/Pradeep%20Sindhu
|
Pradeep Sindhu is an Indian-American business executive. He is the chairman, chief development officer (CDO) and co-founder of data center technology company Fungible. Previously, he co-founded Juniper Networks, where he was the chief scientist and served as CEO until 1996.
Biography
Sindhu holds a B.Tech. in electrical engineering (1974) from the Indian Institute of Technology, Kanpur, M.S. in electrical engineering (1976) from the University of Hawaiʻi, and a PhD (1982) in computer science from Carnegie Mellon University where he studied under Bob Sproull.
Work
Sindhu had worked at the Computer Science Lab of Xerox PARC for 11 years. Sindhu worked on design tools for very-large-scale integration (VLSI) of integrated circuits and high-speed interconnects for shared memory architecture multiprocessors.
Sindhu founded Juniper Networks along with Dennis Ferguson and Bjorn Liencres in February 1996 in California. The company was subsequently reincorporated in Delaware in March 1998 and went public on 25 June 1999.
Sindhu worked on the architecture, design, and development of the Juniper M40 data router.
Sindhu's earlier work subsequently influenced the architecture, design, and development of Sun Microsystems' first high-performance multiprocessor system family, which included systems such as the SS1000 and SC2000.
Sindhu is the founder and CEO of data center technology company Fungible.
References
External links
Pradeep Sindhu's entry on the Juniper Networks Website
Interview with Pradeep Sindhu
1953 births
Living people
20th-century American businesspeople
21st-century American businesspeople
American technology chief executives
American manufacturing businesspeople
Carnegie Mellon University alumni
Indian emigrants to the United States
IIT Kanpur alumni
Juniper Networks
Computer networking people
University of Hawaiʻi at Mānoa alumni
American chief technology officers
American people of Indian descent
American computer businesspeople
American chief executiv
|
https://en.wikipedia.org/wiki/High%20impedance
|
In electronics, high impedance means that a point in a circuit (a node) allows a relatively small amount of current through, per unit of applied voltage at that point. High impedance circuits are low current and potentially high voltage, whereas low impedance circuits are the opposite (low voltage and potentially high current). Numerical definitions of "high impedance" vary by application.
High impedance inputs are preferred on measuring instruments such as voltmeters or oscilloscopes. In audio systems, a high-impedance input may be required for use with devices such as crystal microphones or other devices with high internal impedance.
Analog electronics
In analog circuits a high impedance node is one that does not have any low impedance paths to any other nodes in the frequency range being considered. Since the terms low and high depend on context to some extent, it is possible in principle for some high impedance nodes to be described as low impedance in one context and high impedance in another. In general, a high impedance node (perhaps a signal source or amplifier input) carries relatively low currents for the voltages involved.
High impedance nodes have higher thermal noise voltages and are more prone to capacitive and inductive noise pick-up. When testing, they are often difficult to probe, as the impedance of an oscilloscope or multimeter can heavily affect the signal or voltage on the node. High impedance signal outputs are characteristic of some transducers (such as crystal pickups); they require a very high impedance load from the amplifier to which they are connected. Vacuum tube amplifiers and field-effect transistors more easily provide high-impedance inputs than bipolar junction transistor-based amplifiers, although current buffer circuits or step-down transformers can match a high-impedance source to a low-impedance amplifier input.
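As a hypothetical illustration of the probing problem described above (all component values are assumed for this sketch), the instrument's input impedance forms a voltage divider with the source impedance of the node being measured:

```python
# Hypothetical loading example: a 1 V source behind 1 Mohm of output impedance,
# read by meters with different input impedances (all values assumed).
V_source = 1.0          # volts, open-circuit
R_source = 1e6          # 1 Mohm source/output impedance

for R_meter in (1e6, 10e6, 1e9):
    # The meter's input impedance forms a voltage divider with the source impedance.
    V_read = V_source * R_meter / (R_source + R_meter)
    error_pct = 100 * (V_source - V_read) / V_source
    print(f"{R_meter:>10.0e} ohm input -> reads {V_read:.3f} V ({error_pct:.1f}% low)")
```

With a 1 Mohm input the reading is 50% low, with 10 Mohm about 9% low, and with 1 Gohm about 0.1% low, which is why high-impedance inputs are preferred on measuring instruments.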
Digital electronics
In digital circuits, a high impedance (also known as hi-Z, tri-stated, or floating) output is not being driven to any de
|
https://en.wikipedia.org/wiki/Instruments%20used%20in%20general%20surgery
|
There are many different surgical specialties, some of which require very specific kinds of surgical instruments to perform.
General surgery is a specialty focused on the abdominal contents, as well as the thyroid gland, and diseases involving skin, breasts, various soft tissues, trauma, peripheral vascular disease, hernias, and endoscopic procedures.
This page is dedicated specifically to listing surgical instruments used in general surgery.
Instruments can be classified in many ways - but broadly speaking, there are five kinds of instruments.
Cutting and dissecting instruments:
Scalpels, scissors, and saws are the most traditional.
Elevators can be both cutting and lifting/retracting.
Although the term dissection is broad, energy devices such as diathermy/cautery are often used as more modern alternatives.
Grasping or holding instruments:
Classically this included forceps and clamps predominantly.
Roughly, forceps can be divided into traumatic (tissue crushing) and atraumatic (tissue preserving, such as Debakey's)
Numerous examples are available for different purposes by field.
Hemostatic instruments:
This includes instruments utilized for the cessation of bleeding.
Artery forceps are a classic example in which bleeding is halted by direct clamping of a vessel.
Sutures are often used, aided by a needle holder.
Cautery and related instruments are used with increasing frequency in high resource countries.
Retractors:
Surgery is often considered to be largely about exposure.
A multitude of retractors exist to aid in exposing the body's cavities accessed during surgery.
These can broadly be handheld (often by a junior assistant) or self-retaining.
Elevators can be both cutting and lifting/retracting.
Tissue unifying instruments and materials:
This would include instruments that aid in tissue unification (such as needle holders or staple applicators)
And the materials themselves
Instruments used in surgery are:
References
Surgical instruments
Surg
|
https://en.wikipedia.org/wiki/Quick%20Response%20Engine
|
Quick Response Engine was a planning and scheduling program developed for the OS/400 platform. The program was developed by the Acacia Technologies division of Computer Associates in 1996. In 2002 the group was sold to SSA Global Technologies.
References
AS/400
Automated planning and scheduling
|
https://en.wikipedia.org/wiki/Legend%20of%20the%20Octopus
|
The Legend of the Octopus is a sports tradition during Detroit Red Wings home playoff games involving dead octopuses thrown onto the ice rink. The origins of the activity go back to the 1952 playoffs, when a National Hockey League team played two best-of-seven series to capture the Stanley Cup. Having eight arms, the octopus symbolized the number of playoff wins necessary for the Red Wings to win the Stanley Cup. The practice started on April 15, 1952, when Pete and Jerry Cusimano, brothers and storeowners in Detroit's Eastern Market, hurled an octopus into the rink of Olympia Stadium. The team swept the Toronto Maple Leafs and Montreal Canadiens en route to winning the championship.
History
Since 1952, the practice has persisted with each passing year. In one 1995 game, fans threw 36 octopuses, including a specimen weighing . The Red Wings' unofficial mascot is a purple octopus named Al, and during playoff runs, two of these mascots were also hung from the rafters of Joe Louis Arena, symbolizing the 16 wins now needed to take home the Stanley Cup. The practice has become such an accepted part of the team's lore, fans have developed various techniques and "octopus etiquette" for launching the creatures onto the ice.
On October 4, 1987, the last day of the regular Major League Baseball season, an octopus was thrown on the field in the top of the seventh inning at Tiger Stadium in Detroit as the Tigers defeated the Toronto Blue Jays, 1–0, clinching the AL East division championship. In May of that year, the Red Wings had defeated the Toronto Maple Leafs in the Stanley Cup playoffs.
At the final game at Joe Louis Arena, 35 octopuses were thrown onto the ice.
Twirling ban
Al Sobotka, the former head ice manager at Little Caesars Arena and one of the two Zamboni drivers, was the person who retrieved the thrown octopuses from the ice. When the Red Wings played at Joe Louis Arena, he was known to twirl an octopus above his head as he walked across the ice rink to the
|
https://en.wikipedia.org/wiki/Innumeracy%20%28book%29
|
Innumeracy: Mathematical Illiteracy and its Consequences is a 1988 book by mathematician John Allen Paulos about innumeracy (deficiency of numeracy) as the mathematical equivalent of illiteracy: incompetence with numbers rather than words. Innumeracy is a problem with many otherwise educated and knowledgeable people. While many people would be ashamed to admit they are illiterate, there is very little shame in admitting innumeracy by saying things like "I'm a people person, not a numbers person" or "I always hated math", and Paulos challenges whether that widespread cultural excusing of innumeracy is truly acceptable.
Paulos speaks mainly of the common misconceptions about, and inability to deal comfortably with, numbers, and the logic and meaning that they represent. He looks at real-world examples in stock scams, psychics, astrology, sports records, elections, sex discrimination, UFOs, insurance and law, lotteries, and drug testing. Paulos discusses innumeracy with quirky anecdotes, scenarios, and facts, encouraging readers in the end to look at their world in a more quantitative way. The book sheds light on the link between innumeracy and pseudoscience. For example, the fortune telling psychic's few correct and general observations are remembered over the many incorrect guesses. He also stresses the problem between the actual number of occurrences of various risks and popular perceptions of those risks happening. The problems of innumeracy come at a great cost to society. Topics include probability and coincidence, innumeracy in pseudoscience, statistics, and trade-offs in society. For example, the danger of getting killed in a car accident is much greater than terrorism and this danger should be reflected in how we allocate our limited resources.
Background
John Allen Paulos (born July 4, 1945) is an American professor of mathematics at Temple University in Pennsylvania. He is a writer and speaker on mathematics and the importance of mathematic
|
https://en.wikipedia.org/wiki/Sogitec%204X
|
The Sogitec 4X was a digital audio workstation developed by Giuseppe di Giugno at IRCAM (Paris) in the 1980s. It was the last large hardware processor built before the development of the ISPW; later solutions, such as Max/MSP, combined control and audio processing in the same computer. The 4X built on the achievements of the earlier Halaphone, being capable of timbre alteration and sound localization.
Nicolas Schöffer was one of the first users to build his composition method with this computer.
Sources
External links
Computer music
Digital signal processing
|
https://en.wikipedia.org/wiki/Jeffrey%20P.%20Buzen
|
Jeffrey Peter Buzen (born May 28, 1943) is an American computer scientist in system performance analysis best known for his contributions to queueing theory. His PhD dissertation (available as https://archive.org/details/DTIC_AD0731575) and his 1973 paper Computational algorithms for closed queueing networks with exponential servers have guided the study of queueing network modeling for decades.
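The central result of that 1973 paper is the convolution algorithm for computing the normalization constant of a closed product-form queueing network. Below is a minimal Python sketch of that algorithm, assuming load-independent single-server stations; the service-demand values in the example are illustrative, not taken from the paper.

```python
def buzen_G(demands, N):
    """Buzen's convolution algorithm.

    demands[m] is the relative service demand (visit ratio x mean service time)
    of station m; N is the number of circulating customers. Returns the list
    G[0..N] of normalization constants, computed in O(M*N) time and O(N) space.
    """
    G = [1.0] + [0.0] * N              # g(0, any) = 1 and g(n, 0 stations) = 0
    for X in demands:                  # fold the stations in one at a time
        for n in range(1, N + 1):
            # g(n, m) = g(n, m-1) + X_m * g(n-1, m), updated in place for increasing n
            G[n] += X * G[n - 1]
    return G

# Illustrative example: three stations, four circulating customers.
N = 4
G = buzen_G([0.4, 0.6, 0.8], N)
throughput = G[N - 1] / G[N]                       # system throughput
utilisations = [X * throughput for X in (0.4, 0.6, 0.8)]
print(throughput, utilisations)
```

Once the G values are known, standard performance measures such as throughput and per-station utilization follow from simple ratios of normalization constants, which is what made the algorithm practical for capacity planning tools.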
Born in Brooklyn, Buzen holds three degrees in Applied Mathematics -- an ScB (1965) from Brown University and, from Harvard University, an MS (1966) and a PhD (1971). He was a systems programmer at the National Institutes of Health in Bethesda, Maryland (1967–69), where his technique for optimizing the performance of a realtime biomedical computer system led to his first publication at a 1969 IEEE conference. After completing his PhD, he held concurrent appointments as a Lecturer in Computer Science at Harvard and as a Systems Engineer at Honeywell (1971-76). Some of his students at Harvard have gone on to become well known figures in computing. Buzen was PhD thesis advisor for Robert M. Metcalfe (1973), Turing Award winner and co-inventor of Ethernet, and for John M. McQuillan (1974), developer of the original adaptive routing algorithms used in the ARPAnet and the Internet. Buzen also co-taught (with Ugo Gagliardi) a two-semester graduate level course on Operating Systems (AM 251a/AM251br) that Microsoft co-founder Bill Gates took during his freshman year (1973-74). Two decades later, Gates wrote "It was the only 'computer course' I officially ever took at Harvard." (private email, July 24, 1995)
In addition to being an educator and a researcher, Buzen is also an entrepreneur. Along with fellow Harvard Applied Mathematics PhDs Robert Goldberg and Harold Schwenk, he co-founded BGS Systems in 1975. The company, which began operations in his basement, developed, marketed and supported software products for the performance management and capacity planning of enterprise computer
|
https://en.wikipedia.org/wiki/NOAA%27s%20Environmental%20Real-time%20Observation%20Network
|
The NOAA Environmental Real-time Observation Network (NERON) is a project to establish a nationwide network of high quality near real-time weather monitoring stations across the United States. A 20-mile by 20-mile grid has been established, with the hopes of having one observation system within each grid cell. Effort is being put forth by local National Weather Service (NWS) offices and other state climate groups to ensure that sites in the network meet important criteria. The network will be composed of existing, and in some cases upgraded, sites (ASOS, Cooperative Observer, etc.) as well as new sites being established for other local and state efforts. Many stations in New England and New York have already been installed.
See also
Citizen Weather Observer Program (CWOP)
Community Collaborative Rain, Hail and Snow Network (CoCoRaHS)
Mesonet
References
Improved Accuracy in Measuring Precipitation with the NERON Network in New England
Meteorological data and networks
|
https://en.wikipedia.org/wiki/Dynamic%20Kernel%20Module%20Support
|
Dynamic Kernel Module Support (DKMS) is a program/framework that enables generating Linux kernel modules whose sources generally reside outside the kernel source tree. The concept is to have DKMS modules automatically rebuilt when a new kernel is installed.
Framework
An essential feature of DKMS is that it automatically recompiles all DKMS modules if a new kernel version is installed. This allows drivers and devices outside of the mainline kernel to continue working after a Linux kernel upgrade.
Another benefit of DKMS is that it allows the installation of a new driver on an existing system, running an arbitrary kernel version, without any need for manual compilation or precompiled packages provided by the vendor.
DKMS was written by the Linux Engineering Team at Dell in 2003. It is included in many distributions, such as Ubuntu, Debian, Fedora, SUSE, Mageia and Arch. DKMS is free software released under the terms of the GNU General Public License (GPL) v2 or later.
DKMS supports both the rpm and deb package formats out of the box.
See also
Binary blob
References
External links
Building a kernel module using Dynamic Kernel Module Support (DKMS) on CentOS Wiki
Dynamic Kernel Module Support on ArchWiki
Dell
Linux kernel
|
https://en.wikipedia.org/wiki/Electroless%20nickel-phosphorus%20plating
|
Electroless nickel-phosphorus plating, also referred to as E-nickel, is a chemical process that deposits an even layer of nickel-phosphorus alloy on the surface of a solid substrate, like metal or plastic. The process involves dipping the substrate in a water solution containing nickel salt and a phosphorus-containing reducing agent, usually a hypophosphite salt. It is the most common version of electroless nickel plating (EN plating) and is often referred by that name. A similar process uses a borohydride reducing agent, yielding a nickel-boron coating instead.
Unlike electroplating, electroless nickel plating processes in general do not require passing an electric current through the bath and the substrate; the reduction of the metal cations in solution to metallic nickel is achieved by purely chemical means, through an autocatalytic reaction. This creates an even layer of metal regardless of the geometry of the surface – in contrast to electroplating, which suffers from uneven current density due to the effect of substrate shape on the electrical resistance of the bath and therefore on the current distribution within it. Moreover, electroless plating can be applied to non-conductive surfaces.
It has many industrial applications, from merely decorative to the prevention of corrosion and wear. It can be used to apply composite coatings, by suspending suitable powders in the bath.
Historical overview
The reduction of nickel salts to nickel metal by hypophosphite was accidentally discovered by Charles Adolphe Wurtz in 1844. In 1911, François Auguste Roux of L'Aluminium Français patented the process (using both hypophosphite and orthophosphite) for general metal plating.
However, Roux's invention does not seem to have received much commercial use. In 1946 the process was accidentally rediscovered by Abner Brenner and Grace E. Riddell of the National Bureau of Standards. They tried adding various reducing agents to an electroplating bath in order to prevent undesirable oxidation reactions at the anode. When they
|
https://en.wikipedia.org/wiki/Lattice-based%20access%20control
|
In computer security, lattice-based access control (LBAC) is a complex access control model based on the interaction between any combination of objects (such as resources, computers, and applications) and subjects (such as individuals, groups or organizations).
In this type of label-based mandatory access control model, a lattice is used to define the levels of security that an object may have and that a subject may have access to. The subject is only allowed to access an object if the security level of the subject is greater than or equal to that of the object.
Mathematically, the security level access may also be expressed in terms of the lattice (a partial order set) where each object and subject have a greatest lower bound (meet) and least upper bound (join) of access rights. For example, if two subjects A and B need access to an object, the security level is defined as the meet of the levels of A and B. In another example, if two objects X and Y are combined, they form another object Z, which is assigned the security level formed by the join of the levels of X and Y.
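A minimal sketch of these lattice operations in Python follows, assuming labels of the form (classification level, set of compartments); the level names and compartments used here are illustrative, not part of any particular standard.

```python
# Illustrative lattice-based labels: (classification level, set of compartments).
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def join(a, b):
    """Least upper bound: the label needed to dominate both a and b."""
    (la, ca), (lb, cb) = a, b
    return (max(la, lb, key=LEVELS.get), ca | cb)

def meet(a, b):
    """Greatest lower bound: the most access two labels have in common."""
    (la, ca), (lb, cb) = a, b
    return (min(la, lb, key=LEVELS.get), ca & cb)

def dominates(subject, obj):
    """A subject may access an object only if its label dominates the object's."""
    (ls, cs), (lo, co) = subject, obj
    return LEVELS[ls] >= LEVELS[lo] and co <= cs   # level check and compartment subset

subject = ("secret", {"crypto", "nuclear"})
obj = ("confidential", {"crypto"})
print(dominates(subject, obj))                                       # True
print(join(("secret", {"crypto"}), ("confidential", {"nuclear"})))   # ('secret', {'crypto', 'nuclear'})
print(meet(("secret", {"crypto"}), ("confidential", {"nuclear"})))   # ('confidential', set())
```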
LBAC is also known as a label-based access control (or rule-based access control) restriction as opposed to role-based access control (RBAC).
Lattice based access control models were first formally defined by Denning (1976); see also Sandhu (1993).
See also
References
Computer security models
Lattice theory
Access control
|
https://en.wikipedia.org/wiki/Taxonomy%20of%20Drosera
|
The genus Drosera was divided in 1994 by Seine & Barthlott into three subgenera and 11 sections on the basis of morphological characteristics.
Discovery and description of new species has been occurring since the 10th century, and as recently as the 1940s barely more than 80 species were known. In recent years, Australian Allen Lowrie has done extensive work in the genus, particularly in describing numerous new species from Australia. His classification of the genus was replaced by Jan Schlauer's work in 1996, although the correct classification is still disputed.
Drosera subg. Arcturia
Drosera arcturi
Drosera murfetii
Drosera stenopetala
Drosera subg. Bryastrum
D. sect. Bryastrum
Drosera pygmaea
Drosera sect. Lamprolepis
Drosera allantostigma
Drosera androsacea
Drosera barbigera
Drosera callistos
Drosera citrina
Drosera closterostigma
Drosera dichrosepala
Drosera echinoblastus
Drosera eneabba
Drosera enodes
Drosera gibsonii
Drosera grievei
Drosera helodes
Drosera hyperostigma
Drosera lasiantha
Drosera leucoblasta
Drosera leucostigma
Drosera mannii
Drosera microscapa
Drosera miniata
Drosera nitidula
Drosera nivea
Drosera occidentalis
Drosera omissa
Drosera oreopodion
Drosera paleacea
Drosera parvula
Drosera patens
Drosera pedicellaris
Drosera platystigma
Drosera pulchella
Drosera pycnoblasta
Drosera rechingeri
Drosera roseana
Drosera sargentii
Drosera scorpioides
Drosera sewelliae
Drosera silvicola
Drosera spilos
Drosera stelliflora
Drosera walyunga
Drosera subg. Coelophylla
Drosera glanduligera
Drosera subg. Drosera
Drosera sect. Arachnopus
Drosera hartmeyerorum
Drosera serpens
Drosera fragrans
Drosera aurantiaca
Drosera aquatica
Drosera barrettorum
Drosera nana
Drosera glabriscapa
Drosera margaritacea
Drosera indica
Drosera finlaysoniana
Drosera sect. Drosera
Drosera acaulis
Drosera admirabilis
Drosera affinis
Drosera afra
Drosera alba
Drosera aliciae
Drosera amazonica
Drosera anglica
Drosera arenicola
Drosera ascendens
Drosera bequaertii
Drosera biflo
|
https://en.wikipedia.org/wiki/Matrox%20G200
|
The G200 is a 2D, 3D, and video accelerator chip for personal computers designed by Matrox. It was released in 1998.
History
Matrox had been known for years as a significant player in the high-end 2D graphics accelerator market. Cards they produced were excellent Windows accelerators, and some of the later cards such as Millennium and Mystique excelled at MS-DOS as well. Matrox stepped forward in 1994 with their Impression Plus to innovate with one of the first 3D accelerator boards, but that card could only accelerate a very limited feature set (no texture mapping), and was primarily targeted at CAD applications.
Matrox, seeing the slow but steady growth in interest in 3D graphics on PCs with NVIDIA, Rendition, and ATI's new cards, began experimenting with 3D acceleration more aggressively and produced the Mystique. Mystique was their most feature-rich 3D accelerator in 1997, but still lacked key features including bilinear filtering. Then, in early 1998, Matrox teamed up with PowerVR to produce an add-in 3D board called Matrox m3D using the PowerVR PCX2 chipset. This board was one of the very few times that Matrox would outsource for their graphics processor, and was certainly a stop-gap measure to hold out until the G200 project was ready to go.
Overview
With the G200, Matrox aimed to combine its past products' competent 2D and video acceleration with a full-featured 3D accelerator. The G200 chip was used on several boards, most notably the Millennium G200 and Mystique G200. Millennium G200 received the new SGRAM memory and a faster RAMDAC, while Mystique G200 was cheaper and equipped with slower SDRAM memory but gained a TV-out port. Most G200 boards shipped standard with 8 MB RAM and were expandable to 16 MB with an add-on module. The cards also had ports for special add-on boards, such as the Rainbow Runner, which could add various functionality.
G200 was Matrox's first fully AGP-compliant graphics processor. While the earlier Millennium II had been adapte
|
https://en.wikipedia.org/wiki/Verhoeff%20algorithm
|
The Verhoeff algorithm is a checksum for error detection first published by Dutch mathematician Jacobus Verhoeff in 1969. It was the first decimal check digit algorithm which detects all single-digit errors, and all transposition errors involving two adjacent digits, which was at the time thought impossible with such a code.
The method was independently discovered by H. Peter Gumm in 1985, this time including a formal proof and an extension to any base.
Goals
Verhoeff had the goal of finding a decimal code—one where the check digit is a single decimal digit—which detected all single-digit errors and all transpositions of adjacent digits. At the time, supposed proofs of the nonexistence of these codes made base-11 codes popular, for example in the ISBN check digit.
His goals were also practical, and he based the evaluation of different codes on live data from the Dutch postal system, using a weighted points system for different kinds of error. The analysis broke the errors down into a number of categories: first, by how many digits are in error; for those with two digits in error, there are transpositions (ab → ba), twins (aa → bb), jump transpositions (abc → cba), phonetic errors (1a → a0), and jump twins (aba → cbc). Additionally there are omitted and added digits. Although the frequencies of some of these kinds of errors might be small, some codes might be immune to them in addition to the primary goals of detecting all singles and transpositions.
The phonetic errors in particular showed linguistic effects, because in Dutch, numbers are typically read in pairs; and also while 50 sounds similar to 15 in Dutch, 80 doesn't sound like 18.
Taking six-digit numbers as an example, Verhoeff reported how frequently each of these classes of error occurred.
Description
The general idea of the algorithm is to represent each of the digits (0 through 9) as elements of the dihedral group D5, the non-commutative group of order 10 formed by the symmetries of a regular pentagon. That is, map digits to elements of D5, manipulate these with the group operation, then map back into digits. Let this mapping be
Let
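In practice the dihedral-group arithmetic is carried out with three widely published lookup tables: the group multiplication table d, a position-dependent permutation table p, and an inverse table inv. A minimal Python sketch using these standard tables follows; the function names are illustrative.

```python
# Standard Verhoeff tables: d is the D5 multiplication table, p applies a
# position-dependent permutation to each digit, inv gives inverses in D5.
_d = [
    [0,1,2,3,4,5,6,7,8,9], [1,2,3,4,0,6,7,8,9,5], [2,3,4,0,1,7,8,9,5,6],
    [3,4,0,1,2,8,9,5,6,7], [4,0,1,2,3,9,5,6,7,8], [5,9,8,7,6,0,4,3,2,1],
    [6,5,9,8,7,1,0,4,3,2], [7,6,5,9,8,2,1,0,4,3], [8,7,6,5,9,3,2,1,0,4],
    [9,8,7,6,5,4,3,2,1,0],
]
_p = [
    [0,1,2,3,4,5,6,7,8,9], [1,5,7,6,2,8,3,0,9,4], [5,8,0,3,7,9,6,1,4,2],
    [8,9,1,6,0,4,3,5,2,7], [9,4,5,3,1,2,6,8,7,0], [4,2,8,6,5,7,3,9,0,1],
    [2,7,9,3,8,0,6,4,1,5], [7,0,4,6,9,1,3,2,5,8],
]
_inv = [0,4,3,2,1,5,6,7,8,9]

def verhoeff_checksum(number: str) -> int:
    """Return the Verhoeff check digit to append to the digit string `number`."""
    c = 0
    # Digits are processed right to left; position 0 is reserved for the check digit.
    for i, digit in enumerate(reversed(number)):
        c = _d[c][_p[(i + 1) % 8][int(digit)]]
    return _inv[c]

def verhoeff_validate(number_with_check: str) -> bool:
    """Validate a digit string whose last digit is the Verhoeff check digit."""
    c = 0
    for i, digit in enumerate(reversed(number_with_check)):
        c = _d[c][_p[i % 8][int(digit)]]
    return c == 0

print(verhoeff_checksum("236"))    # 3
print(verhoeff_validate("2363"))   # True
```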
|
https://en.wikipedia.org/wiki/Freivalds%27%20algorithm
|
Freivalds' algorithm (named after Rūsiņš Mārtiņš Freivalds) is a probabilistic randomized algorithm used to verify matrix multiplication. Given three n × n matrices A, B, and C, a general problem is to verify whether A × B = C. A naïve algorithm would compute the product A × B explicitly and compare term by term whether this product equals C. However, the best known matrix multiplication algorithm runs in O(n^2.37) time. Freivalds' algorithm utilizes randomization in order to reduce this time bound to O(kn^2)
with high probability. In O(kn^2) time the algorithm can verify a matrix product with probability of failure less than 2^−k.
The algorithm
Input
Three n × n matrices A, B, and C.
Output
Yes, if A × B = C; No, otherwise.
Procedure
Generate an n × 1 random 0/1 vector r.
Compute P = A × (B × r) − C × r.
Output "Yes" if P = 0; "No" otherwise (see the sketch after this list).
Error
If A × B = C, then the algorithm always returns "Yes". If A × B ≠ C, then the probability that the algorithm returns "Yes" is less than or equal to one half. This is called one-sided error.
By iterating the algorithm k times and returning "Yes" only if all iterations yield "Yes", a runtime of O(kn^2) and an error probability of at most 1/2^k are achieved.
Example
Suppose one wished to determine whether:
A random two-element vector with entries equal to 0 or 1 is selected say and used to compute:
This yields the zero vector, suggesting the possibility that AB = C. However, if in a second trial the vector is selected, the result becomes:
The result is nonzero, proving that in fact AB ≠ C.
There are four two-element 0/1 vectors, and half of them give the zero vector in this case ( and ), so the chance of randomly selecting these in two trials (and falsely concluding that AB = C) is 1/2^2 = 1/4. In the general case, the proportion of r yielding the zero vector may be less than 1/2, and a larger number of trials (such as 20) would be used, rendering the probability of error very small.
Error analysis
Let p equal the probability of error. We claim that if A × B = C, then p = 0, and if A × B ≠ C, then p ≤ 1/2.
Case A × B = C
This is re
|
https://en.wikipedia.org/wiki/Unix%20domain%20socket
|
A Unix domain socket, also known as a UDS or IPC socket (inter-process communication socket), is a data communications endpoint for exchanging data between processes executing on the same host operating system. It is also referred to by its address family, AF_UNIX. Valid socket types in the UNIX domain are:
SOCK_STREAM (compare to TCP) – for a stream-oriented socket
SOCK_DGRAM (compare to UDP) – for a datagram-oriented socket that preserves message boundaries (as on most UNIX implementations, UNIX domain datagram sockets are always reliable and don't reorder datagrams)
SOCK_SEQPACKET (compare to SCTP) – for a sequenced-packet socket that is connection-oriented, preserves message boundaries, and delivers messages in the order that they were sent
The Unix domain socket facility is a standard component of POSIX operating systems.
The API for Unix domain sockets is similar to that of an Internet socket, but rather than using an underlying network protocol, all communication occurs entirely within the operating system kernel. Unix domain sockets may use the file system as their address name space. (Some operating systems, like Linux, offer additional namespaces.) Processes reference Unix domain sockets as file system inodes, so two processes can communicate by opening the same socket.
In addition to sending data, processes may send file descriptors across a Unix domain socket connection using the sendmsg() and recvmsg() system calls. This allows the sending processes to grant the receiving process access to a file descriptor for which the receiving process otherwise does not have access. This can be used to implement a rudimentary form of capability-based security.
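A minimal Python sketch of a stream-oriented Unix domain socket follows; the socket path is an arbitrary choice for this example.

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/example_uds.sock"    # arbitrary file-system address for this sketch
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

# Server side: the socket appears as a file-system inode at SOCK_PATH.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

def serve_one():
    conn, _ = server.accept()
    with conn:
        conn.sendall(b"echo: " + conn.recv(1024))

t = threading.Thread(target=serve_one)
t.start()

# Client side: connect to the same path; no network protocol is involved.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(SOCK_PATH)
    client.sendall(b"hello over AF_UNIX")
    print(client.recv(1024).decode())   # -> echo: hello over AF_UNIX

t.join()
server.close()
os.unlink(SOCK_PATH)
```

On Python 3.9 and later, the file-descriptor passing described above is exposed as socket.send_fds() and socket.recv_fds(), which wrap the underlying sendmsg()/recvmsg() ancillary-data mechanism.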
See also
Network socket
Berkeley sockets
Pipeline
Netlink
References
External links
ucspi-unix, UNIX-domain socket client-server command-line tools
Unix sockets vs Internet sockets
Unix Sockets - Beej's Guide to Unix IPC
Network socket
Unix
|
https://en.wikipedia.org/wiki/Gauge%20factor
|
Gauge factor (GF) or strain factor of a strain gauge is the ratio of relative change in electrical resistance R to the mechanical strain ε. The gauge factor is defined as:
GF = (ΔR/R) / ε = (Δρ/ρ)/ε + 1 + 2ν
where
ε = strain = ΔL / L
ΔL = absolute change in length
L = original length
ν = Poisson's ratio
ρ = resistivity
ΔR = change in strain gauge resistance due to axial strain and lateral strain
R = unstrained resistance of strain gauge
Piezoresistive effect
It is a common misconception that the change in resistance of a strain gauge is based solely, or most heavily, on the geometric terms. This is true for some materials (Δρ = 0), and the gauge factor is then simply:
GF = 1 + 2ν
However, most commercial strain gauges utilise resistors made from materials that demonstrate a strong piezoresistive effect. The resistivity of these materials changes with strain, accounting for the (Δρ/ρ)/ε term of the defining equation above. In constantan strain gauges (the most commercially popular), the effect accounts for 20% of the gauge factor, but in silicon gauges the contribution of the piezoresistive term is much larger than the geometric terms: metal-foil gauges typically have a gauge factor of about 2, while semiconductor (silicon) gauges can reach 100 or more.
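As a hypothetical numerical illustration of the defining equation (the gauge factors and resistance below are typical assumed values, not measurements):

```python
# Hypothetical numbers: a 350-ohm metal-foil gauge (GF ~ 2.0) under 1000 microstrain.
GF = 2.0          # typical metal-foil gauge factor (assumed)
R = 350.0         # unstrained resistance in ohms (assumed)
strain = 1000e-6  # 1000 microstrain

delta_R = GF * strain * R      # rearranged from GF = (dR/R) / strain
print(delta_R)                 # 0.7 ohm, which is why bridge circuits are used to read it

# The same strain on a silicon gauge with GF ~ 100 (assumed) gives a far larger signal:
print(100.0 * strain * R)      # 35 ohms
```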
Effect of temperature
The definition of the gauge factor does not rely on temperature; however, the gauge factor only relates resistance to strain if there are no temperature effects. In practice, where changes in temperature or temperature gradients exist, the equation relating resistance to strain will also have a temperature term. The total effect is approximately:
ΔR/R = GF·ε + α·θ
where
α = temperature coefficient
θ = temperature change
References
Equations
|
https://en.wikipedia.org/wiki/Inner%20cell%20mass
|
The inner cell mass (ICM) or embryoblast (known as the pluriblast in marsupials) is a structure in the early development of an embryo. It is the mass of cells inside the blastocyst that will eventually give rise to the definitive structures of the fetus. The inner cell mass forms in the earliest stages of embryonic development, before implantation into the endometrium of the uterus. The ICM is entirely surrounded by the single layer of trophoblast cells of the trophectoderm.
Further development
The physical and functional separation of the inner cell mass from the trophectoderm (TE) is a special feature of mammalian development and is the first cell lineage specification in these embryos. Following fertilization in the oviduct, the mammalian embryo undergoes a relatively slow round of cleavages to produce an eight-cell morula. Each cell of the morula, called a blastomere, increases surface contact with its neighbors in a process called compaction. This results in a polarization of the cells within the morula, and further cleavage yields a blastocyst of roughly 32 cells. In mice, about 12 internal cells comprise the new inner cell mass and 20 – 24 cells comprise the surrounding trophectoderm. There is variation between species of mammals as to the number of cells at compaction with bovine embryos showing differences related to compaction as early as 9-15 cells and in rabbits not until after 32 cells. There is also interspecies variation in gene expression patterns in early embryos.
The ICM and the TE will generate distinctly different cell types as implantation starts and embryogenesis continues. Trophectoderm cells form extraembryonic tissues, which act in a supporting role for the embryo proper. Furthermore, these cells pump fluid into the interior of the blastocyst, causing the formation of a polarized blastocyst with the ICM attached to the trophectoderm at one end (see figure). This difference in cellular localization causes the ICM cells exposed to the fl
|
https://en.wikipedia.org/wiki/Mathematical%20sociology
|
Mathematical sociology or the sociology of mathematics is an interdisciplinary field of research concerned both with the use of mathematics within sociological research as well as research into the relationships that exist between maths and society.
Because of this, mathematical sociology can have a diverse meaning depending on the authors in question and the kind of research being carried out. This creates contestation over whether mathematical sociology is a derivative of sociology, an intersection of the two disciplines, or a discipline in its own right. This is a dynamic, ongoing academic development that leaves mathematical sociology sometimes blurred and lacking in uniformity, presenting grey areas and need for further research into developing its academic merit.
History
Starting in the early 1940s, Nicolas Rashevsky, and subsequently in the late 1940s, Anatol Rapoport and others, developed a relational and probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During the late 1940s, formulas were derived that connected local parameters such as closure of contacts – if A is linked to both B and C, then there is a greater than chance probability that B and C are linked to each other – to the global network property of connectivity.
Moreover, acquaintanceship is a positive tie, but what about negative ties such as animosity among persons? To tackle this problem, graph theory, which is the mathematical study of abstract representations of networks of points and lines, can be extended to include these two types of links and thereby to create models that represent both positive and negative sentiment relations, which are represented as signed graphs. A signed graph is called balanced if the product of the signs of all relations in every cycle (links in every graph cycle) is positive. Through formalization by mathematician Frank Harary, this work produced the fundamental theore
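The balance criterion for signed graphs described above can be checked directly. A minimal Python sketch follows, using the equivalent two-colouring characterisation of balance (positive ties join vertices in the same group, negative ties join vertices in different groups); the example triads are illustrative.

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Check structural balance of a signed graph on vertices 0..n-1 by trying
    to 2-colour it: '+' edges must join same-coloured vertices, '-' edges
    differently coloured ones."""
    adj = [[] for _ in range(n)]
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    colour = [None] * n
    for start in range(n):
        if colour[start] is not None:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = colour[u] if sign == '+' else 1 - colour[u]
                if colour[v] is None:
                    colour[v] = want
                    queue.append(v)
                elif colour[v] != want:
                    return False        # some cycle has an odd number of '-' edges
    return True

# Triangle with two friendships and one animosity: unbalanced.
print(is_balanced(3, [(0, 1, '+'), (1, 2, '+'), (0, 2, '-')]))   # False
# All-positive triangle: balanced.
print(is_balanced(3, [(0, 1, '+'), (1, 2, '+'), (0, 2, '+')]))   # True
```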
|
https://en.wikipedia.org/wiki/Model%20transformation%20language
|
A model transformation language in systems and software engineering is a language intended specifically for model transformation.
Overview
The notion of model transformation is central to model-driven development. A model transformation, which is essentially a program which operates on models, can be written in a general-purpose programming language, such as Java. However, special-purpose model transformation languages can offer advantages, such as syntax that makes it easy to refer to model elements. For writing bidirectional model transformations, which maintain consistency between two or more models, a specialist bidirectional model transformation language is particularly important, because it can help avoid the duplication that would result from writing each direction of the transformation separately.
Currently, most model transformation languages are being developed in academia. The OMG has standardised a family of model transformation languages called QVT, but the field is still immature.
There are ongoing debates regarding the benefits of specialised model transformation languages, compared to the use of general-purpose programming languages (GPLs) such as Java. While GPLs have advantages in terms of more widely-available practitioner knowledge and tool support, the specialised transformation languages do provide more declarative facilities and more powerful specialised features to support model transformations.
Available transformation languages
ATL : a transformation language developed by the INRIA
Beanbag : an operation-based language for establishing consistency over data incrementally
GReAT : a transformation language available in the GME
Epsilon family : a model management platform that provides transformation languages for model-to-model, model-to-text, update-in-place, migration and model merging transformations.
F-Alloy : a DSL reusing part of the Alloy syntax and allowing the concise specification of efficiently computable m
|
https://en.wikipedia.org/wiki/Industrial-grade%20prime
|
Industrial-grade primes (the term is apparently due to Henri Cohen) are integers for which primality has not been certified (i.e. rigorously proven), but they have undergone probable prime tests such as the Miller–Rabin primality test, which has a positive, but negligible, failure rate, or the Baillie–PSW primality test, which no composites are known to pass.
Industrial-grade primes are sometimes used instead of certified primes in algorithms such as RSA encryption, which require the user to generate large prime numbers. Certifying the primality of large numbers (over 100 digits for instance) is significantly harder than showing they are industrial-grade primes. The latter can be done almost instantly with a failure rate so low that it is highly unlikely to ever fail in practice. In other words, the number is believed to be prime with very high, but not absolute, confidence.
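A minimal sketch of how such a number might be generated, using repeated rounds of the Miller–Rabin test; the round count and bit length below are illustrative choices, not prescribed values.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probable-prime test. A composite may pass each round
    with probability at most 1/4, so 40 rounds leaves a failure
    probability below 4**-40 (negligible in practice)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witnessed: definitely composite
    return True                   # industrial-grade prime: probably prime

# Generate a 512-bit industrial-grade prime (illustrative size).
while True:
    candidate = random.getrandbits(512) | (1 << 511) | 1
    if is_probable_prime(candidate):
        break
```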
References
Cryptographic algorithms
Prime numbers
|
https://en.wikipedia.org/wiki/List%20of%20engineering%20schools%20in%20Massachusetts
|
This is a list of BS degree granting engineering schools in Massachusetts, arranged in alphabetical order.
See also
List of colleges and universities in Massachusetts
List of colleges and universities in metropolitan Boston
List of systems engineering at universities
References
External links
Complete Guide to Engineering Schools, USNews.com, accessed April 16, 2006.
Massachusetts education-related lists
Massachusetts
Massachusetts, Engineering
|
https://en.wikipedia.org/wiki/Product-family%20engineering
|
Product-family engineering (PFE), also known as product-line engineering, is based on the ideas of "domain engineering" created by the Software Engineering Institute, a term coined by James Neighbors in his 1980 dissertation at University of California, Irvine. Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product-family engineering.
Product-family engineering can be defined as a method that creates an underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variabilities. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate on products in the family. Product-family engineering is conceptually similar to the widespread use of vehicle platforms in the automotive industry.
Product-family engineering is a relatively new approach to the creation of new products. It focuses on the process of engineering new products in such a way that it is possible to reuse product components and apply variability with decreased costs and time. Product-family engineering is all about reusing components and structures as much as possible.
Several studies have proven that using a product-family engineering approach for product development can have several benefits. Here is a list of some of them:
Higher productivity
Higher quality
Faster time-to-market
Lower labor needs
The Nokia case mentioned below also illustrates these benefits.
Overall process
The product family engineering process consists of several phases. The three main phases are:
Phase 1: Product management
Phase 2: Domain engineering
Phase 3: Product engineering
The process has been modeled on a higher abstraction level. This has the advantage that it can be applied to all kinds of product lines and families, not on
|
https://en.wikipedia.org/wiki/Chyron%20Corporation
|
The Chyron Corporation, formerly ChyronHego Corporation, headquartered in Melville, New York, is a company that specializes in broadcast graphics creation, playout, and real-time data visualization for live television, news, weather, and sports production. Chyron's graphics offerings include hosted services for graphics creation and order management, on-air graphics systems, channel branding, weather graphics, graphics asset management, clip servers, social media and second screen applications, touchscreen graphics, telestration, virtual graphics, and player tracking.
The company was founded in 1966 as Systems Resources Corporation. In its early days it was renamed "Chiron" after the centaur Chiron in Greek mythology. In the 1970s it pioneered the development of broadcast titling and graphics systems. Use of its graphics generators by the major New York City–based US television networks ABC, NBC, and eventually CBS, integrated text and graphics into news and sports coverage on broadcast television and later on cable TV.
By the 1980s, Chyron had captured a 70% market share in its field. For a time it was the most profitable company on Long Island. In 1983 it achieved a market capitalization of $112 million, high at the time for a small high-tech firm before the age of dot-com and the Internet.
Corporate history
Chyron's graphics generator technology was originated by Systems Resources Corporation, founded in 1966 by Francis Mechner and engineer Eugene Leonard as equal partners and sole directors and shareholders. Mechner had just sold his educational technology company Basic Systems, Inc. to Xerox Corporation; and Leonard had sold Digitronics Corporation, of which he was president. Mechner and Leonard previously worked together in the late 1950s at Schering Corporation, creating a computerized data collection and analysis system for its behavioral psychopharmacology laboratory.
Mechner provided the capital for Systems Resources Corporation's first five years of operatio
|
https://en.wikipedia.org/wiki/Gemfire
|
Gemfire (released in Japan as Royal Blood or ロイヤルブラッド Roiyaru Buraddo, Super Royal Blood or スーパーロイヤルブラッド Sūpā Roiyaru Buraddo in its Super Famicom version) is a medieval war game for MSX, Nintendo Entertainment System, Super NES, FM Towns, Mega Drive/Genesis, DOS, and later Microsoft Windows, developed by Koei. The object in the game is to unify a fictional island by force. Players use infantry, cavalry, and archers, as well as fantasy units such as magicians, dragons or gargoyles in order to capture the castle needed to control that particular territory.
A sequel, Royal Blood II, was released in the Japan market for Windows.
Plot
The game takes place in the fictitious Isle of Ishmeria. Once upon a time, six wizards, each wielding a unique brand of magic, used their powers to protect the island and maintain peace. This was disrupted when they were collectively challenged by a Fire Dragon, summoned forth by a wizard intent on plunging the country into darkness.
The sea-dwelling dragon of peace known as the Pastha charged the six wizards with the task of fighting back. They succeeded, sealing the Fire Dragon away into a ruby at the top of a crown, and themselves became the six jewels around the crown's base. The crown, called Gemfire, was a symbol of utmost power and authority.
When Gemfire fell into the hands of the current King of Ishmeria, Eselred, he sought to abuse the object's power, using it to embark on a tyrannical reign, instilling fear within his oppressed subjects. Ishmeria fell into despair as his power flourished. Finally, his young daughter, Princess Robyn, could not bear to watch her father's grievous misdeeds any longer — she seized Gemfire and pried the six wizard gems loose, causing them to shoot upward into the sky and circle briefly overhead before scattering themselves to different parts of Ishmeria. When a furious Eselred learned of Robyn's actions, he had her locked away in a tower; but it was futile as the deed had already been do
|
https://en.wikipedia.org/wiki/Clifford%20theory
|
In mathematics, Clifford theory, introduced by , describes the relation between representations of a group and those of a normal subgroup.
Alfred H. Clifford
Alfred H. Clifford proved the following result on the restriction of finite-dimensional irreducible representations from a group G to a normal subgroup N of finite index:
Clifford's theorem
Theorem. Let π: G → GL(n,K) be an irreducible representation with K a field. Then the restriction of π to N breaks up into a direct sum of irreducible representations of N of equal dimensions. These irreducible representations of N lie in one orbit for the action of G by conjugation on the equivalence classes of irreducible representations of N. In particular the number of pairwise nonisomorphic summands is no greater than the index of N in G.
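A standard illustration (not part of the original statement) may help make the theorem concrete: take G = S_3 and N = A_3, sketched below in LaTeX.

```latex
% Illustrative example: G = S_3, N = A_3 (a normal subgroup of index 2).
% The 2-dimensional irreducible representation \pi of S_3 restricts to
% A_3 \cong \mathbb{Z}/3 as a direct sum of two 1-dimensional representations:
\[
  \pi\big|_{A_3} \;\cong\; \chi_{\omega} \oplus \chi_{\bar\omega},
  \qquad \chi_{\omega}\big((1\,2\,3)\big) = \omega = e^{2\pi i/3}.
\]
% The summands have equal dimension, they form a single orbit under
% conjugation by S_3 (a transposition interchanges \omega and \bar\omega),
% and their number t = 2 does not exceed the index [S_3 : A_3] = 2.
```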
Clifford's theorem yields information about the restriction of a complex irreducible character of a finite group G to a normal subgroup N. If μ is a complex character of N, then for a fixed element g of G, another character, μ(g), of N may be constructed by setting
μ(g)(n) = μ(gng−1)
for all n in N. The character μ(g) is irreducible if and only if μ is. Clifford's theorem states that if χ is a complex irreducible character of G, and μ is an irreducible character of N with ⟨χN, μ⟩N ≠ 0,
then
χN = e(μ(g1) + μ(g2) + ... + μ(gt)),
where e and t are positive integers, and each gi is an element of G. The integers e and t both divide the index [G:N]. The integer t is the index of a subgroup of G, containing N, known as the inertial subgroup of μ. This is
{g in G : μ(g) = μ}
and is often denoted by IG(μ).
The elements gi may be taken to be representatives of all the right cosets of the subgroup IG(μ) in G.
In fact, the integer e divides the index [IG(μ) : N],
though the proof of this fact requires some use of Schur's theory of projective representations.
Proof of Clifford's theorem
The proof of Clifford's theorem is best explained in terms of modules (and the module-theoretic version works for irreducible modular representations). Let K be a field, V be an irreducible K[G]-module, VN be
|
https://en.wikipedia.org/wiki/List%20of%20platform-independent%20GUI%20libraries
|
This is a list of notable library packages implementing a platform-independent graphical user interface library (PIGUI). These can be used to develop software that can be ported to multiple computing platforms with no change to its source code.
In C, C++
In other languages
No longer available or supported
See also
List of widget toolkits
List of rich web application frameworks
Further reading
Richard Chimera, Evaluation of Platform Independent User Interface Builders, March 1993, Human-Computer Interaction Laboratory University of Maryland
References
Computer libraries
Cross-platform software
|
https://en.wikipedia.org/wiki/Gunfire%20locator
|
A gunfire locator or gunshot detection system is a system that detects and conveys the location of gunfire or other weapon fire using acoustic, vibration, optical, or potentially other types of sensors, as well as a combination of such sensors. These systems are used by law enforcement, security, military, government offices, schools and businesses to identify the source and, in some cases, the direction of gunfire and/or the type of weapon fired. Most systems possess three main components:
An array of microphones or sensors (accelerometers, infrared detectors, etc.), either co-located or geographically dispersed
A processing unit
A user-interface that displays gunfire alerts
In general categories, there are environmental packaged systems for primarily outdoor use (both military and civilian/urban) which are high cost and then also lower cost consumer/industrial packaged systems for primarily indoor use. Systems used in urban settings integrate a geographic information system so the display includes a map and address location of each incident. Some indoor gunfire detection systems utilize detailed floor plans with detector location overlay to show shooter locations on an app or web based interface.
History
Determination of the origin of gunfire by sound was conceived before World War I where it was first used operationally (see: Artillery sound ranging).
In 1990, a unique algorithm was used as a starting point: Metravib defence, working with the Délégation Générale pour l’Armement (DGA) – the French defence procurement agency – studied the acoustic signature of submarines. The DGA and the Section Technique de l’Armée de Terre (STAT), the French Army’s engineering section, subsequently commissioned Metravib defence to find a solution for shot detection, a way to assist soldiers and peacekeepers who come under fire from snipers without knowing precisely where the shots were coming from.
In the early 1990s, the areas of East Palo Alto and eastern Menlo Park, California, were
|
https://en.wikipedia.org/wiki/Data%20integration
|
Data integration involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. Data integration encourages collaboration between internal as well as external users. The data being integrated must be received from a heterogeneous database system and transformed to a single coherent data store that provides synchronous data across a network of files for clients. A common use of data integration is in data mining when analyzing and extracting information from existing databases that can be useful for Business information.
History
Issues with combining heterogeneous data sources, often referred to as information silos, under a single query interface have existed for some time. In the early 1980s, computer scientists began designing systems for interoperability of heterogeneous databases. The first data integration system driven by structured metadata was designed at the University of Minnesota in 1991, for the Integrated Public Use Microdata Series (IPUMS). IPUMS used a data warehousing approach, which extracts, transforms, and loads data from heterogeneous sources into a unique view schema so data from different sources become compatible. By making thousands of population databases interoperable, IPUMS demonstrated the feasibility of large-scale data integration. The data warehouse approach offers a tightly coupled architecture because the data are already physically reconciled in a single queryable repository, so it usually takes lit
|
https://en.wikipedia.org/wiki/Change%20management%20%28engineering%29
|
The change request management process in systems engineering is the process of requesting, determining attainability, planning, implementing, and evaluating of changes to a system. Its main goals are to support the processing and traceability of changes to an interconnected set of factors.
Introduction
There is considerable overlap and confusion between change request management, change control and configuration management. The definition below does not yet integrate these areas.
Change request management has been embraced for its ability to deliver benefits by improving the affected system and thereby satisfying "customer needs," but has also been criticized for its potential to confuse and needlessly complicate change administration. In some cases, notably in the Information Technology domain, more funds and work are put into system maintenance (and change request management) than into the initial creation of a system. Typical investment by organizations during initial implementation of large ERP systems is 15 to 20 percent of overall budget.
In the same vein, Hinley describes two of Lehman's laws of software evolution:
The law of continuing change: Systems that are used must change, or else automatically become less useful.
The law of increasing complexity: Through changes, the structure of a system becomes ever more complex, and more resources are required to simplify it.
Change request management is also of great importance in the field of manufacturing, which is confronted with many changes due to increasing and worldwide competition, technological advances and demanding customers. Because many systems tend to change and evolve as they are used, the problems of these industries are experienced to some degree in many others.
Notes: In the process below, it is arguable that the change committee should be responsible not only for accept/reject decisions, but also prioritization, which influences how change requests are batched for processing.
The proce
|
https://en.wikipedia.org/wiki/Computer%20Automated%20Measurement%20and%20Control
|
Computer Automated Measurement and Control (CAMAC) is a standard bus and modular-crate electronics standard for data acquisition and control used in particle detectors for nuclear and particle physics and in industry. The bus allows data exchange between plug-in modules (up to 24 in a single crate) and a crate controller, which then interfaces to a PC or to a VME-CAMAC interface.
The standard was originally defined by the ESONE Committee as standard EUR 4100 in 1972, and covers the mechanical, electrical, and logical elements of a parallel bus ("dataway") for the plug-in modules. Several standards have been defined for multiple crate systems, including the Parallel Branch Highway definition and Serial Highway definition. Vendor-specific Host/Crate interfaces have also been built.
The CAMAC standard encompasses IEEE standards:
583 The base standard
683 Block transfer specifications (Q-stop and Q-scan)
596 Parallel Branch Highway systems
595 Serial highway system
726 Real-time Basic for CAMAC
675 Auxiliary crate controller specification/support
758 FORTRAN subroutines for CAMAC.
Within the crate, modules are addressed by slot (geographical addressing). The left-most 22 slots are available for application modules while the right-most two slots are dedicated to a crate controller. Within a slot the standard defines 16 subaddresses (0–15). A slot is commanded by the controller with one of 32 function codes (0–31). Of these function codes, 0–7 are read functions and will transfer data to the controller from the addressed module, while 16–23 are write function codes which will transfer data from the controller to the module.
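As an illustration of this addressing scheme (not taken from any CAMAC driver library; the class and field names below are hypothetical), host software might represent a dataway command by its station, subaddress and function code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NAFCommand:
    """Hypothetical representation of a CAMAC dataway command:
    station (slot) N, subaddress A, function code F."""
    station: int      # N: 1-24 (the right-most two slots hold the controller)
    subaddress: int   # A: 0-15
    function: int     # F: 0-31

    def __post_init__(self):
        if not 1 <= self.station <= 24:
            raise ValueError("station (N) must be 1-24")
        if not 0 <= self.subaddress <= 15:
            raise ValueError("subaddress (A) must be 0-15")
        if not 0 <= self.function <= 31:
            raise ValueError("function (F) must be 0-31")

    @property
    def is_read(self) -> bool:   # F0-F7: data flows module -> controller
        return 0 <= self.function <= 7

    @property
    def is_write(self) -> bool:  # F16-F23: data flows controller -> module
        return 16 <= self.function <= 23

# Example: a read (F0) of subaddress 3 of the module in slot 5.
cmd = NAFCommand(station=5, subaddress=3, function=0)
```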
In addition to functions that address the module, the following global functions are defined:
I – Crate inhibit
Z – Crate zero
C – Crate clear
The original standard was capable of one 24-bit data transfer every microsecond. Later a revision to the standard was released to support short cycles which allow a transfer every 450 ns. A follow on
|
https://en.wikipedia.org/wiki/Haynes%E2%80%93Shockley%20experiment
|
In semiconductor physics, the Haynes–Shockley experiment was an experiment that demonstrated that diffusion of minority carriers in a semiconductor could result in a current. The experiment was reported in a short paper by Haynes and Shockley in 1948, with a more detailed version published by Shockley, Pearson, and Haynes in 1949.
The experiment can be used to measure carrier mobility, carrier lifetime, and diffusion coefficient.
In the experiment, a piece of semiconductor gets a pulse of holes, for example, as induced by voltage or a short laser pulse.
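As a minimal numerical illustration of the mobility measurement (assuming a uniform applied field; all values below are made up for the sketch, and the Einstein relation is used for the diffusion coefficient):

```python
# Time-of-flight sketch: the drift mobility follows from the measured
# transit time of the injected minority-carrier pulse.
sample_length = 1.0e-2        # m, distance between the field electrodes
drift_distance = 0.5e-2       # m, injection point to collection point
applied_voltage = 10.0        # V, applied across the sample
transit_time = 50e-6          # s, measured arrival time of the pulse peak

electric_field = applied_voltage / sample_length        # V/m (uniform field)
drift_velocity = drift_distance / transit_time          # m/s
mobility = drift_velocity / electric_field              # m^2/(V*s)

# Einstein relation D = mu * k_B T / q, with k_B T / q ~ 25.85 mV at 300 K.
diffusion_coefficient = mobility * 0.02585              # m^2/s

print(f"mobility ~ {mobility:.3e} m^2/(V*s), D ~ {diffusion_coefficient:.3e} m^2/s")
```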
Equations
To see the effect, we consider an n-type semiconductor with length d. We are interested in determining the mobility of the carriers, the diffusion constant and the relaxation time. In the following, we reduce the problem to one dimension.
The equations for electron and hole currents are:
j_e = e0 μ_e n E + e0 D_e ∂n/∂x
j_p = e0 μ_p p E − e0 D_p ∂p/∂x
where the js are the current densities of electrons (e) and holes (p), the μs the charge carrier mobilities, E is the electric field, n and p the number densities of charge carriers, the Ds are diffusion coefficients, and x is position. The first term of the equations is the drift current, and the second term is the diffusion current.
Derivation
We consider the continuity equation:
Subscript 0s indicate equilibrium concentrations. The electrons and the holes recombine with the carrier lifetime τ.
We define
so the upper equations can be rewritten as:
In a simple approximation, we can consider the electric field to be constant between the left and right electrodes and neglect ∂E/∂x. However, as electrons and holes diffuse at different speeds, the material has a local electric charge, inducing an inhomogeneous electric field which can be calculated with Gauss's law:
where ε is permittivity, ε0 the permittivity of free space, ρ is charge density, and e0 elementary charge.
Next, change variables by the substitutions:
and suppose δ to be much smaller than . The two initial equations write:
Using the Einstein rel
|
https://en.wikipedia.org/wiki/Steven%20Strogatz
|
Steven Henry Strogatz (born August 13, 1959) is an American mathematician and the Susan and Barton Winokur Distinguished Professor for the Public Understanding of Science and Mathematics at Cornell University.
He is known for his work on nonlinear systems, including contributions to the study of synchronization in dynamical systems, and for his research in a variety of areas of applied mathematics, including mathematical biology and complex network theory.
Strogatz is the host of Quanta Magazine's The Joy of Why podcast. He previously hosted The Joy of x podcast, named after his book of the same name.
Education
Strogatz attended high school at Loomis Chaffee from 1972 to 1976. He then attended Princeton University, graduating summa cum laude with a B.A. in mathematics. Strogatz completed his senior thesis, titled "The mathematics of supercoiled DNA: an essay in geometric biology", under the supervision of Frederick J. Almgren, Jr. Strogatz then studied as a Marshall Scholar at Trinity College, Cambridge, from 1980 to 1982, and then received a Ph.D. in applied mathematics from Harvard University in 1986 for his research on the dynamics of the human sleep-wake cycle. He completed his postdoc under Nancy Kopell at Boston University.
Career
After spending three years as a National Science Foundation Postdoctoral Fellow at Harvard and Boston University, Strogatz joined the faculty of the department of mathematics at MIT in 1989. His research on dynamical systems was recognized with a Presidential Young Investigator Award from the National Science Foundation in 1990. In 1994 he moved to Cornell where he is a professor of mathematics. From 2007 to 2023 he was the Jacob Gould Schurman Professor of Applied Mathematics, and in 2023 he was named the inaugural holder of the Susan and Barton Winokur Distinguished Professorship for the Public Understanding of Science and Mathematics. From 2004 to 2010, he was also on the external faculty of the Santa Fe Institute.
Rese
|
https://en.wikipedia.org/wiki/Representation%20theorem
|
In mathematics, a representation theorem is a theorem that states that every abstract structure with certain properties is isomorphic to another (abstract or concrete) structure.
Examples
Algebra
Cayley's theorem states that every group is isomorphic to a permutation group.
Representation theory studies properties of abstract groups via their representations as linear transformations of vector spaces.
Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets.
A variant, Stone's representation theorem for distributive lattices, states that every distributive lattice is isomorphic to a sublattice of the power set lattice of some set.
Another variant, Stone's duality, states that there exists a duality (in the sense of an arrow-reversing equivalence) between the categories of Boolean algebras and that of Stone spaces.
The Poincaré–Birkhoff–Witt theorem states that every Lie algebra embeds into the commutator Lie algebra of its universal enveloping algebra.
Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero embeds into the Lie algebra of endomorphisms of some finite-dimensional vector space.
Birkhoff's HSP theorem states that every model of an algebra A is the homomorphic image of a subalgebra of a direct product of copies of A.
In the study of semigroups, the Wagner–Preston theorem provides a representation of an inverse semigroup S, as a homomorphic image of the set of partial bijections on S, and the semigroup operation given by composition.
Category theory
The Yoneda lemma provides a full and faithful limit-preserving embedding of any category into a category of presheaves.
Mitchell's embedding theorem for abelian categories realises every small abelian category as a full (and exactly embedded) subcategory of a category of modules over some ring.
Mostowski's collapsing theorem states that every well-founded extensional structure is isomorphic t
|
https://en.wikipedia.org/wiki/Hole%20punching%20%28networking%29
|
Hole punching (or sometimes punch-through) is a technique in computer networking for establishing a direct connection between two parties in which one or both are behind firewalls or behind routers that use network address translation (NAT). To punch a hole, each client connects to an unrestricted third-party server that temporarily stores external and internal address and port information for each client. The server then relays each client's information to the other, and using that information each client tries to establish direct connection; as a result of the connections using valid port numbers, restrictive firewalls or routers accept and forward the incoming packets on each side.
Hole punching does not require any knowledge of the network topology to function. ICMP hole punching, UDP hole punching and TCP hole punching respectively use Internet Control Message, User Datagram and Transmission Control Protocols.
Overview
Networked devices with public or globally accessible IP addresses can create connections between one another easily. Clients with private addresses may also easily connect to public servers, as long as the client behind a router or firewall initiates the connection. However, hole punching (or some other form of NAT traversal) is required to establish a direct connection between two clients that both reside behind different firewalls or routers that use network address translation (NAT).
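A highly simplified sketch of the client side of UDP hole punching, following the exchange described in the next paragraph; the rendezvous host name, port and text wire format are assumptions made for illustration, not a standard protocol.

```python
import socket

# Hypothetical unrestricted rendezvous server that relays each client's
# public endpoint to the other client once both have registered.
RENDEZVOUS = ("rendezvous.example.org", 3478)

def punch(session_id: str) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 0))   # any local port; the NAT assigns the public mapping

    # 1. Register with the server; this outbound packet creates the NAT
    #    mapping that the peer's packets will later traverse.
    sock.sendto(f"REGISTER {session_id}".encode(), RENDEZVOUS)

    # 2. The server replies with the peer's public endpoint, e.g.
    #    "PEER 203.0.113.7 40123" (format assumed for this sketch).
    data, _ = sock.recvfrom(1024)
    _, peer_ip, peer_port = data.decode().split()
    peer = (peer_ip, int(peer_port))

    # 3. Send datagrams to the peer's public endpoint; the outbound traffic
    #    "punches the hole" so the peer's incoming packets are accepted.
    for _ in range(5):
        sock.sendto(b"PUNCH", peer)

    return sock

# sock = punch("session-42")  # then exchange data directly with the peer
```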
Both clients initiate a connection to an unrestricted server, which notes endpoint and session information including public IP and port along with private IP and port. The firewalls also note the endpoints in order to allow responses from the server to pass back through. The server then sends each client's endpoint and session information to the other client, or peer. Each client tries to connect to its peer through the specified IP address and port that the peer's firewall has opened for the server. The new connection attempt punches a hole in the client's fir
|
https://en.wikipedia.org/wiki/Werner%20Fenchel
|
Moritz Werner Fenchel (; 3 May 1905 – 24 January 1988) was a mathematician known for his contributions to geometry and to optimization theory. Fenchel established the basic results of convex analysis and nonlinear optimization theory which would, in time, serve as the foundation for nonlinear programming. A German-born Jew and early refugee from Nazi suppression of intellectuals, Fenchel lived most of his life in Denmark. Fenchel's monographs and lecture notes are considered influential.
Biography
Early life and education
Fenchel was born on 3 May 1905 in Berlin, Germany; his younger brother was the Israeli film director and architect Heinz Fenchel.
Fenchel studied mathematics and physics at the University of Berlin between 1923 and 1928. He wrote his doctorate thesis in geometry (Über Krümmung und Windung geschlossener Raumkurven) under Ludwig Bieberbach.
Professorship in Germany
From 1928 to 1933, Fenchel was Professor E. Landau's Assistant at the University of Göttingen. During a one-year leave (on Rockefeller Fellowship) between 1930 and 1931, Fenchel spent time in Rome with Levi-Civita, as well as in Copenhagen with Harald Bohr and Tommy Bonnesen.
He visited Denmark again in 1932.
Professorship in exile
Fenchel taught at Göttingen until 1933, when the Nazi discrimination laws led to mass-firings of Jews.
Fenchel emigrated to Denmark sometime between April and September 1933, ultimately obtaining a position at the University of Copenhagen. In December 1933, Fenchel married fellow German refugee mathematician Käte Sperling.
When Germany occupied Denmark, Fenchel and roughly eight-thousand other Danish Jews received refuge in Sweden, where he taught (between 1943 and 1945) at the Danish School in Lund. After the Allied powers' liberation of Denmark, Fenchel returned to Copenhagen.
Professorship postwar
In 1946, Fenchel was elected a member of the Royal Danish Academy of Sciences and Letters.
On leave between 1949 and 1951, Fenchel taught in the U.S
|
https://en.wikipedia.org/wiki/Common%20Data%20Representation
|
Common Data Representation (CDR) is used to represent structured or primitive data types passed as arguments or results during remote invocations on Common Object Request Broker Architecture (CORBA) distributed objects.
It enables clients and servers written in different programming languages to work together. For example, it translates little-endian to big-endian. It assumes prior agreement on type, so no information is given with data representation in messages.
External links
Official CDR spec (see PDF page 4).
ACE Library provides CDR streams.
Common Object Request Broker Architecture
Data serialization formats
|
https://en.wikipedia.org/wiki/Greater%20sciatic%20foramen
|
The greater sciatic foramen is an opening (foramen) in the posterior human pelvis. It is formed by the sacrotuberous and sacrospinous ligaments. The piriformis muscle passes through the foramen and occupies most of its volume. The greater sciatic foramen is wider in women than in men.
Structure
It is bounded as follows:
anterolaterally by the greater sciatic notch of the ilium.
posteromedially by the sacrotuberous ligament.
inferiorly by the sacrospinous ligament and the ischial spine.
superiorly by the anterior sacroiliac ligament.
Function
The piriformis, which exits the pelvis through the foramen, occupies most of its volume.
The following structures also exit the pelvis through the greater sciatic foramen:
See also
Lesser sciatic foramen
References
External links
Anatomy
Bones of the pelvis
|
https://en.wikipedia.org/wiki/Nick%20DeWolf
|
Nicholas DeWolf (July 12, 1928 – April 16, 2006) was co-founder of Teradyne, a Boston, Massachusetts-based manufacturer of automatic test equipment. He founded the company in 1960 with Alex d'Arbeloff, a classmate at MIT.
Early life and education
DeWolf was born in Philadelphia, Pennsylvania and graduated with an S.B. in EECS from MIT in 1948.
Career
During his eleven years as CEO of Teradyne, DeWolf is credited with designing more than 300 semiconductor and other test systems, including the J259, the world's first computer-operated integrated circuit tester.
After leaving Teradyne in 1971, DeWolf moved to Aspen, Colorado, where in 1979, he teamed with artist Travis Fulton to create Aspen's "dancing fountain". DeWolf also designed a computer system without hard disks or fans; this system (the ON! computer) booted up in seconds, a much faster time than even the computers of today.
Awards
1979: Semiconductor Equipment and Materials International SEMI Award for North America.
2001: Telluride Tech Festival Award of Technology, Boulder, CO.
2005: inducted into the Aspen Hall of Fame with wife Maggie DeWolf.
Photography
DeWolf was also a keen and prolific photographer. His son-in-law and archivist, Steve Lundeen, is scanning DeWolf's complete archive and making it available on Flickr.
Death
DeWolf died in Aspen, Colorado at the age of 77.
Quotes
"What the customer demands is last year's model, cheaper. To find out what the customer needs you have to understand what the customer is doing as well as he understands it. Then you build what he needs and you educate him to the fact that he needs it."
"To select a component, size a product, design a system or plan a new company, first test the extremes and then have the courage to resist what is popular and the wisdom to choose what is best".
References
External links
The photographic archive of Nick DeWolf on Flickr
'Nicholas DeWolf: The Father of ATE (Automatic Test Equipment)' biography at The Chip History C
|
https://en.wikipedia.org/wiki/Semipredicate%20problem
|
In computer programming, a semipredicate problem occurs when a subroutine intended to return a useful value can fail, but the signalling of failure uses an otherwise valid return value. The problem is that the caller of the subroutine cannot tell what the result means in this case.
Example
The division operation yields a real number, but fails when the divisor is zero. If we were to write a function that performs division, we might choose to return 0 on this invalid input. However, if the dividend is 0, the result is 0 too. This means there is no number we can return to uniquely signal attempted division by zero, since all real numbers are in the range of division.
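A minimal sketch of the ambiguity, together with one common resolution (returning a value that lies outside the function's normal result range); the function names are chosen for illustration only.

```python
from typing import Optional

def divide_sentinel(a: float, b: float) -> float:
    """Signals failure with 0.0, but 0.0 is also a legitimate result
    (e.g. divide_sentinel(0, 5)), so the caller cannot tell them apart."""
    if b == 0:
        return 0.0
    return a / b

def divide_optional(a: float, b: float) -> Optional[float]:
    """Uses a value outside the result range (None) to signal failure,
    avoiding the semipredicate problem at the cost of an explicit check."""
    if b == 0:
        return None
    return a / b

assert divide_sentinel(0, 5) == divide_sentinel(5, 0)            # ambiguous
assert divide_optional(0, 5) == 0.0 and divide_optional(5, 0) is None
```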
Practical implications
Early programmers handled potentially exceptional cases such as division using a convention requiring the calling routine to verify the inputs before calling the division function. This had two problems: first, it greatly encumbered all code that performed division (a very common operation); second, it violated the Don't repeat yourself and encapsulation principles, the former of which suggests eliminating duplicated code, and the latter of which suggests that data-associated code be contained in one place (in this division example, the verification of input was done separately). For a computation more complicated than division, it could be difficult for the caller to recognize invalid input; in some cases, determining input validity may be as costly as performing the entire computation. The target function could also be modified and would then expect different preconditions than would the caller; such a modification would require changes in every place where the function was called.
Solutions
The semipredicate problem is not universal among functions that can fail.
Using a custom convention to interpret return values
If the range of a function does not cover the entire space corresponding to the data type of the function's return value, a value known to be impossible under norm
|
https://en.wikipedia.org/wiki/Engineering%20%26%20Technology
|
Engineering & Technology (E+T) is a science, engineering and technology magazine published by Redactive on behalf of IET Services, a wholly owned subsidiary of the Institution of Engineering and Technology (IET), a registered charity in the United Kingdom. The magazine is issued 6 times per year in print and online. The E+T website is also updated regularly with news stories. E+T is distributed to the 154,000 plus membership of the IET around the world.
The magazine was launched in April 2008 as a result of the merger between the Institution of Electrical Engineers and the Institution of Incorporated Engineers on 31 March 2006. Prior to the merger, both organisations had their own membership magazine, the IEE's monthly IEE Review and the IIE's Engineering Technology. Engineering & Technology is an amalgamation of the two, and was initially published monthly. Alongside this, members also received one of seven other monthly magazines published by the IET relating to a field of the subject of their choice, with the option to purchase any of the other titles. In January 2008, the IET merged these seven titles into E+T to make a nearly fortnightly magazine with a larger pagination, providing all members with one magazine covering all topics. In January 2011 the frequency was reduced to 12 times per year and to 11 times per year in 2015 and 10 times per year in 2017.
E+T journalists have been shortlisted and won multiple magazine industry awards, including those presented by the British Society of Magazine Editors, Trade And Business Publications International and the Professional Publishers Association.
References
External links
Official website
Professional and trade magazines
Engineering magazines
Science and technology magazines published in the United Kingdom
2008 establishments in the United Kingdom
Magazines established in 2008
Institution of Engineering and Technology
|
https://en.wikipedia.org/wiki/Ogden%20tables
|
The Ogden tables are a set of statistical tables and other information for use in court cases in the UK. Their purpose is to make it easier to calculate future losses in personal injury and fatal accident cases.
The tables take into account life expectancy and provide a range of discount rates from -2.0% to 3.0% in steps of 0.5%. The discount rate is fixed by the Lord Chancellor under section 1 of the Damages Act 1996; as of 15 July 2019, this rate is -0.25%. The discount rate in Northern Ireland is -1.5%.
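As a simplified illustration of the discounting the tables perform (ignoring the mortality adjustments that the actuarial tables build in), a term-certain multiplier at a given discount rate can be sketched as follows; the figures are illustrative only.

```python
# At a negative discount rate such as -0.25%, the multiplier for a fixed
# term exceeds the number of years; at a positive rate it falls below it.

def term_certain_multiplier(years: int, rate: float) -> float:
    """Present value of a loss of 1 per year for a fixed term,
    treated as paid at the end of each year."""
    return sum((1 + rate) ** -t for t in range(1, years + 1))

print(term_certain_multiplier(20, -0.0025))  # ~ 20.5
print(term_certain_multiplier(20, 0.025))    # ~ 15.6
```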
The full and official name of the tables is Actuarial Tables with explanatory notes for use in Personal Injury and Fatal Accident Cases, but the unofficial name became common parlance following the Civil Evidence Act 1995, where this shorthand name was used as a subheading – Sir Michael Ogden QC having been the chairman of the Working Party for the first four editions.
History
The tables were first published in 1984.
Section 10 of the Civil Evidence Act 1995 authorised their use in evidence in the UK "for the purpose of assessing, in an action for personal injury, the sum to be awarded as general damages for future pecuniary loss". They were first used by the House of Lords in Wells v. Wells in July 1999.
The 7th edition of the tables made changes to the discount rate range (previously 0.0% to 5.0% revised to -2.0% to 3.0%) to allow for a revision of the rate by the Lord Chancellor (currently under consideration as at 24 October 2011) and to provide for the implications of the case of Helmot v. Simon. The 8th edition was published in 2020 and updated in August 2022.
Using the Ogden tables
There are 28 tables of data in the Ogden Tables. Table 1 (Males) and Table 2 (Females) are for life expectancy and loss for life. Tables 3 to 14 are for loss of earnings up to various retirement ages. Tables 15 to 26 are for loss of pension from various retirement ages. Table 27 is for discounting for a time in the future and Table 28 is for a recurring loss over a period
|
https://en.wikipedia.org/wiki/Linear%20dynamical%20system
|
Linear dynamical systems are dynamical systems whose evolution functions are linear. While dynamical systems, in general, do not have closed-form solutions, linear dynamical systems can be solved exactly, and they have a rich set of mathematical properties. Linear systems can also be used to understand the qualitative behavior of general dynamical systems, by calculating the equilibrium points of the system and approximating it as a linear system around each such point.
Introduction
In a linear dynamical system, the variation of a state vector (an N-dimensional vector denoted x) equals a constant matrix (denoted A) multiplied by x. This variation can take two forms: either as a flow, in which x varies continuously with time t according to dx/dt = Ax, or as a mapping, in which x varies in discrete steps according to x_{m+1} = Ax_m. These equations are linear in the following sense: if x(t) and y(t) are two valid solutions, then so is any linear combination of the two solutions, e.g., z(t) = αx(t) + βy(t), where α and β are any two scalars. The matrix A need not be symmetric.
Linear dynamical systems can be solved exactly, in contrast to most nonlinear ones. Occasionally, a nonlinear system can be solved exactly by a change of variables to a linear system. Moreover, the solutions of (almost) any nonlinear system can be well-approximated by an equivalent linear system near its fixed points. Hence, understanding linear systems and their solutions is a crucial first step to understanding the more complex nonlinear systems.
Solution of linear dynamical systems
If the initial vector x(0) is aligned with a right eigenvector v_k of the matrix A, the dynamics are simple: dx/dt = λ_k x, where λ_k is the corresponding eigenvalue; the solution of this equation is x(t) = x(0) e^{λ_k t}, as may be confirmed by substitution.
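A minimal numerical sketch of this eigenvector solution for the continuous-time (flow) case, using the notation above; the matrix and initial state are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# Solve dx/dt = A x by eigendecomposition and check against the matrix
# exponential x(t) = exp(A t) x(0).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 1.5

eigvals, eigvecs = np.linalg.eig(A)              # columns: right eigenvectors
coeffs = np.linalg.solve(eigvecs, x0)            # expand x0 in the eigenbasis
x_t = eigvecs @ (coeffs * np.exp(eigvals * t))   # each mode evolves as e^{lambda t}

assert np.allclose(np.real(x_t), expm(A * t) @ x0)
```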
If A is diagonalizable, then any vector in an N-dimensional space can be represented by a linear combination of the right and left eigenvectors of the matrix A.
Therefore, the general solution for x(t) is
a linear combination of the individual solutions for the rig
|
https://en.wikipedia.org/wiki/Quantum%20Turing%20machine
|
A quantum Turing machine (QTM) or universal quantum computer is an abstract machine used to model the effects of a quantum computer. It provides a simple model that captures all of the power of quantum computation—that is, any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.
Quantum Turing machines can be related to classical and probabilistic Turing machines in a framework based on transition matrices. That is, a matrix can be specified whose product with the matrix representing a classical or probabilistic machine provides the quantum probability matrix representing the quantum machine. This was shown by Lance Fortnow.
Informal sketch
A way of understanding the quantum Turing machine (QTM) is that it generalizes the classical Turing machine (TM) in the same way that the quantum finite automaton (QFA) generalizes the deterministic finite automaton (DFA). In essence, the internal states of a classical TM are replaced by pure or mixed states in a Hilbert space; the transition function is replaced by a collection of unitary matrices that map the Hilbert space to itself.
That is, a classical Turing machine is described by a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩.
For a three-tape quantum Turing machine (one tape holding the input, a second tape holding intermediate calculation results, and a third tape holding output):
The set of states is replaced by a Hilbert space.
The tape alphabet symbols are likewise replaced by a Hilbert space (usually a different Hilbert space than the set of states).
The blank symbol is an element of the Hilbert space.
The input and output symbols are usually taken as a discrete set, as in the classical system; thus, neither the input nor output to a quantum machine need be a quantum system itself.
The transition function is a generalization of a transition monoid and is understood to be a collection of unitary matrices that are automorphism
|
https://en.wikipedia.org/wiki/Lattice%20reduction
|
In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice.
Nearly orthogonal
One measure of nearly orthogonal is the orthogonality defect. This compares the product of the lengths of the basis vectors with the volume of the parallelepiped they define. For perfectly orthogonal basis vectors, these quantities would be the same.
Any particular basis of n vectors may be represented by a matrix B, whose columns are the basis vectors b_i. In the fully dimensional case where the number of basis vectors is equal to the dimension of the space they occupy, this matrix is square, and the volume of the fundamental parallelepiped is simply the absolute value of the determinant of this matrix, |det(B)|. If the number of vectors is less than the dimension of the underlying space, then the volume is √(det(BᵀB)). For a given lattice Λ, this volume is the same (up to sign) for any basis, and hence is referred to as the determinant of the lattice or lattice constant d(Λ).
The orthogonality defect is the product of the basis vector lengths divided by the parallelepiped volume:
δ(B) = (‖b_1‖ ‖b_2‖ ⋯ ‖b_n‖) / d(Λ).
From the geometric definition it may be appreciated that δ(B) ≥ 1, with equality if and only if the basis is orthogonal.
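A small sketch, under the definitions above, computing the orthogonality defect of two bases of the same two-dimensional lattice; the example basis vectors are arbitrary.

```python
import numpy as np

def orthogonality_defect(B: np.ndarray) -> float:
    """Product of column lengths divided by the lattice volume sqrt(det(B^T B))."""
    lengths = np.linalg.norm(B, axis=0)
    volume = np.sqrt(np.linalg.det(B.T @ B))
    return float(np.prod(lengths) / volume)

# A "skewed" basis and an orthogonal basis of the same 2-D lattice (Z^2).
B_skewed = np.array([[1.0, 101.0],
                     [0.0, 1.0]])       # columns are basis vectors
B_reduced = np.array([[1.0, 0.0],
                      [0.0, 1.0]])

print(orthogonality_defect(B_skewed))    # ~ 101  (far from orthogonal)
print(orthogonality_defect(B_reduced))   # 1.0   (the minimum possible)
```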
If the lattice reduction problem is defined as finding the basis with the smallest possible defect, then the problem is NP-hard. However, there exist polynomial time algorithms to find a basis with defect
where c is some constant depending only on the number of basis vectors and the dimension of the underlying space (if different). This is a good enough solution in many practical applications.
In two dimensions
For a basis consisting of just two vectors, there is a simple and efficient method of reduction closely analogous to the Euclidean algorithm for the greatest common divisor of two integers. As with
|
https://en.wikipedia.org/wiki/Pointwise
|
In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value of some function. An important class of pointwise concepts are the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise.
Pointwise operations
Formal definition
A binary operation o: Y × Y → Y on a set Y can be lifted pointwise to an operation O on the set of all functions from X to Y as follows: given two functions f: X → Y and g: X → Y, define the function O(f, g): X → Y by
O(f, g)(x) = o(f(x), g(x)) for all x in X.
Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity.
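A minimal sketch of this lifting; the helper name is hypothetical.

```python
import operator

def lift_pointwise(op):
    """Lift a binary operation o on Y to an operation O on functions X -> Y:
    O(f, g)(x) = o(f(x), g(x)) for every x in X."""
    def lifted(f, g):
        return lambda x: op(f(x), g(x))
    return lifted

add = lift_pointwise(operator.add)   # pointwise sum of functions
f = lambda x: x * x
g = lambda x: 3 * x
h = add(f, g)                        # h(x) = f(x) + g(x)
assert h(2) == 4 + 6
```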
Examples
For example, the pointwise sum and product of two functions f, g: X → ℝ are given by (f + g)(x) = f(x) + g(x) and (f ⋅ g)(x) = f(x) ⋅ g(x), and multiplication by a scalar λ by (λ ⋅ f)(x) = λ ⋅ f(x).
See also pointwise product, and scalar.
An example of an operation on functions which is not pointwise is convolution.
Properties
Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain.
If is some algebraic structure, the set of all functions to the carrier set of can be turned into an algebraic structure of the same type in an analogous way.
Componentwise operations
Componentwise operations are usually defined on vectors, where vectors are elements of the set K^n for some natural number n and some field K. If we denote the i-th component of any vector v as v_i, then componentwise addition is (u + v)_i = u_i + v_i.
Componentwise operations can be defined on matrices. Matrix addition, where (A + B)_{ij} = A_{ij} + B_{ij}, is a componentwise operation, while matrix multiplication is not.
A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector v corresponds to the function f such that f(i) = v_i, and any componentwise operation on vectors is the pointwise operation on functions corresponding to those vectors.
Pointwise relations
In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B ca
|
https://en.wikipedia.org/wiki/Dbx%20Model%20700%20Digital%20Audio%20Processor
|
The dbx Model 700 Digital Audio Processor was a professional audio ADC/DAC combination unit, which digitized a stereo analog audio input into a bitstream, which was then encoded and encapsulated in an analog composite video signal, for recording to tape using a VCR as a transport. Unlike other similar pieces of equipment like the Sony PCM-F1, the Model 700 used a technique called Companded Predictive Delta Modulation, rather than the now-common pulse-code modulation. At the time of its introduction in the mid-1980s the device was the first commercial product to use this method, although it had been proposed in the 1960s and prototyped in the late '70s.
History
Unlike the many digital recording formats that would follow (e.g. DAT and ADAT), the Model 700 had no capability for storage on its own, and relied on an analog recording medium supplied by the user. In general, any high-quality VHS VCR would do, although 3/4" U-matic or Beta decks could also have been used. If viewed on a monitor, the output stream of a Model 700 looked like analog TV "static" or noise, with slight black bars running down either side.
Early on, the machine was hailed as "the best recording device you can buy," and Stereophile Magazine reviewed it positively. Many people liked the format because it offered more dynamic range than analog tape, but without the "hard clipping" inherent in PCM audio recorders of the time. The Model 700 had been designed from the beginning to have many 'tape-like' characteristics, including "soft saturation," and at a time when most professional and amateur recordists were used to analog tape, this was considered a significant feature. It also offered 14 dB more dynamic range than 44.1 kHz/16b audio, and because of its very high sample rate (644 kHz), it did not contain the same anti-aliasing filters necessary in PCM recorders at the time, which were thought to cause undesirable harmonic interference.
The device sold for $4,600 in 1986, and that was without a
|
https://en.wikipedia.org/wiki/PostBQP
|
In computational complexity theory, PostBQP is a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error (in the sense that the algorithm is correct at least 2/3 of the time on all inputs).
Postselection is not considered to be a feature that a realistic computer (even a quantum one) would possess, but nevertheless postselecting machines are interesting from a theoretical perspective.
Removing either one of the two main features (quantumness, postselection) from PostBQP gives the following two complexity classes, both of which are subsets of PostBQP:
BQP is the same as PostBQP except without postselection
BPPpath is the same as PostBQP except that instead of quantum, the algorithm is a classical randomized algorithm (with postselection)
The addition of postselection seems to make quantum Turing machines much more powerful: Scott Aaronson proved PostBQP is equal to PP, a class which is believed to be relatively powerful, whereas BQP is not known even to contain the seemingly smaller class NP. Using similar techniques, Aaronson also proved that small changes to the laws of quantum computing would have significant effects. As specific examples, under either of the two following changes, the "new" version of BQP would equal PP:
if we broadened the definition of 'quantum gate' to include not just unitary operations but linear operations, or
if the probability of measuring a basis state was proportional to |α|^p instead of |α|^2, where α is the state's amplitude, for any even integer p > 2.
Basic properties
In order to describe some of the properties of PostBQP we fix a formal way of describing quantum postselection. Define a quantum algorithm to be a family of quantum circuits (specifically, a uniform circuit family). We designate one qubit as the postselection qubit P and another as the output qubit Q. Then PostBQP is defined by postselecting upon the event that the postselection qubit is |1⟩. Explicitly, a
|
https://en.wikipedia.org/wiki/Britalus%20rotary%20engine
|
The Britalus rotary engine was invented in 1982 by Kenneth W. Porter, P.E., M.S.A.E, of King County, Washington. It operates on a modified Brayton cycle, but with continuous pulsed combustion, similar to that of a gas turbine. It can burn most commonly available hydrocarbon fuels and features the high compression ratio (14:1) typical of a Diesel cycle. The engine is patented, US Patent 4336686 of 1982.
Overview
The main feature of the Britalus engine is an enclosed barrel-shaped cylinder block carrying compressor and expander pistons and rotating within a compact three-lobed stationary housing. The pistons carry rollers that follow an internal cam, causing the reciprocal motion of the pistons for compression and expansion. The rotor is statically and dynamically balanced and thereby operates with minimal vibration. A sleeve pinion gear on the rear of the rotor connects to a layshaft spur gear and provides the output shaft drive to the connected load.
Another distinguishing feature is the stationary slotted sleeve valve enclosing the single combustion chamber, and its co-axial slotted sleeve carried by the rotating cylinder barrel. This feature enables the charging air to enter the combustion chamber and allows evacuation later of the products of combustion to the expander cylinders and pistons.
Similar engines
Similar external combustion rotary engines have been patented by:
Everett F. Irwin, Patents US4458480 of 1984 and US4531360 of 1985
Tigane Rein, Patent WO2003087563 of 2003
References
Porter, K. W., "Constant Volume Continuous External Combustion Rotary Engine with Piston Compressor and Expander", U.S. Patent 4,336,686, June 29, 1982.
Porter, K. W., A Modified-Brayton Cycle Pulse Turbine Engine - AIAA-1988-3067 - AIAA/ASME/SAE/ASEE 24th Joint Propulsion Conference, 1988
Proposed engines
External combustion engines
|
https://en.wikipedia.org/wiki/Sonic%20artifact
|
In sound and music production, sonic artifact, or simply artifact, refers to sonic material that is accidental or unwanted, resulting from the editing or manipulation of a sound.
Types
Because there are always technical restrictions in the way a sound can be recorded (in the case of acoustic sounds) or designed (in the case of synthesised or processed sounds), sonic errors often occur. These errors are termed artifacts (or sound/sonic artifacts), and may be pleasing or displeasing. A sonic artifact is sometimes a type of digital artifact, and in some cases is the result of data compression (not to be confused with dynamic range compression, which also may create sonic artifacts).
Often an artifact is deliberately produced for creative reasons, for example to introduce a change in timbre of the original sound or to create a sense of cultural or stylistic context. A well-known example is the overdriving of an electric guitar or electric bass signal to produce a clipped, distorted guitar tone or fuzz bass.
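A minimal sketch of deliberately introducing such an artifact by overdriving a signal; the tone, gain and clipping method are arbitrary choices for illustration.

```python
import numpy as np

# Overdrive a sine tone: hard clipping flattens the waveform peaks, adding
# the harmonics heard as distortion; tanh gives a softer, tape-like saturation.
sample_rate = 44100
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
clean = np.sin(2 * np.pi * 440.0 * t)          # 440 Hz tone

gain = 8.0
overdriven = np.clip(gain * clean, -1.0, 1.0)  # hard clipping ("fuzz")
saturated = np.tanh(gain * clean)              # soft saturation
```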
Editing processes that deliberately produce artifacts often involve technical experimentation. A good example of the deliberate creation of sonic artifacts is the addition of grainy pops and clicks to a recent recording in order to make it sound like a vintage vinyl record.
Flanging and distortion were originally regarded as sonic artifacts; as time passed they became a valued part of pop music production methods. Flanging is added to electric guitar and keyboard parts. Other magnetic tape artifacts include wow, flutter, saturation, hiss, noise, and print-through.
It is valid to consider the genuine surface noise such as pops and clicks that are audible when a vintage vinyl recording is played back or recorded onto another medium as sonic artifacts, although not all sonic artifacts must contain in their meaning or production a sense of "past", more so a sense of "by-product". Other vinyl record artifacts include turntable rumble, ticks, crackles and groove ec
|
https://en.wikipedia.org/wiki/Engineering%20ethics
|
Engineering ethics is the field of applied ethics and system of moral principles that apply to the practice of engineering. The field examines and sets the obligations by engineers to society, to their clients, and to the profession. As a scholarly discipline, it is closely related to subjects such as the philosophy of science, the philosophy of engineering, and the ethics of technology.
Background and origins
Up to the 19th century and growing concerns
As engineering rose as a distinct profession during the 19th century, engineers saw themselves as either independent professional practitioners or technical employees of large enterprises. There was considerable tension between the two sides as large industrial employers fought to maintain control of their employees.
In the United States growing professionalism gave rise to the development of four founding engineering societies: The American Society of Civil Engineers (ASCE) (1851), the American Institute of Electrical Engineers (AIEE) (1884), the American Society of Mechanical Engineers (ASME) (1880), and the American Institute of Mining Engineers (AIME) (1871). ASCE and AIEE were more closely identified with the engineer as learned professional, where ASME, to an extent, and AIME almost entirely, identified with the view that the engineer is a technical employee.
Even so, at that time ethics was viewed as a personal rather than a broad professional concern.
Turn of the 20th century and turning point
When the 19th century drew to a close and the 20th century began, there had been a series of significant structural failures, including some spectacular bridge failures, notably the Ashtabula River Railroad Disaster (1876), the Tay Bridge Disaster (1879), and the Quebec Bridge collapse (1907). These had a profound effect on engineers and forced the profession to confront shortcomings in technical and construction practice, as well as ethical standards.
One response was the development of formal codes of ethics by three of the four found
|
https://en.wikipedia.org/wiki/Interconnect%20bottleneck
|
The interconnect bottleneck comprises limits on integrated circuit (IC) performance due to connections between components instead of their internal speed.
In 2006 it was predicted to be a "looming crisis" by 2010.
Improved performance of computer systems has been achieved, in large part, by downscaling the IC minimum feature size. This allows the basic IC building block, the transistor, to operate at a higher frequency, performing more computations per second. However, downscaling of the minimum feature size also results in tighter packing of the wires on a microprocessor, which increases parasitic capacitance and signal propagation delay. Consequently, the delay due to the communication between the parts of a chip becomes comparable to the computation delay itself. This phenomenon, known as an “interconnect bottleneck”, is becoming a major problem in high-performance computer systems.
This interconnect bottleneck can be addressed by using optical interconnects to replace the long metallic interconnects. Such hybrid optical/electronic interconnects promise better performance even with larger designs. Optics is widely used in long-distance communications, but it has not yet been widely adopted for chip-to-chip or on-chip interconnections (at the centimeter or micrometer scale), because such links are not yet industry-manufacturable, owing to higher cost and the lack of fully mature technologies. As optical interconnections move from computer network applications to chip-level interconnections, new requirements for high connection density and alignment reliability become critical for the effective utilization of these links. There are still many materials, fabrication, and packaging challenges in integrating optical and electronic technologies.
See also
Bus (computing)
Interconnects (integrated circuits)
Network-on-chip
Optical network on chip
Optical interconnect
Photonics
Von Neumann architecture
References
Digital electronics
Optical communications
Fi
|
https://en.wikipedia.org/wiki/W-algebra
|
In conformal field theory and representation theory, a W-algebra is an associative algebra that generalizes the Virasoro algebra. W-algebras were introduced by Alexander Zamolodchikov, and the name "W-algebra" comes from the fact that Zamolodchikov used the letter W for one of the elements of one of his examples.
Definition
A W-algebra is an associative algebra that is generated by the modes of a finite number of meromorphic fields W^{(h)}(z), including the energy-momentum tensor T(z) = W^{(2)}(z). For h \neq 2, W^{(h)}(z) is a primary field of conformal dimension h. The generators W^{(h)}_n (with n \in \mathbb{Z}) of the algebra are related to the meromorphic fields by the mode expansions

W^{(h)}(z) = \sum_{n \in \mathbb{Z}} W^{(h)}_n z^{-n-h}

The commutation relations of the modes L_n = W^{(2)}_n are given by the Virasoro algebra, which is parameterized by a central charge c. This number is also called the central charge of the W-algebra. The commutation relations

[L_m, W^{(h)}_n] = ((h-1)m - n) W^{(h)}_{m+n}

are equivalent to the assumption that W^{(h)}(z) is a primary field of dimension h.
The rest of the commutation relations can in principle be determined by solving the Jacobi identities.
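For reference, the Virasoro algebra mentioned above can be written explicitly; the following is its commutation relation in the standard CFT normalization, stated here as background rather than as notation taken from this article:

[L_m, L_n] = (m - n) L_{m+n} + \frac{c}{12} m (m^2 - 1) \delta_{m+n, 0}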
Given a finite set of conformal dimensions h_1, \dots, h_N (not necessarily all distinct), the number of W-algebras generated by fields W^{(h_1)}, \dots, W^{(h_N)} may be zero, one or more. The resulting W-algebras may exist for all values of the central charge c, or only for some specific values of the central charge.
A W-algebra is called freely generated if its generators obey no other relations than the commutation relations. Most commonly studied W-algebras are freely generated, including the W(N) algebras. In this article, the sections on representation theory and correlation functions apply to freely generated W-algebras.
Constructions
While it is possible to construct W-algebras by assuming the existence of a number of meromorphic fields and solving the Jacobi identities, there also exist systematic constructions of families of W-algebras.
Drinfeld-Sokolov reduction
From a finite-dimensional Lie algebra \mathfrak{g}, together with an embedding \mathfrak{sl}_2 \hookrightarrow \mathfrak{g}, a W-algebra W(\mathfrak{g}, \mathfrak{sl}_2) may be constructed from the universal enveloping algebra of the affine Lie algebra \hat{\mathfrak{g}} by a kind of BRST co
|
https://en.wikipedia.org/wiki/DNA%20laddering
|
DNA laddering is a feature that can be observed when DNA fragments resulting from apoptotic DNA fragmentation are visualized after separation by gel electrophoresis; it was first described in 1980 by Andrew Wyllie at the University of Edinburgh Medical School. DNA fragments can also be detected in cells that underwent necrosis, but when the fragments from apoptotic cells are separated and subjected to gel electrophoresis, the result is a characteristic ladder pattern.
DNA degradation
DNA laddering is a distinctive feature of DNA degraded by caspase-activated DNase (CAD), which is a key event during apoptosis. CAD cleaves genomic DNA at internucleosomal linker regions, resulting in DNA fragments that are multiples of 180–185 base-pairs in length. Separation of the fragments by agarose gel electrophoresis and subsequent visualization, for example by ethidium bromide staining, results in a characteristic "ladder" pattern. A simple method of selective extraction of fragmented DNA from apoptotic cells without the presence of high molecular weight DNA sections, generating the laddering pattern, utilizes pretreatment of cells in ethanol.
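The reason cleavage at internucleosomal linkers produces discrete bands rather than a smear can be illustrated with a minimal simulation sketch; the fragment counts, the ~185 bp repeat length, and the random-cleavage model below are simplifications chosen for illustration only.

import random
from collections import Counter

# Minimal sketch: CAD cleavage happens only at internucleosomal linkers,
# so apoptotic fragment lengths cluster at multiples of ~180-185 bp,
# whereas random (necrotic-style) degradation yields a continuous smear.
# Parameters are arbitrary and for illustration only.

NUCLEOSOME_REPEAT = 185  # bp, approximate internucleosomal repeat length

def apoptotic_fragments(n=10_000, max_nucleosomes=10):
    """Fragment lengths are whole multiples of the nucleosomal repeat."""
    return [NUCLEOSOME_REPEAT * random.randint(1, max_nucleosomes) for _ in range(n)]

def necrotic_fragments(n=10_000, max_len=1850):
    """Random cleavage: any length is possible, so no discrete bands form."""
    return [random.randint(50, max_len) for _ in range(n)]

ladder = Counter(apoptotic_fragments())
print("Apoptotic 'bands' (length -> count):")
for length in sorted(ladder)[:5]:
    print(f"  {length:5d} bp: {ladder[length]}")

smear = set(necrotic_fragments())
print(f"Necrotic fragments span {len(smear)} distinct lengths -> smear, not a ladder")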
Apoptosis and necrosis
While most of the morphological features of apoptotic cells are short-lived, DNA laddering can be used as a final-state read-out method and has therefore become a reliable way to distinguish apoptosis from necrosis. DNA laddering can also be used to determine whether cells underwent apoptosis in the presence of a virus, which helps establish the effects a virus has on a cell.
DNA laddering can only be used to detect apoptosis during its later stages, because DNA fragmentation takes place late in the apoptotic process. DNA laddering is suited to testing for apoptosis across many cells, and is not accurate when only a few cells have undergone apoptosis. To enhance the accuracy of apoptosis testing, other assays such as TEM and TUNEL are used alongside DNA laddering. With recen
|
https://en.wikipedia.org/wiki/Supermathematics
|
Supermathematics is the branch of mathematical physics which applies the mathematics of Lie superalgebras to the behaviour of bosons and fermions. The driving force in its formation in the 1960s and 1970s was Felix Berezin.
Objects of study include superalgebras (such as super Minkowski space and super-Poincaré algebra), superschemes, supermetrics/supersymmetry, supermanifolds, supergeometry, and supergravity, namely in the context of superstring theory.
References
"The importance of Lie algebras"; Professor Isaiah Kantor, Lund University
External links
Felix Berezin, The Life and Death of the Mastermind of Supermathematics, edited by Mikhail Shifman, World Scientific, Singapore, 2007,
Mathematical physics
Supersymmetry
Lie algebras
String theory
|
https://en.wikipedia.org/wiki/Vertical%20circle
|
In astronomy, a vertical circle is a great circle on the celestial sphere that is perpendicular to the horizon. Therefore, it contains the vertical direction, passing through the zenith and the nadir. There is a vertical circle for any given azimuth, where azimuth is the angle measured east from the north on the celestial horizon. The vertical circle which is on the east–west direction is called the prime vertical. The vertical circle which is on the north–south direction is called the local celestial meridian (LCM), or principal vertical. Vertical circles are part of the horizontal coordinate system.
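As a small illustration of the definition, the sketch below converts horizontal coordinates (azimuth measured east from north, altitude above the horizon) to Cartesian unit vectors; sweeping the altitude at a fixed azimuth traces the corresponding vertical circle through the nadir, horizon, and zenith. The axis conventions are one common choice assumed for this sketch.

import math

# Sketch: points of the horizontal coordinate system at a fixed azimuth.
# Sweeping the altitude from -90 deg to +90 deg at constant azimuth traces a
# vertical circle: it passes through the nadir (alt = -90), the horizon
# (alt = 0), and the zenith (alt = +90). Axis conventions (azimuth east
# from north, z axis toward the zenith) are one common choice.

def horizontal_to_cartesian(azimuth_deg, altitude_deg):
    az, alt = math.radians(azimuth_deg), math.radians(altitude_deg)
    x = math.cos(alt) * math.cos(az)   # toward north
    y = math.cos(alt) * math.sin(az)   # toward east
    z = math.sin(alt)                  # toward zenith
    return x, y, z

azimuth = 90.0  # due east: this vertical circle is the prime vertical
for alt in (-90, -45, 0, 45, 90):
    print(alt, horizontal_to_cartesian(azimuth, alt))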
Instruments of this kind were more common in 19th-century observatories and were important for locating and recording the coordinates of objects in the sky; observatories often had various other instruments for particular functions, as well as advanced clocks of the period. The best-known examples in observatories were the great refractors, which became larger and larger and came to have such a dominating effect that observatories were relocated simply to provide better conditions for their biggest telescope, leading to the modern style in which an observatory often houses only one instrument at a remote location on Earth or even in outer space. In the 19th century, however, observatories were more basic, often recording the coordinates of different objects and working to determine the shape of the Earth and to keep time.
See also
Meridian circle
Equatorial telescope
Comet seeker
References
Astronomical coordinate systems
Circles
|
https://en.wikipedia.org/wiki/Front%20panel
|
A front panel was used on early electronic computers to display and allow the alteration of the state of the machine's internal registers and memory. The front panel usually consisted of arrays of indicator lamps, digit and symbol displays, toggle switches, dials, and push buttons mounted on a sheet metal face plate. In early machines, CRTs might also be present (as an oscilloscope, or, for example, to mirror the contents of Williams–Kilburn tube memory). Prior to the development of CRT system consoles, many computers such as the IBM 1620 had console typewriters.
Usually the contents of one or more hardware registers would be represented by a row of lights, allowing the contents to be read directly when the machine was stopped. The switches allowed direct entry of data and address values into registers or memory.
Details
On some machines, certain lights and switches were reserved for use under program control. These were often referred to as sense indicators, sense lights and sense switches. For example, the original Fortran compiler for the IBM 704 contained specific statements for testing and manipulation of the 704's sense lights and switches. These switches were often used by the program to control optional behavior, for example information might be printed only if a particular sense switch was set.
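The pattern described above, in which a program consults operator-set sense switches to enable optional behavior and sets a sense light to signal progress, can be sketched as follows in modern Python rather than period Fortran; the switch numbering and helper names are invented for illustration.

# Toy sketch (modern Python, not IBM 704 Fortran) of the sense-switch pattern:
# the program checks a panel switch set by the operator to decide whether to
# print diagnostics, and turns on a sense light to indicate progress.

sense_switches = {1: False, 2: True, 3: False}   # panel state, set by the operator
sense_lights = {1: False, 2: False}

def compute_step(value):
    result = value * value
    if sense_switches[2]:                 # optional diagnostic output
        print(f"step: input={value} result={result}")
    sense_lights[1] = True                # signal progress back on the panel
    return result

total = sum(compute_step(v) for v in range(5))
print("total:", total)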
Operating systems made for computers with blinkenlights, for example, RSTS/E and RSX-11, would frequently have an idle task blink the panel lights in some recognizable fashion. System programmers often became very familiar with these light patterns and could tell from them how busy the system was and, sometimes, exactly what it was doing at the moment. The Master Control Program for the Burroughs Corporation B6700 mainframe would display a large block-letter "B" when the system was idle.
Switches and lights required little additional logic circuitry and usually no software support, important when logic hardware components were costly and software often limit
|