https://en.wikipedia.org/wiki/Windows%20System%20Assessment%20Tool
|
The Windows System Assessment Tool (WinSAT) is a module of Microsoft Windows Vista, Windows 7, Windows 8, Windows 10 and Windows 11 that is available in the Control Panel under Performance Information and Tools, except in Windows 8.1, Windows 10 and Windows 11, where that Control Panel page has been removed. It measures various performance characteristics and capabilities of the hardware it is running on and reports them as a Windows Experience Index (WEI) score. The WEI includes five subscores: processor, memory, 2D graphics, 3D graphics, and disk; the base score is equal to the lowest of the subscores, not an average of them. WinSAT reports WEI scores on a scale from 1.0 to 5.9 for Windows Vista, 7.9 for Windows 7, and 9.9 for Windows 8, Windows 10 and Windows 11.
The WEI enables users to match their computer hardware performance with the performance requirements of software. For example, the Aero graphical user interface will not automatically be enabled unless the system has a WEI score of 3 or higher.
The WEI can also be used to show which part of a system would be expected to provide the greatest increase in performance when upgraded. For example, a computer with the lowest subscore being its memory, would benefit more from a RAM upgrade than adding a faster hard drive (or any other component).
Detailed raw performance information, like actual disk bandwidth, can be obtained by invoking winsat from the command line. This also allows only specific tests to be re-run. Obtaining the WEI score from the command line is done by invoking winsat formal, which also updates the value stored in %systemroot%\Performance\WinSAT\DataStore. (The XML files stored there can be easily hacked to report fake performance values.) The WEI is also available to applications through an API, so they can configure themselves as a function of hardware performance, taking advantage of its capabilities without becoming unacceptably slow.
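As a concrete illustration of the command-line route described above, the sketch below re-runs the formal assessment and reads the subscores back from the DataStore directory. It is only a sketch: winsat formal must be run from an elevated prompt on Windows, and the file-name pattern and XML element names (WinSPR, SystemScore, and so on) are assumptions about the files produced on recent Windows versions, not something stated in this article.

```python
# Hedged sketch: refresh the WEI and print the subscores from the DataStore XML.
# Assumes an elevated prompt on Windows; the element names are an assumption.
import glob
import os
import subprocess
import xml.etree.ElementTree as ET

subprocess.run(["winsat", "formal"], check=True)  # updates the DataStore

datastore = os.path.expandvars(r"%systemroot%\Performance\WinSAT\DataStore")
latest = max(glob.glob(os.path.join(datastore, "*Formal.Assessment*.xml")),
             key=os.path.getmtime)

winspr = ET.parse(latest).getroot().find(".//WinSPR")  # assumed element name
for child in winspr:
    print(child.tag, child.text)  # e.g. SystemScore, MemoryScore, CpuScore, ...
```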
The Windows Experience Index score is not displayed in Windows 8.1 and onwards because
|
https://en.wikipedia.org/wiki/Privacy%20law
|
Privacy law is the body of law that deals with the regulation, storage, and use of personally identifiable information, personal healthcare information, and financial information of individuals, which can be collected by governments, public or private organisations, or other individuals. It also applies in the commercial sector to things like trade secrets and the liability that directors, officers, and employees have when handling sensitive information.
Privacy laws are considered within the context of an individual's privacy rights or within reasonable expectation of privacy. The Universal Declaration of Human Rights states that everyone has the right to privacy. The interpretation of these rights varies by country and is not always universal.
Classification of privacy laws
Privacy laws can be broadly classified into:
General privacy laws that have an overall bearing on the personal information of individuals and affect the policies that govern many different areas of information.
Trespass
Negligence
Fiduciary
International legal standards on privacy
Asia-Pacific Economic Cooperation (APEC)
APEC created a voluntary Privacy Framework that was adopted by all 21 member economies in 2004 in an attempt to improve general information privacy and the cross-border transfer of information. The Framework consists of nine Privacy Principles that act as minimum standards for privacy protection: Preventing harm, Notice, Collection limitation, Use of personal information, Choice, Integrity of personal information, Security safeguards, Access and correction, and Accountability.
In 2011, APEC implemented the APEC Cross Border Privacy Rules System with the goal of balancing "the flow of information and data across borders ... essential to trust and confidence in the online marketplace." The four agreed-upon rules of the System are based upon the APEC Privacy Framework and include self-assessment, compliance review, recognition/acceptance, and dispute resolution and enforcement.
|
https://en.wikipedia.org/wiki/Mwave
|
Mwave was a technology developed by IBM allowing for the combination of telephony and sound card features on a single adapter card. The technology centers around the Mwave digital signal processor (DSP). The technology was used for a time to provide a combination modem and sound card for IBM's Aptiva line and some ThinkPad laptops, in addition to uses on specialized Mwave cards that handled voice recognition or ISDN networking connectivity. Similar adapter cards by third-party vendors using Mwave technology were also sold. However, plagued by consumer complaints about buggy Mwave software and hardware, IBM eventually turned to other audio and telephony solutions for its consumer products.
History
Malcolm Ware, a former developer on Mwave, dates the technology back to its development in an IBM research lab in Zurich, Switzerland in 1979. The first prototype was tested in an IBM PC in 1981. After being utilized in some other adapter cards, Mwave was given its official name and used in IBM's WindSurfer ISA/MCA card. IBM manufactured Mwave hardware for both Microsoft Windows and its own OS/2. Another revision of the technology was used in IBM's newly renamed Aptiva line. Gary Harper developed some automated test software, loosely based on the movie War Games, to test how well the Mwave modem could connect to modems used by various bulletin board systems.
One of the revisions of the Mwave card was the Mwave Dolphin. The card was an ISA legacy card that did not support plug and play and natively supported Windows through its software. It featured a 28.8k/second fax/modem and a Sound Blaster-compatible audio solution. One of the card's most publicized features was its software upgradeability: a version of the Mwave software upgraded the modem function to 33.6k. In addition, the card was key in the support of some of the Aptiva's Rapid Resume features, including Wake-up On Ring. There were various consumer complaints with users reporting problems involving either the so
|
https://en.wikipedia.org/wiki/MathChallengers
|
MathChallengers, formerly known as MathCounts in British Columbia, is a mathematics competition open to all grade 8, 9, and 10 students from British Columbia. The major sponsors are the Association of Professional Engineers and Geoscientists of B.C. (APEGBC), the B.C. Association of Mathematics Teachers (BCAMT), BC Hydro, and IBM Canada.
Rules
The Competition consists of 4 stages. Stages 1 and 2 are individual competitions. Stage 3 is a Team competition. Stage 4 is a one-on-one competition between the top 10 individuals who participated in stages 1 and 2. Math Challengers competitions may consist of the following rounds:
Stage 1: "Blitz"
Stage 1 consists of one session on a variety of mathematical subjects. Participants will be allowed to work for 40 minutes on 26 questions written on four pages (each correct answer will count as one point). Thus, the maximum number of points available in this stage is 26.
Stage 2: "Bulls-Eye"
Stage 2 consists of three sessions, each on a certain mathematical subject. For each of the sessions, participants will be given 12 minutes to work on the 4 questions on that subject. The total number of questions in Stage 2 is therefore 12, and each correct answer will count as two points. Thus, the maximum number of points available in this stage is 24.
Stage 3: "Co-Op"
Stage 3 is a Team competition and it consists of three sessions on a variety of mathematical subjects. Participants will be allowed to work for 36 minutes on 15 questions written on one page (each correct answer will count as two points). Thus, the maximum number of points available in this stage is 30. Scientific calculators are allowed for this stage of the competition. Graphing calculators and programmable calculators are not allowed at all. Devices with wireless communication capabilities are absolutely not allowed.
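The per-stage maxima quoted above follow directly from the question counts and point values; here is a small illustrative calculator (not part of the official rules text):

```python
# Illustrative calculator reproducing the stage maxima from the counts above.
stages = {
    "Blitz":     {"questions": 26,    "points_each": 1},  # Stage 1
    "Bulls-Eye": {"questions": 3 * 4, "points_each": 2},  # Stage 2: 3 sessions x 4 questions
    "Co-Op":     {"questions": 15,    "points_each": 2},  # Stage 3 (team)
}

for name, s in stages.items():
    print(f"{name}: max {s['questions'] * s['points_each']} points")
# Blitz: max 26 points, Bulls-Eye: max 24 points, Co-Op: max 30 points
```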
Stage 4: "Face-Off"
Stage 4 is a one-to-one buzz-in verbal competition for the top scoring 10 individuals.
There will be a total of 9 match up rounds.
Participants should be provided with a
|
https://en.wikipedia.org/wiki/Myco-heterotrophy
|
Myco-heterotrophy (from Greek μύκης, "fungus", ἕτερος, "another", "different", and τροφή, "nutrition") is a symbiotic relationship between certain kinds of plants and fungi, in which the plant gets all or part of its food from parasitism upon fungi rather than from photosynthesis. A myco-heterotroph is the parasitic plant partner in this relationship. Myco-heterotrophy is considered a kind of cheating relationship and myco-heterotrophs are sometimes informally referred to as "mycorrhizal cheaters". This relationship is sometimes referred to as mycotrophy, though this term is also used for plants that engage in mutualistic mycorrhizal relationships.
Relationship between myco-heterotrophs and host fungi
Full (or obligate) myco-heterotrophy exists when a non-photosynthetic plant (a plant largely lacking in chlorophyll or otherwise lacking a functional photosystem) gets all of its food from the fungi that it parasitizes. Partial (or facultative) myco-heterotrophy exists when a plant is capable of photosynthesis, but parasitizes fungi as a supplementary food supply. There are also plants, such as some orchid species, that are non-photosynthetic and obligately myco-heterotrophic for part of their life cycle, and photosynthetic and facultatively myco-heterotrophic or non-myco-heterotrophic for the rest of their life cycle. Not all non-photosynthetic or "achlorophyllous" plants are myco-heterotrophic – some non-photosynthetic plants like dodder directly parasitize the vascular tissue of other plants. The partial or full loss of photosynthesis is reflected by extreme physical and functional reductions of plastid genomes in myco-heterotrophic plants, an ongoing evolutionary process.
In the past, non-photosynthetic plants were mistakenly thought to get food by breaking down organic matter in a manner similar to saprotrophic fungi. Such plants were therefore called "saprophytes". It is now known that these plants are not physiologically capable of directly breaking down organ
|
https://en.wikipedia.org/wiki/Oligolecty
|
The term oligolecty is used in pollination ecology to refer to bees that exhibit a narrow, specialized preference for pollen sources, typically to a single family or genus of flowering plants. The preference may occasionally extend broadly to multiple genera within a single plant family, or be as narrow as a single plant species. When the choice is very narrow, the term monolecty is sometimes used, originally meaning a single plant species but recently broadened to include examples where the host plants are related members of a single genus. The opposite term is polylectic and refers to species that collect pollen from a wide range of species. The most familiar example of a polylectic species is the domestic honey bee.
Oligolectic pollinators are often called oligoleges or simply specialist pollinators, and this behavior is especially common in the bee families Andrenidae and Halictidae, though there are thousands of species in hundreds of genera, in essentially all known bee families; in certain areas of the world, such as deserts, oligoleges may represent half or more of all the resident bee species. Attempts have been made to determine whether a narrow host preference is due to an inability of the bee larvae to digest and develop on a variety of pollen types, or a limitation of the adult bee's learning and perception (i.e., they simply do not recognize other flowers as potential food sources), and most of the available evidence suggests the latter. However, a few plants whose pollen contains toxic substances (e.g., Toxicoscordion and related genera in the Melanthieae) are visited by oligolectic bees, and these may fall into the former category. The evidence from large-scale phylogenetic analyses of bee evolution suggests that, for most groups of bees, oligolecty is the ancestral condition and polylectic lineages arose from among those ancestral specialists.
There are some cases where oligoleges collect their host plant's pollen as larval food but, for various r
|
https://en.wikipedia.org/wiki/Mycotroph
|
A mycotroph is a plant that gets all or part of its carbon, water, or nutrient supply through symbiotic association with fungi. The term can refer to plants that engage in either of two distinct symbioses with fungi:
Many mycotrophs have a mutualistic association with fungi in any of several forms of mycorrhiza. The majority of plant species are mycotrophic in this sense. Examples include Burmanniaceae.
Some mycotrophs are parasitic upon fungi in an association known as myco-heterotrophy.
References
Trophic ecology
Mycology
Symbiosis
Parasites of fungi
Parasitic plants
Plant nutrition
|
https://en.wikipedia.org/wiki/D-STAR
|
D-STAR (Digital Smart Technologies for Amateur Radio) is a digital voice and data protocol specification for amateur radio. The system was developed in the late 1990s by the Japan Amateur Radio League and uses minimum-shift keying in its packet-based standard. There are other digital modes that have been adapted for use by amateurs, but D-STAR was the first that was designed specifically for amateur radio.
Digital voice modes have several advantages: they use less bandwidth than older analog voice modes such as amplitude modulation and frequency modulation, and the quality of the received data is better than that of an analog signal at the same signal strength, as long as the signal is above a minimum threshold and there is no multipath propagation.
D-STAR compatible radios are available for HF, VHF, UHF, and microwave amateur radio bands. In addition to the over-the-air protocol, D-STAR also provides specifications for network connectivity, enabling D-STAR radios to be connected to the Internet or other networks, allowing streams of voice or packet data to be routed via amateur radio.
D-STAR compatible radios are manufactured by Icom, Kenwood, and FlexRadio Systems.
History
In 1998 an investigation into finding a new way of bringing digital technology to amateur radio was started. The process was funded by a ministry of the Japanese government, then called the Ministry of Posts and Telecommunications, and administered by the Japan Amateur Radio League. In 2001, D-STAR was published as the result of the research.
In September 2003 Icom named Matt Yellen, KB7TSE (now K7DN), to lead its US D-STAR development program.
Starting in April 2004 Icom began releasing new "D-STAR optional" hardware. The first to be released commercially was a 2-meter mobile unit designated IC-2200H. Icom followed up with 2 meter and 440 MHz handheld transceivers the next year. However, the yet-to-be-released UT-118 add-on card was required for these radios to operate i
|
https://en.wikipedia.org/wiki/Information%20Trust%20Institute
|
The Information Trust Institute (ITI) was founded in 2004 as an interdisciplinary unit designed to approach information security research from a systems perspective. It examines information security by looking at what makes machines, applications, and users trustworthy. Its mission is to create computer systems, software, and networks that society can depend on to be trustworthy, meaning secure, dependable (reliable and available), correct, safe, private, and survivable. ITI's stated goal is to create a new paradigm for designing trustworthy systems from the ground up and validating systems that are intended to be trustworthy.
Participants
ITI is an academic/industry partnership focusing on application areas such as electric power, financial systems, defense, and homeland security, among others. It brings together over 100 researchers representing numerous colleges and units at the University of Illinois at Urbana–Champaign.
Major centers within ITI
Boeing Trusted Software Center
CAESAR: the Center for Autonomous Engineering Systems and Robotics
the Center for Information Forensics
Center for Health Information Privacy and Security
the NSA Center for Information Assurance Education and Research
TCIPG: the Trustworthy Cyber Infrastructure for the Power Grid Center
Trusted ILLIAC
|
https://en.wikipedia.org/wiki/Systems%20for%20Nuclear%20Auxiliary%20Power
|
The Systems for Nuclear Auxiliary Power (SNAP) program was a program of experimental radioisotope thermoelectric generators (RTGs) and space nuclear reactors flown during the 1960s.
The SNAP program developed as a result of Project Feedback, a Rand Corporation study of reconnaissance satellites completed in 1954. As some of the proposed satellites had high power demands, some as high as a few kilowatts, the U.S. Atomic Energy Commission (AEC) requested a series of nuclear power-plant studies from industry in 1951. Completed in 1952, these studies determined that nuclear power plants were technically feasible for use on satellites.
In 1955, the AEC began two parallel SNAP nuclear power projects. One, contracted with The Martin Company, used radio-isotopic decay as the power source for its generators. These plants were given odd-numbered SNAP designations beginning with SNAP-1. The other project used nuclear reactors to generate energy, and was developed by the Atomics International Division of North American Aviation. Their systems were given even-numbered SNAP designations, the first being SNAP-2.
Most of the systems development and reactor testing was conducted at the Santa Susana Field Laboratory, Ventura County, California using a number of specialized facilities.
Odd-numbered SNAPs: radioisotope thermoelectric generators
Radioisotope thermoelectric generators use the heat of radioactive decay to produce electricity.
SNAP-1
SNAP-1 was a test platform that was never deployed, using cerium-144 in a Rankine cycle with mercury as the heat transfer fluid. It operated successfully for 2,500 hours.
SNAP-3
SNAP-3 was the first RTG used in a space mission (1961), launched aboard the U.S. Navy Transit 4A and 4B navigation satellites. The electrical output of this RTG was 2.5 watts.
SNAP-7
The SNAP-7 series was designed for marine applications such as lighthouses and buoys; at least six units, named SNAP-7A through SNAP-7F, were deployed in the mid-1960s. SNAP-7D pr
|
https://en.wikipedia.org/wiki/Cap%20product
|
In algebraic topology the cap product is a method of adjoining a chain of degree p with a cochain of degree q, such that q ≤ p, to form a composite chain of degree p − q. It was introduced by Eduard Čech in 1936, and independently by Hassler Whitney in 1938.
Definition
Let X be a topological space and R a coefficient ring. The cap product is a bilinear map on singular homology and cohomology
$$\frown\;:\; H_p(X;R) \times H^q(X;R) \to H_{p-q}(X;R)$$
defined by contracting a singular chain $\sigma : \Delta^p \to X$ with a singular cochain $\psi \in C^q(X;R)$ by the formula:
$$\sigma \frown \psi = \psi\!\left(\sigma|_{[v_0, \ldots, v_q]}\right)\, \sigma|_{[v_q, \ldots, v_p]}.$$
Here, the notation $\sigma|_{[v_0, \ldots, v_q]}$ indicates the restriction of the simplicial map $\sigma$ to its face spanned by the vectors of the base, see Simplex.
Interpretation
In analogy with the interpretation of the cup product in terms of the Künneth formula, we can explain the existence of the cap product in the following way. Using CW approximation we may assume that $X$ is a CW-complex and $C_\bullet(X)$ (and $C^\bullet(X)$) is the complex of its cellular chains (or cochains, respectively). Consider then the composition
$$C_\bullet(X) \otimes C^\bullet(X) \;\xrightarrow{\;\Delta_* \otimes \mathrm{id}\;}\; C_\bullet(X) \otimes C_\bullet(X) \otimes C^\bullet(X) \;\xrightarrow{\;\mathrm{id} \otimes \varepsilon\;}\; C_\bullet(X)$$
where we are taking tensor products of chain complexes, $\Delta : X \to X \times X$ is the diagonal map which induces the map
$$\Delta_* : C_\bullet(X) \to C_\bullet(X \times X) \cong C_\bullet(X) \otimes C_\bullet(X)$$
on the chain complex, and $\varepsilon : C_p(X) \otimes C^q(X) \to R$ is the evaluation map (always 0 except for $p = q$).
This composition then passes to the quotient to define the cap product $\frown : H_\bullet(X) \times H^\bullet(X) \to H_\bullet(X)$, and looking carefully at the above composition shows that it indeed takes the form of maps $\frown : H_p(X) \times H^q(X) \to H_{p-q}(X)$, which is always zero for $q > p$.
Fundamental Class
For any point $x$ in $M$, we have the long-exact sequence in homology (with coefficients in $R$) of the pair $(M, M \setminus \{x\})$ (see Relative homology)
$$\cdots \to H_n(M \setminus \{x\}; R) \to H_n(M; R) \xrightarrow{\;r\;} H_n(M, M \setminus \{x\}; R) \to H_{n-1}(M \setminus \{x\}; R) \to \cdots$$
An element $[M]$ of $H_n(M; R)$ is called the fundamental class for $M$ if $r([M])$ is a generator of $H_n(M, M \setminus \{x\}; R)$. A fundamental class of $M$ exists if $M$ is closed and $R$-orientable. In fact, if $M$ is a closed, connected and $R$-orientable manifold, the map $H_n(M; R) \to H_n(M, M \setminus \{x\}; R)$ is an isomorphism for all $x$ in $M$ and hence, we can choose any generator of $H_n(M; R)$ as the fundamental class.
Relation with Poincaré duality
For a closed $R$-orientable $n$-manifold $M$ with fundamental class $[M]$ in $H_n(M; R)$ (which we can choose to be any generator of $H_n(M; R)$), the cap product map
$$H^k(M; R) \to H_{n-k}(M; R), \qquad \alpha \mapsto [M] \frown \alpha$$
is an isomorphism for all $k$. This result i
|
https://en.wikipedia.org/wiki/Uniqueness%20theorem%20for%20Poisson%27s%20equation
|
The uniqueness theorem for Poisson's equation states that, for a large class of boundary conditions, the equation may have many solutions, but the gradient of every solution is the same. In the case of electrostatics, this means that there is a unique electric field derived from a potential function satisfying Poisson's equation under the boundary conditions.
Proof
The general expression for Poisson's equation in electrostatics is
$$\nabla^2 \varphi = -\frac{\rho_f}{\epsilon_0},$$
where $\varphi$ is the electric potential and $\rho_f$ is the charge distribution over some region $V$ with boundary surface $S$.
The uniqueness of the solution can be proven for a large class of boundary conditions as follows.
Suppose that we claim to have two solutions of Poisson's equation. Let us call these two solutions $\varphi_1$ and $\varphi_2$. Then
$$\nabla^2 \varphi_1 = -\frac{\rho_f}{\epsilon_0}$$
and
$$\nabla^2 \varphi_2 = -\frac{\rho_f}{\epsilon_0}.$$
It follows that $\varphi = \varphi_2 - \varphi_1$ is a solution of Laplace's equation, which is a special case of Poisson's equation with a source term equal to zero. Subtracting the two solutions above gives
$$\nabla^2 \varphi = 0. \tag{1}$$
By applying the vector differential identity we know that
$$\nabla \cdot (\varphi\, \nabla \varphi) = (\nabla \varphi)^2 + \varphi\, \nabla^2 \varphi.$$
However, from (1) we also know that throughout the region $\nabla^2 \varphi = 0$. Consequently, the second term goes to zero and we find that
$$\nabla \cdot (\varphi\, \nabla \varphi) = (\nabla \varphi)^2.$$
By taking the volume integral over the region $V$, we find that
$$\int_V \nabla \cdot (\varphi\, \nabla \varphi)\, \mathrm{d}V = \int_V (\nabla \varphi)^2\, \mathrm{d}V.$$
By applying the divergence theorem, we rewrite the expression above as
$$\oint_S (\varphi\, \nabla \varphi) \cdot \mathrm{d}\mathbf{S} = \int_V (\nabla \varphi)^2\, \mathrm{d}V. \tag{2}$$
We now sequentially consider three distinct boundary conditions: a Dirichlet boundary condition, a Neumann boundary condition, and a mixed boundary condition.
First, we consider the case where Dirichlet boundary conditions are specified, i.e. the value of $\varphi$ is given on the boundary of the region. If the Dirichlet boundary condition is satisfied on $S$ by both solutions (i.e., if $\varphi = 0$ on the boundary), then the left-hand side of (2) is zero. Consequently, we find that
$$\int_V (\nabla \varphi)^2\, \mathrm{d}V = 0.$$
Since this is the volume integral of a positive quantity (due to the squared term), we must have $\nabla \varphi = 0$ at all points. Further, because the gradient of $\varphi$ is everywhere zero and $\varphi$ is zero on the boundary, $\varphi$ must be zero throughout the whole region. Finally, since $\varphi = 0$ throughout the whole region, and since $\varphi = \varphi_2 - \varphi_1$ throughout the whole regi
|
https://en.wikipedia.org/wiki/Static%20key
|
A cryptographic key is called static if it is intended for use for a relatively long period of time and is typically intended for use in many instances of a cryptographic key establishment scheme. Contrast with an ephemeral key.
See also
Cryptographic key types
Recommendation for Key Management — Part 1: general,
NIST Cryptographic Toolkit
Key management
|
https://en.wikipedia.org/wiki/Radar%20tracker
|
A radar tracker is a component of a radar system, or an associated command and control (C2) system, that associates consecutive radar observations of the same target into tracks. It is particularly useful when the radar system is reporting data from several different targets or when it is necessary to combine the data from several different radars or other sensors.
Role of the radar tracker
A classical rotating air surveillance radar system detects target echoes against a background of noise. It reports these detections (known as "plots") in polar coordinates representing the range and bearing of the target. In addition, noise in the radar receiver will occasionally exceed the detection threshold of the radar's Constant false alarm rate detector and be incorrectly reported as targets (known as false alarms). The role of the radar tracker is to monitor consecutive updates from the radar system (which typically occur once every few seconds, as the antenna rotates) and to determine those sequences of plots belonging to the same target, whilst rejecting any plots believed to be false alarms. In addition, the radar tracker is able to use the sequence of plots to estimate the current speed and heading of the target. When several targets are present, the radar tracker aims to provide one track for each target, with the track history often being used to indicate where the target has come from.
When multiple radar systems are connected to a single reporting post, a multiradar tracker is often used to monitor the updates from all of the radars and form tracks from the combination of detections. In this configuration, the tracks are often more accurate than those formed from single radars, as a greater number of detections can be used to estimate the tracks.
In addition to associating plots, rejecting false alarms and estimating heading and speed, the radar tracker also acts as a filter, in which errors in the individual radar measurements are smoothed out. In essenc
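To make the plot-to-track association and smoothing roles concrete, here is a minimal single-scan sketch in Python. It is not taken from any particular radar system: the gate size, the alpha-beta gains and the greedy nearest-neighbour association are illustrative assumptions, and real trackers typically use Kalman filters and more elaborate assignment algorithms.

```python
# Minimal sketch of one tracker update: nearest-neighbour plot-to-track
# association inside a gate, followed by alpha-beta filter smoothing.
import math

ALPHA, BETA = 0.5, 0.3      # smoothing gains (assumed)
GATE = 500.0                # association gate in metres (assumed)

class Track:
    def __init__(self, x, y):
        self.x, self.y = x, y        # smoothed position estimate
        self.vx, self.vy = 0.0, 0.0  # estimated velocity

    def predict(self, dt):
        """Extrapolate the track to the time of the new scan."""
        return self.x + self.vx * dt, self.y + self.vy * dt

    def update(self, plot, dt):
        """Alpha-beta update: blend the prediction with the associated plot."""
        px, py = self.predict(dt)
        rx, ry = plot[0] - px, plot[1] - py      # innovation (residual)
        self.x, self.y = px + ALPHA * rx, py + ALPHA * ry
        self.vx += BETA * rx / dt
        self.vy += BETA * ry / dt

def associate(tracks, plots, dt):
    """Greedy nearest-neighbour association of plots to predicted positions."""
    unused = list(plots)
    for trk in tracks:
        if not unused:
            break
        px, py = trk.predict(dt)
        best = min(unused, key=lambda p: math.hypot(p[0] - px, p[1] - py))
        if math.hypot(best[0] - px, best[1] - py) < GATE:  # inside the gate?
            trk.update(best, dt)
            unused.remove(best)
    return unused  # leftover plots: candidate new tracks or false alarms

if __name__ == "__main__":
    tracks = [Track(0.0, 0.0)]
    tracks[0].vx = 100.0  # target assumed to be moving east at 100 m/s
    leftovers = associate(tracks, [(520.0, 30.0), (9000.0, 9000.0)], dt=5.0)
    print(round(tracks[0].x), round(tracks[0].y), len(leftovers))
```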
|
https://en.wikipedia.org/wiki/Technology%20CAD
|
Technology computer-aided design (technology CAD or TCAD) is a branch of electronic design automation that models semiconductor fabrication and semiconductor device operation. The modeling of the fabrication is termed Process TCAD, while the modeling of the device operation is termed Device TCAD. Included are the modeling of process steps (such as diffusion and ion implantation) and the modeling of the behavior of the electrical devices based on fundamental physics, such as the doping profiles of the devices. TCAD may also include the creation of compact models (such as the well-known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. The SPICE simulator itself is usually considered part of ECAD rather than TCAD.
Introduction
Technology files and design rules are essential building blocks of the integrated circuit design process. Their accuracy and robustness over process technology, its variability and the operating conditions of the IC — environmental, parasitic interactions and testing, including adverse conditions such as electro-static discharge — are critical in determining performance, yield and reliability. Development of these technology and design rule files involves an iterative process that crosses boundaries of technology and device development, product design and quality assurance. Modeling and simulation play a critical role in support of many aspects of this evolution process.
The goals of TCAD start from the physical description of integrated circuit devices, considering both the physical configuration and related device properties, and build the links between the broad range of physics and electrical behavior models that support circuit design. Physics-based modeling of devices, in distributed and lumped forms, is an essential part of the IC process development. It seeks to quantify the underlying understanding of the technology and abstract tha
|
https://en.wikipedia.org/wiki/Logic%20Trunked%20Radio
|
Logic Trunked Radio (LTR) is a radio system developed in the late 1970s by the E. F. Johnson Company.
LTR is distinguished from some other common trunked radio systems in that it does not have a dedicated control channel. Each repeater has its own controller and all of these controllers are coordinated together. Even though each controller monitors its own channel, one of the channel controllers is assigned to be a master and all the other controllers report to it.
Typically on LTR systems, each of these controllers periodically sends out a data burst (approximately every 10 seconds on LTR Standard systems) so that the subscriber units know that the system is there. The idle data burst can be turned off if desired by the system operator. Some systems will broadcast idle data bursts only on channels used as home channels and not on those used for "overflow" conversations. To a listener, the idle data burst will sound like a short blip of static, as if someone keyed up and unkeyed a radio within about 1/2 second. This data burst is not sent at the same time by all the channels but happens randomly throughout all the system channels.
References
External links
Logic Trunked System article from 'Monitoring Times'
E.F. Johnson Company website
Radio electronics
Radio resource management
Radio networks
|
https://en.wikipedia.org/wiki/Coherent%20risk%20measure
|
In the fields of actuarial science and financial economics there are a number of ways that risk can be defined; to clarify the concept theoreticians have described a number of properties that a risk measure might or might not have. A coherent risk measure is a function that satisfies properties of monotonicity, sub-additivity, homogeneity, and translational invariance.
Properties
Consider a random outcome $Z$ viewed as an element of a linear space $\mathcal{L}$ of measurable functions, defined on an appropriate probability space. A functional $\varrho : \mathcal{L} \to \mathbb{R} \cup \{+\infty\}$ is said to be a coherent risk measure for $\mathcal{L}$ if it satisfies the following properties:
Normalized
$$\varrho(0) = 0$$
That is, the risk when holding no assets is zero.
Monotonicity
$$\text{If}\ Z_1, Z_2 \in \mathcal{L}\ \text{and}\ Z_1 \leq Z_2\ \text{a.s., then}\ \varrho(Z_1) \geq \varrho(Z_2)$$
That is, if portfolio $Z_2$ always has better values than portfolio $Z_1$ under almost all scenarios then the risk of $Z_2$ should be less than the risk of $Z_1$. E.g. if $Z_1$ is an in the money call option (or otherwise) on a stock, and $Z_2$ is also an in the money call option on the same stock with a lower strike price.
In financial risk management, monotonicity implies a portfolio with greater future returns has less risk.
Sub-additivity
$$\text{If}\ Z_1, Z_2 \in \mathcal{L}, \text{ then}\ \varrho(Z_1 + Z_2) \leq \varrho(Z_1) + \varrho(Z_2)$$
Indeed, the risk of two portfolios together cannot get any worse than adding the two risks separately: this is the diversification principle.
In financial risk management, sub-additivity implies diversification is beneficial. The sub-additivity principle is sometimes also seen as problematic.
Positive homogeneity
$$\text{If}\ \alpha \geq 0\ \text{and}\ Z \in \mathcal{L}, \text{ then}\ \varrho(\alpha Z) = \alpha\, \varrho(Z)$$
Loosely speaking, if you double your portfolio then you double your risk.
In financial risk management, positive homogeneity implies the risk of a position is proportional to its size.
Translation invariance
If $A$ is a deterministic portfolio with guaranteed return $a$ and $Z \in \mathcal{L}$, then
$$\varrho(Z + A) = \varrho(Z) - a.$$
The portfolio $A$ is just adding cash $a$ to your portfolio $Z$. In particular, if $a = \varrho(Z)$ then $\varrho(Z + A) = 0$.
In financial risk management, translation invariance implies that the addition of a sure amount of capital reduces the risk by the same amount.
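As an illustration of the sub-additivity (diversification) property, the sketch below estimates expected shortfall, a commonly cited coherent risk measure, on simulated loss samples and checks the inequality empirically. The 95% level, the loss-based sign convention and the normal samples are arbitrary choices for this example, not part of the article.

```python
# Empirical check of sub-additivity for expected shortfall on simulated losses.
# Convention here: positive numbers are losses; parameters are illustrative.
import numpy as np

def expected_shortfall(losses, level=0.95):
    """Average of the worst (1 - level) fraction of losses."""
    losses = np.sort(np.asarray(losses))
    tail = losses[int(level * len(losses)):]
    return tail.mean()

rng = np.random.default_rng(0)
z1 = rng.normal(0.0, 1.0, 100_000)   # losses of portfolio 1
z2 = rng.normal(0.0, 2.0, 100_000)   # losses of portfolio 2

es_joint = expected_shortfall(z1 + z2)
es_split = expected_shortfall(z1) + expected_shortfall(z2)
print(es_joint <= es_split)  # sub-additivity: expected to print True
```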
Convex risk measures
The notion of coherence has been subsequently rela
|
https://en.wikipedia.org/wiki/Railway%20engineering
|
Railway engineering is a multi-faceted engineering discipline dealing with the design, construction and operation of all types of rail transport systems. It encompasses a wide range of engineering disciplines, including civil engineering, computer engineering, electrical engineering, mechanical engineering, industrial engineering and production engineering. A great many other engineering sub-disciplines are also called upon.
History
With the advent of the railways in the early nineteenth century, a need arose for a specialized group of engineers capable of dealing with the unique problems associated with railway engineering. As the railways expanded and became a major economic force, a great many engineers became involved in the field, probably the most notable in Britain being Richard Trevithick, George Stephenson and Isambard Kingdom Brunel. Today, railway systems engineering continues to be a vibrant field of engineering.
Subfields
Mechanical engineering
Command, control & railway signalling
Office systems design
Data center design
SCADA
Network design
Electrical engineering
Energy electrification
Third rail
Fourth rail
Overhead contact system
Civil engineering
Permanent way engineering
Light rail systems
On-track plant
Rail systems integration
Train control systems
Cab signalling
Railway vehicle engineering
Rolling resistance
Curve resistance
Wheel–rail interface
Hunting oscillation
Railway systems engineering
Railway signalling
Fare collection
CCTV
Public address
Intrusion detection
Access control
Systems integration
Professional organisations
In the UK: The Railway Division of the Institution of Mechanical Engineers (IMechE).
In the US: The American Railway Engineering and Maintenance-of-Way Association (AREMA)
In the Philippines: The Philippine Railway Engineers' Association (PREA), Inc.
Worldwide: The Institute of Railway Signal Engineers (IRSE)
See also
Association of American Railroads
Exsecant
Degree of curvature
List of engin
|
https://en.wikipedia.org/wiki/Control%20panel%20%28engineering%29
|
A control panel is a flat, often vertical, area where control or monitoring instruments are displayed or it is an enclosed unit that is the part of a system that users can access, such as the control panel of a security system (also called control unit).
They are found in factories to monitor and control machines or production lines and in places such as nuclear power plants, ships, aircraft and mainframe computers. Older control panels are most often equipped with push buttons and analog instruments, whereas nowadays in many cases touchscreens are used for monitoring and control purposes.
Gallery
Flat panels
Enclosed control unit
See also
Control stand
Dashboard
Electric switchboard
Fire alarm control panel
Front panel
Graphical user interface
Control panel (computer)
Dashboard (software) virtual
Lighting control console
Mixing console
Patch board
Plugboard
Telephone switchboard
Control devices
|
https://en.wikipedia.org/wiki/EF%20Johnson%20Technologies
|
EF Johnson Technologies, Inc. is a two-way radio manufacturer founded by its namesake, Edgar Frederick Johnson, in Waseca, Minnesota, United States in 1923. Today it is a wholly owned subsidiary of JVCKenwood of Yokohama, Japan.
EF Johnson Technologies offers a wide range of equipment for use by law enforcement, firefighters, EMS, and military. Products include Project 25 systems, portable/mobile two-way radios, and radio encryption products.
Recent product introductions
2013: Introduced Viking VP900 multi-band portable radio.
2012: Introduced Viking VP600 portable radio.
2011: Introduced ATLAS P25 System Solutions. Named the Hot Product by APCO's Public Safety Communications magazine.
2010: Introduced the 51FIRE ES, the first portable radio engineered specifically for firefighters.
2009: Introduced Hybrid IP25, a Project 25 compliant wide area conventional system and a hybrid network intended to allow first responders to operate and interoperate between the conventional and trunked systems and eliminate the need for dispatchers to manually patch calls between the two systems.
2009: Introduced StarGate Dispatch Console, an IP-based dispatch console for first responders. StarGate was named the Hot Product by Public Safety Communications, the official magazine of the Association of Public-Safety Communications Officials (APCO).
2008: Introduced the Lightning Control Head, a mobile radio control head that incorporates electroluminescent technology.
2007: Introduced IP25 MultiSite, a switchless Project 25 trunked infrastructure system that is specifically designed for first responders. This Voice over Internet Protocol (VoIP) based system meets the NTIA mandates for narrowband operation in VHF and UHF frequencies as well as DOD mandates for Project 25 compliance.
2006: Introduced the Enhanced (AMBE+2) Project 25 Vocoder in its entire radio product line.
History
The company was founded in 1923 by Edgar F. Johnson and his wife Ethel Johnson. Th
|
https://en.wikipedia.org/wiki/MDN%20Web%20Docs
|
MDN Web Docs, previously Mozilla Developer Network and formerly Mozilla Developer Center, is a documentation repository and learning resource for web developers. It was started by Mozilla in 2005 as a unified place for documentation about open web standards, Mozilla's own projects, and developer guides.
MDN Web Docs content is maintained by Mozilla, Google employees, and volunteers (community of developers and technical writers). It also contains content contributed by Microsoft, Google, and Samsung who, in 2017, announced they would shut down their own documentation projects and move all their documentation to MDN Web Docs. Topics include HTML5, JavaScript, CSS, Web APIs, Django, Node.js, WebExtensions, MathML, and others.
History
In 2005, Mozilla Corporation started the project under the name Mozilla Developer Center. Mozilla Corporation still funds servers and employs staff working on the projects.
The initial content for the website was provided by DevEdge, for which the Mozilla Foundation was granted a license by AOL. The site now contains a mix of content migrated from DevEdge and mozilla.org, as well as original and more up-to-date content. Documentation was also migrated from XULPlanet.com.
On Oct 3, 2016, Brave browser added Mozilla Developer Network as one of its default search engines options.
In 2017, MDN Web Docs became the unified documentation of web technology for Google, Samsung, Microsoft, and Mozilla. Microsoft started redirecting pages from Microsoft Developer Network to MDN.
In 2019, Mozilla started Beta testing a new reader site for MDN Web Docs written in React (instead of jQuery; some jQuery functionality was replaced with Cheerio library). The new site was launched on December 14, 2020. Since December 14, 2020, all editable content is stored in a Git repository hosted on GitHub, where contributors open pull requests and discuss changes.
On January 25, 2021, the Open Web Docs (OWD) organization was launched as a non-profit fiscal entity
|
https://en.wikipedia.org/wiki/European%20Grid%20Infrastructure
|
European Grid Infrastructure (EGI) is a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The EGI links centres in different European countries to support international research in many scientific disciplines. Following a series of research projects such as DataGrid and Enabling Grids for E-sciencE, the EGI Foundation was formed in 2010 to sustain the services of EGI.
Purpose
Science has become increasingly based on open collaboration between researchers across the world. It uses high-capacity computing to model complex systems and to process experimental results.
In the early 21st century, grid computing became popular for scientific disciplines such as high-energy physics and bioinformatics to share and combine the power of computers and sophisticated, often unique, scientific instruments in a process known as e-Science.
In addition to their scientific value, on 30 May 2008 the EU Competitiveness Council promoted "the essential role of e-infrastructures as an integrating mechanism between Member States, regions as well as different scientific disciplines, also contributing to overcoming digital divides."
EGI is partially supported by the EGI-InSPIRE EC project.
History
The European DataGrid project was first funded in 2001 for three years as one of the Framework Programmes for Research and Technological Development series.
Fabrizio Gagliardi was project manager of DataGrid and its budget was about 12 million euro, with the full project named "Research and Technological Development for an International Data Grid".
A major motivation behind the concept was the massive data requirements of CERN's LHC (Large Hadron Collider) project.
EGEE
On 1 April 2004 the Enabling Grids for E-Science in Europe (EGEE) project was funded by the European Commission through the Directorate-General for Information Society and Media, led by the information technology division of CERN.
This 24-month project of
|
https://en.wikipedia.org/wiki/Blowout%20preventer
|
A blowout preventer (BOP) (pronounced B-O-P) is a specialized valve or similar mechanical device, used to seal, control and monitor oil and gas wells to prevent blowouts, the uncontrolled release of crude oil or natural gas from a well. They are usually installed in stacks of other valves.
Blowout preventers were developed to cope with extreme erratic pressures and uncontrolled flow (formation kick) emanating from a well reservoir during drilling. Kicks can lead to a potentially catastrophic event known as a blowout. In addition to controlling the downhole (occurring in the drilled hole) pressure and the flow of oil and gas, blowout preventers are intended to prevent tubing (e.g. drill pipe and well casing), tools, and drilling fluid from being blown out of the wellbore (also known as bore hole, the hole leading to the reservoir) when a blowout threatens. Blowout preventers are critical to the safety of crew, rig (the equipment system used to drill a wellbore) and environment, and to the monitoring and maintenance of well integrity; thus blowout preventers are intended to provide fail-safety to the systems that include them.
The term BOP is used in oilfield vernacular to refer to blowout preventers. The abbreviated term preventer, usually prefaced by a type (e.g. ram preventer), is used to refer to a single blowout preventer unit. A blowout preventer may also simply be referred to by its type (e.g. ram). The terms blowout preventer, blowout preventer stack and blowout preventer system are commonly used interchangeably and in a general manner to describe an assembly of several stacked blowout preventers of varying type and function, as well as auxiliary components. A typical subsea deepwater blowout preventer system includes components such as electrical and hydraulic lines, control pods, hydraulic accumulators, test valve, kill and choke lines and valves, riser joint, hydraulic connectors, and a support frame.
Two categories of blowout preventer are most pr
|
https://en.wikipedia.org/wiki/Image%20analogy
|
An image analogy is a method of creating an image filter automatically from training data. In an image analogy process, the transformation between two images A and A' is "learned". Later, given a different image B, its "analogy" image B' can be generated based on the learned transformation.
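A highly simplified, single-scale sketch of this idea is shown below: for every pixel of B, find the pixel of A whose local neighbourhood looks most similar and copy the corresponding pixel of A' into B'. The published algorithm is multiscale and combines approximate nearest neighbours with a coherence term, so this is only a conceptual illustration; the neighbourhood radius and the toy "brightening" filter are arbitrary assumptions.

```python
# Single-scale, brute-force sketch of the image-analogy idea (illustrative only).
import numpy as np

def image_analogy(A, Ap, B, radius=2):
    """A, Ap, B: 2-D float arrays (grayscale). Returns an estimate of B'."""
    pad = radius
    A_pad = np.pad(A, pad, mode="edge")
    B_pad = np.pad(B, pad, mode="edge")

    # Flatten every (2r+1) x (2r+1) neighbourhood of A into a feature vector.
    feats, coords = [], []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            feats.append(A_pad[i:i + 2*pad + 1, j:j + 2*pad + 1].ravel())
            coords.append((i, j))
    feats = np.array(feats)

    Bp_out = np.zeros_like(B)
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            q = B_pad[i:i + 2*pad + 1, j:j + 2*pad + 1].ravel()
            best = np.argmin(((feats - q) ** 2).sum(axis=1))  # brute-force NN
            Bp_out[i, j] = Ap[coords[best]]                   # copy from A'
    return Bp_out

# Tiny usage example: "learn" a brightening filter from A -> A', apply it to B.
A = np.random.rand(16, 16)
Ap = np.clip(A + 0.3, 0, 1)      # the transformation to be learned: brighten
B = np.random.rand(16, 16)
Bp = image_analogy(A, Ap, B)
```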
The image analogy method has been used to simulate many types of image filters:
Toy filters, such as blurring or "embossing."
Texture synthesis from an example texture.
Super-resolution, inferring a high-resolution image from a low-resolution source.
Texture transfer, in which images are "texturized" with some arbitrary source texture.
Artistic filters, in which various drawing and painting styles, including oil, pastel, and pen-and-ink rendering, are synthesized based on scanned real-world examples.
Texture-by-numbers, in which realistic scenes, composed of a variety of textures, are created using a simple "painting" interface.
Image colorization, where color is automatically added to grayscale images.
External links
Image Analogies at the New York University Media Research Lab
Image processing
|
https://en.wikipedia.org/wiki/Message%20sequence%20chart
|
A message sequence chart (or MSC) is an interaction diagram from the SDL family standardized by the International Telecommunication Union.
The purpose of recommending MSC (Message Sequence Chart) is to provide a trace language for the specification and description of the communication behaviour of system components and their environment by means of message interchange. Since in MSCs the communication behaviour is presented in a very intuitive and transparent manner, particularly in the graphical representation, the MSC language is easy to learn, use and interpret. In connection with other languages it can be used to support methodologies for system specification, design, simulation, testing, and documentation.
History
The first version of the MSC standard was released on March 12, 1993.
The 1996 version added references, ordering and inlining expressions concepts, and introduced HMSC (High-level Message Sequence Charts), which are the way of expressing a sequence of MSCs.
The MSC 2000 version added object orientation, refined the use of data and time in diagrams, and added the concept of remote method calls.
The latest version was published in February 2011.
Symbols in MSC
The existing symbols are:
MSC head, lifeline, and end: a vertical line with a box at the top, and a box or a cross at the bottom.
Instance creation: horizontal dashed arrow to the newly created instance.
Message exchange: horizontal arrow.
Control flow: horizontal arrow with the 'call' prefix, dashed arrow for reply symbol, method and suspension symbols in between.
Timers: start, cancel, time out.
Time interval: relative and absolute with a dashed vertical arrow.
Conditions: usually used to represent a state of the underlying state machine.
Action: a box.
In-line expressions: alternative composition, sequential composition, exception, optional region, parallel composition, iteration (loop).
Reference: reference to another MSC.
Data concept: The user can use any data concept, i
|
https://en.wikipedia.org/wiki/Clean%20configuration
|
Clean configuration is the flight configuration of a fixed-wing aircraft when its external equipment is retracted to minimize drag, and thus maximize airspeed for a given power setting.
For most airplanes, clean configuration means simply that the wing flaps and landing gear are retracted, as these cause drag because they lack a streamlined shape. On more complex airplanes, it also means that other devices on the wings (such as slats, spoilers, and leading edge flaps) are retracted. Clean configuration is used for normal cruising at altitude, when extra lift for climbing is not needed.
In military aviation, a clean configuration is generally without external stores which reduce maximum performance both due to increased weight and even more so due to increased drag.
References
Aerospace engineering
|
https://en.wikipedia.org/wiki/Dynamic%20bandwidth%20allocation
|
Dynamic bandwidth allocation is a technique by which traffic bandwidth in a shared telecommunications medium can be allocated on demand and fairly between different users of that bandwidth. This is a form of bandwidth management, and is essentially the same thing as statistical multiplexing, where the sharing of a link adapts in some way to the instantaneous traffic demands of the nodes connected to the link.
Dynamic bandwidth allocation takes advantage of several attributes of shared networks:
all users are typically not connected to the network at one time
even when connected, users are not transmitting data (or voice or video) at all times
most traffic occurs in bursts—there are gaps between packets of information that can be filled with other user traffic
Different network protocols implement dynamic bandwidth allocation in different ways. These methods are typically defined in standards developed by standards bodies such as the ITU, IEEE, FSAN, or IETF. One example is defined in the ITU G.983 specification for passive optical network (PON).
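The following toy simulation (not the mechanism of any particular standard named above) illustrates why on-demand sharing works: because bursty users rarely peak at the same time, the aggregate demand on a shared link stays far below the sum of the users' peak rates. The user count, peak rate and burst probability are arbitrary assumptions.

```python
# Toy illustration of statistical multiplexing on a shared link.
import random

random.seed(1)
USERS, SLOTS = 20, 10_000
PEAK_RATE = 10       # bandwidth units used while a user is bursting (assumed)
BURST_PROB = 0.1     # probability a user is active in a given time slot (assumed)

peak_allocation = USERS * PEAK_RATE          # dedicated capacity per user
demand = [
    sum(PEAK_RATE for _ in range(USERS) if random.random() < BURST_PROB)
    for _ in range(SLOTS)
]
print("worst-case aggregate demand:", max(demand))
print("static peak allocation:     ", peak_allocation)
```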
See also
Statistical multiplexing
Channel access method
Dynamic channel allocation
Reservation ALOHA (R-ALOHA)
Telecommunications techniques
Computer networking
Radio resource management
|
https://en.wikipedia.org/wiki/Wrong%20Planet
|
Wrong Planet (sometimes referred to by its URL, wrongplanet.net) is an online community for "individuals (and parents / professionals of those) with Autism, Asperger's Syndrome, ADHD, PDDs, and other neurological differences". The site was started in 2004 by Dan Grover and Alex Plank and includes a chatroom, a forum, and articles describing how to deal with daily issues. Wrong Planet has been referenced by the mainstream U.S. media. Wrong Planet comes up in the special education curriculum of many universities in the United States. A page is dedicated to Wrong Planet and its founder in Exceptional Learners: Introduction to Special Education.
History
In 2006, Alex Plank was sued by the victims of a 19-year-old member of the site, William Freund, who shot two people (and himself) in Aliso Viejo, California, after openly telling others on the site that he planned to do so.
In 2007, a man who was accused of murdering his dermatologist posted on the site while eluding the police. Wrong Planet was covered in a Dateline NBC report on the incident.
In 2008, Wrong Planet became involved in autism self-advocacy, with the goal of furthering the rights of autistic individuals living in the United States. Alex Plank, representing the site, testified at the Department of Health and Human Services' Interagency Autism Coordinating Committee.
In 2010, Wrong Planet created a television show about autism called Autism Talk TV. Sponsors of this web series include Autism Speaks. The show is hosted by Alex Plank and Jack Robison, the son of author John Elder Robison. Neurodiversity advocates have accused Plank of betraying Wrong Planet's goal for autism acceptance by accepting money from Autism Speaks for this web series.
References
External links
Online support groups
Mental health support groups
Autism-related organizations in the United States
Internet forums
American health websites
Internet properties established in 2004
Companies based in Fairfax, Virginia
Disability
|
https://en.wikipedia.org/wiki/Tunnel%20warfare
|
Tunnel warfare involves war being conducted in tunnels and other underground cavities. It often includes the construction of underground facilities in order to attack or defend, and the use of existing natural caves and artificial underground facilities for military purposes. Tunnels can be used to undermine fortifications and slip into enemy territory for a surprise attack, while it can strengthen a defense by creating the possibility of ambush, counterattack and the ability to transfer troops from one portion of the battleground to another unseen and protected. Also, tunnels can serve as shelter from enemy attack.
Since antiquity, sappers have used mining against walled cities, fortresses, castles or other strongly held and fortified military positions. Defenders have dug counter-mines to attack miners or destroy a mine threatening their fortifications. Since tunnels are commonplace in urban areas, tunnel warfare is often a feature, though usually a minor one, of urban warfare. A good example of this was seen in the Syrian Civil War in Aleppo, where in March 2015 rebels planted a large amount of explosives under the Syrian Air Force Intelligence Directorate headquarters.
Tunnels are narrow and restrict fields of fire; thus, troops in a tunnel usually have only a few areas exposed to fire or sight at any one time. They can be part of an extensive labyrinth and have culs-de-sac and reduced lighting, typically creating a closed-in night combat environment.
Pre-Gunpowder
Antiquity
Ancient Greece
The Greek historian Polybius, in his Histories, gives a graphic account of mining and counter mining at the Roman siege of Ambracia:
The Aetolians then countered the Roman mine with smoke from burning feathers with charcoal.
Another extraordinary use of siege-mining in ancient Greece was during Philip V of Macedon's siege of the little town of Prinassos, according to Polybius, "the ground around the town were extremely rocky and hard, making any siege-mining virtually i
|
https://en.wikipedia.org/wiki/Frey%20curve
|
In mathematics, a Frey curve or Frey–Hellegouarch curve is the elliptic curve
$$y^2 = x(x - a^\ell)(x + b^\ell)$$
associated with a (hypothetical) solution of Fermat's equation
$$a^\ell + b^\ell = c^\ell.$$
The curve is named after Gerhard Frey and (sometimes) Yves Hellegouarch.
History
Yves Hellegouarch came up with the idea of associating solutions of Fermat's equation with a completely different mathematical object: an elliptic curve.
If $\ell$ is an odd prime and a, b, and c are positive integers such that
$$a^\ell + b^\ell = c^\ell,$$
then a corresponding Frey curve is an algebraic curve given by the equation
$$y^2 = x(x - a^\ell)(x + b^\ell)$$
or, equivalently (translating $x$ by $b^\ell$),
$$y^2 = x(x - b^\ell)(x - c^\ell).$$
This is a nonsingular algebraic curve of genus one defined over Q, and its projective completion is an elliptic curve over Q.
Gerhard Frey called attention to the unusual properties of the same curve as Hellegouarch, which came to be called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to Fermat's Last Theorem would create such a curve that would not be modular. The conjecture attracted considerable interest when Frey suggested that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem. However, his argument was not complete. In 1985, Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof of this. This showed that a proof of the semistable case of the Taniyama–Shimura conjecture would imply Fermat's Last Theorem. Serre did not provide a complete proof and what was missing became known as the epsilon conjecture or ε-conjecture. In the summer of 1986, Ribet (1990) proved the epsilon conjecture, thereby proving that the Taniyama–Shimura–Weil conjecture implies Fermat's Last Theorem.
References
Number theory
|
https://en.wikipedia.org/wiki/Python%20syntax%20and%20semantics
|
The syntax of the Python programming language is the set of rules that defines how a Python program will be written and interpreted (by both the runtime system and by human readers). The Python language has many similarities to Perl, C, and Java. However, there are some definite differences between the languages. It supports multiple programming paradigms, including structured, object-oriented, and functional programming, and boasts a dynamic type system and automatic memory management.
Python's syntax is simple and consistent, adhering to the principle that "There should be one— and preferably only one —obvious way to do it." The language incorporates built-in data types and structures, control flow mechanisms, first-class functions, and modules for better code reusability and organization. Python also uses English keywords where other languages use punctuation, contributing to its uncluttered visual layout.
The language provides robust error handling through exceptions, and includes a debugger in the standard library for efficient problem-solving. Python's syntax, designed for readability and ease of use, makes it a popular choice among beginners and professionals alike.
Design philosophy
Python was designed to be a highly readable language. It has a relatively uncluttered visual layout and uses English keywords frequently where other languages use punctuation. Python aims to be simple and consistent in the design of its syntax, encapsulated in the mantra "There should be one obvious way to do it", from the Zen of Python.
This mantra is deliberately opposed to the Perl and Ruby mantra, "there's more than one way to do it".
Keywords
Python has 35 keywords or reserved words; they cannot be used as identifiers.
and
as
assert
async
await
break
class
continue
def
del
elif
else
except
False
finally
for
from
global
if
import
in
is
lambda
None
nonlocal
not
or
pass
raise
return
True
try
while
with
yield
In addition, Python also has 3 soft keywords. Unlike regular hard keywords, soft keyword
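For reference, the standard-library keyword module exposes these lists at run time; the exact contents depend on the interpreter version (soft keywords are exposed via softkwlist from Python 3.9 onward). A minimal check:

```python
# Inspect Python's hard and soft keywords from a running interpreter.
import keyword

print(len(keyword.kwlist), keyword.kwlist)   # the hard keywords listed above
print(keyword.softkwlist)                    # soft keywords (Python 3.9+)
print(keyword.iskeyword("lambda"))           # True
print(keyword.issoftkeyword("match"))        # True on Python 3.10+
```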
|
https://en.wikipedia.org/wiki/Autocorrelator
|
A real time interferometric autocorrelator is an electronic tool used to examine the autocorrelation of, among other things, optical beam intensity and spectral components through examination of variable beam path differences. See Optical autocorrelation.
Description
In an interferometric autocorrelator, the input beam is split into a fixed path beam and a variable path beam using a standard beamsplitter. The fixed path beam travels a known and constant distance, whereas the variable path beam has its path length changed via rotating mirrors or other path changing mechanisms. At the end of the two paths, the beams are ideally parallel, but slightly separated, and using a correctly positioned lens, the two beams are crossed inside a second-harmonic generating (SHG) crystal. The autocorrelation term of the output is then passed into a photomultiplier tube (PMT) and measured.
Details
Considering the input beam as a single pulse with envelope , the constant fixed path distance as , and the variable path distance as a function of time , the input to the SHG can be viewed as
This comes from being the speed of light and being the time for the beam to travel the given path. In general, SHG produces output proportional to the square of the input, which in this case is
The first two terms are based only on the fixed and variable paths respectively, but the third term is based on the difference between them, as is evident in
The PMT used is assumed to be much slower than the envelope function , so it effectively integrates the incoming signal
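The formulas referred to in this section were lost in extraction; in conventional notation (the symbols below are assumptions, not necessarily those of the original article), with pulse envelope E(t), fixed path length x_f and variable path length x_v(t), the relations take roughly the form

\[ E_\mathrm{in}(t) = E\!\left(t - \frac{x_f}{c}\right) + E\!\left(t - \frac{x_v(t)}{c}\right), \qquad I_\mathrm{SHG}(t) \propto \left[ E\!\left(t - \frac{x_f}{c}\right) + E\!\left(t - \frac{x_v(t)}{c}\right) \right]^2, \]
\[ S(x_v) \propto \int E\!\left(t - \frac{x_f}{c}\right) E\!\left(t - \frac{x_v}{c}\right)\, \mathrm{d}t, \]

where the cross term S(x_v) is the autocorrelation signal that the slow PMT effectively integrates.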
Since the fixed-path and variable-path terms do not depend on each other, they constitute a background "noise" in examination of the autocorrelation term and would ideally be removed first. This can be accomplished by examining the momentum vectors
If the fixed and variable momentum vectors are assumed to be of approximately equal magnitude, the second harmonic momentum vector will fall geometrically between
|
https://en.wikipedia.org/wiki/Test%20bench
|
A test bench or testing workbench is an environment used to verify the correctness or soundness of a design or model.
The term has its roots in the testing of electronic devices, where an engineer would sit at a lab bench with tools for measurement and manipulation, such as oscilloscopes, multimeters, soldering irons, wire cutters, and so on, and manually verify the correctness of the device under test (DUT).
In the context of software or firmware or hardware engineering, a test bench is an environment in which the product under development is tested with the aid of software and hardware tools. The software may need to be modified slightly in some cases to work with the test bench but careful coding can ensure that the changes can be undone easily and without introducing bugs.
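As a concrete software illustration, a minimal test bench can be little more than a harness that instantiates the unit under test, drives it with known stimuli, and checks the responses. The sketch below uses Python's unittest module; the Adder class and its method are invented for the example and stand in for the real device or module under test.

import unittest

class Adder:                      # stand-in "device under test" (DUT)
    def add(self, a, b):
        return a + b

class AdderTestBench(unittest.TestCase):
    def setUp(self):
        self.dut = Adder()        # instantiate the DUT

    def test_known_stimuli(self):
        # drive known inputs and compare observed outputs against expected values
        self.assertEqual(self.dut.add(2, 3), 5)
        self.assertEqual(self.dut.add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()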
The term "test bench" is used in digital design with a hardware description language to describe the test code, which instantiates the DUT and runs the test.
An additional meaning of "test bench" is an isolated, controlled environment, very similar to the production environment but not visible to the general public, customers, etc. Making changes in such an environment is therefore safe, because end users are not involved.
See also
Sandbox (computer security)
Test harness
References
Electronic test equipment
Bench
|
https://en.wikipedia.org/wiki/Enantiostasis
|
Enantiostasis is the ability of an open system, especially a living organism, to maintain and conserve its metabolic and physiological functions in response to variations in an unstable environment. Estuarine organisms typically undergo enantiostasis in order to survive with constantly changing salt concentrations. The Australian NSW Board of Studies defines the term in its Biology syllabus as "the maintenance of metabolic and physiological functions in response to variations in the environment".
Enantiostasis is not a form of classical homeostasis, meaning "standing at a similar level," which focuses on maintenance of internal body conditions such as pH, oxygen levels, and ion concentrations. Rather than maintaining homeostatic (stable ideal) conditions, enantiostasis involves maintaining only functionality in spite of external fluctuations. However, it can be considered a type of homeostasis in a broader context because functions are kept relatively consistent. Organic compounds such as taurine have been shown to continue to function properly in environments that have been disturbed from an ideal state.
The term enantiostasis was proposed by Mangum and Towle. It is derived from the Greek (; opposite, opposing, over against) and (; to stand, posture).
Trehalose
Fruit flies (Drosophila) use the non-toxic sugar trehalose, found in the hemolymph of insects, to cope with changes in environmental conditions. Trehalose levels in the hemolymph can spike to as much as 2% in response to temperature changes, salinity, and osmotic and oxidative stress.
Yeast cells accumulate trehalose in order to withstand heat stress.
Estuarine Environments
Examples of organisms which undergo enantiostasis in an estuarine environment include:
The oxygen-binding effectiveness of hemocyanin in the blue crab Callinectes sapidus varies according to two factors: calcium ion concentration and hydrogen ion concentration. When these concentrations are varied in the same d
|
https://en.wikipedia.org/wiki/KGCW
|
KGCW (channel 26) is a television station licensed to Burlington, Iowa, United States, serving as the CW network outlet for the Quad Cities area. It is owned and operated by network majority owner Nexstar Media Group alongside regional CBS affiliate WHBF-TV (channel 4). Nexstar also provides certain services to Fox affiliate KLJB (channel 18) under a shared services agreement (SSA) with Mission Broadcasting. The stations share studios in the Telco Building on 18th Street in downtown Rock Island, Illinois, while KGCW's transmitter is located near Orion, Illinois.
Channel 26 began broadcasting as KJMH, a local station for Burlington, in January 1988 and became a Fox affiliate that July. It was owned by local businessman Steve Hoth, who named it for his wife, JoEllen M. Hoth. In May 1994, the station lost access to Fox programming after the network moved to strip KJMH of its affiliation. It then went off the air that November.
Grant Communications acquired the station and returned it to the air on March 1, 1996, rebroadcasting KLJB-TV in Davenport. In January 2001, channel 26 was split from channel 18 to become the affiliate of The WB in the Quad Cities, where it was seen on cable and a subchannel of KLJB, and its transmitter was relocated from Burlington to a site that offered increased coverage of the Quad Cities. The station became affiliated with The CW in 2006 when The WB and UPN merged. Nexstar acquired the Grant stations in 2014, coinciding with the separate purchase of WHBF-TV.
History
KJMH: The Hoth years
Burlington Broadcast Company, which was owned by local businessman Steve Hoth, obtained a construction permit for a new television station in Burlington in 1984. The station went unbuilt for three years. An intended November 1987 launch was scrapped because of equipment problems. KJMH—named for JoEllen M. Hoth, Steven's wife—began broadcasting on January 5, 1988. The station, airing a mix of independent station programming and (for a time) a local newscas
|
https://en.wikipedia.org/wiki/Job%20control%20%28Unix%29
|
In Unix and Unix-like operating systems, job control refers to control of jobs by a shell, especially interactively, where a "job" is a shell's representation for a process group. Basic job control features are the suspending, resuming, or terminating of all processes in the job/process group; more advanced features can be performed by sending signals to the job. Job control is of particular interest in Unix due to its multiprocessing, and should be distinguished from job control generally, which is frequently applied to sequential execution (batch processing).
Overview
When using Unix or Unix-like operating systems via a terminal (or terminal emulator), a user will initially have only a single process running: their interactive shell (which may or may not be a login shell). Most tasks (directory listing, editing files, etc.) can easily be accomplished by letting the program take control of the terminal and returning control to the shell when the program exits – formally, by attaching standard input and standard output to the shell, which reads from or writes to the terminal, and catching signals sent from the keyboard, like the termination signal resulting from pressing Ctrl+C.
However, sometimes the user will wish to carry out a task while using the terminal for another purpose. A task that is running but is not receiving input from the terminal is said to be running "in the background", while the single task that is receiving input from the terminal is "in the foreground". Job control is a facility developed to make this possible, by allowing the user to start processes in the background, send already running processes into the background, bring background processes into the foreground, and suspend or terminate processes.
The concept of a job maps the (shell) concept of a single shell command to the (operating system) concept of the possibly many processes that the command entails. Multi-process tasks come about because processes may create additional child processes,
|
https://en.wikipedia.org/wiki/Jog%20dial
|
A jog dial, jog wheel, shuttle dial, or shuttle wheel is a type of knob, ring, wheel, or dial which allows the user to shuttle or jog through audio or video media. It is commonly found on models of CD players which are made for disc jockeys, and on professional video equipment such as video tape recorders. More recently, they are found on handheld PDAs, and as the scroll wheel on computer mice. "Jog" refers to going at a very slow speed, whereas "shuttle" refers to a very fast speed.
There are two basic types of wheels. One type has no stops and can be spun the entire way around, because it is a rotary incremental encoder. This type depends on tracking the actual motion of the dial: the faster it spins forward or back, the faster it fast-forwards or rewinds. Once the dial stops moving, the media continues playing or remains paused at that point. Another type has stops on either side, and often has three or so speeds which depend on how far it is turned. Once the wheel is released, it springs back to the middle position and the media pauses or begins playing again.
If the device is set or designed to pause after the wheel is used, the audio is often stuttered, repeating a small section over and over again. This is usually done on DJ CD players, for the purpose of beatmatching, and is equivalent to an earlier turntablist DJ moving a phonograph record back and forth slightly to find the physical location of a starting beat within the groove. On the video, the pause is a freeze frame of the current video frame.
Sony Corporation holds a patent for a 5-way version of the jog dial. A 5-way jog dial allows up and down scrolling, right and left deflections, and a press-to-click action. Such a jog dial was a feature of the Sony CLIÉ PDA series and the Sony Ericsson P800, P900 and P910 smartphones. A 5-way jog dial has not been used by Sony or its subsidiaries since 2006.
See also
iPod click wheel
Dial box
References
External links
Audio engineering
Television t
|
https://en.wikipedia.org/wiki/Electromagnetic%20field%20solver
|
Electromagnetic field solvers (or sometimes just field solvers) are specialized programs that solve (a subset of) Maxwell's equations directly. They form a part of the field of electronic design automation, or EDA, and are commonly used in the design of integrated circuits and printed circuit boards. They are used when a solution from first principles or the highest accuracy is required.
Introduction
The extraction of parasitic circuit models is essential for various aspects of physical verification such as timing, signal integrity, substrate coupling, and power grid analysis. As circuit speeds and densities have increased, the need has grown to account accurately for parasitic effects for more extensive and more complicated interconnect structures. In addition, the electromagnetic complexity has grown as well, from resistance and capacitance to inductance, and now even full electromagnetic wave propagation. This increase in complexity has also grown for the analysis of passive devices such as integrated inductors. Electromagnetic behavior is governed by Maxwell's equations, and all parasitic extraction requires solving some form of Maxwell's equations. That form may be a simple analytic parallel plate capacitance equation or may involve a full numerical solution for a complex 3D geometry with wave propagation. In layout extraction, analytic formulas for simple or simplified geometry can be used where accuracy is less important than speed. Still, when the geometric configuration is not simple, and accuracy demands do not allow simplification, a numerical solution of the appropriate form of Maxwell's equations must be employed.
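As a sketch of the "simple analytic formula" end of that spectrum, the ideal parallel-plate capacitance C = ε0·εr·A/d (which neglects fringing fields) can be evaluated directly; the geometry and material values below are purely illustrative.

EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, in F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance, neglecting fringing fields."""
    return eps_r * EPSILON_0 * area_m2 / gap_m

# Example: 1 mm x 1 mm plates separated by 1 um of SiO2 (eps_r about 3.9)
print(parallel_plate_capacitance(1e-6, 1e-6, eps_r=3.9))   # roughly 3.45e-11 F (about 34.5 pF)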
The appropriate form of Maxwell's equations is typically solved by one of two classes of methods. The first uses a differential form of the governing equations and requires the discretization (meshing) of the entire domain in which the electromagnetic fields reside. Two of the most common approaches in this first class are the finite diffe
|
https://en.wikipedia.org/wiki/GDAL
|
The Geospatial Data Abstraction Library (GDAL) is a computer software library for reading and writing raster and vector geospatial data formats (e.g. shapefile), and is released under the permissive X/MIT style free software license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It may also be built with a variety of useful command line interface utilities for data translation and processing. Projections and transformations are supported by the PROJ library.
The related OGR library (OGR Simple Features Library), which is part of the GDAL source tree, provides a similar ability for simple features vector graphics data.
GDAL was developed mainly by Frank Warmerdam until the release of version 1.3.2, when maintenance was officially transferred to the GDAL/OGR Project Management Committee under the Open Source Geospatial Foundation.
GDAL/OGR is considered a major free software project for its "extensive capabilities of data exchange" and also in the commercial GIS community due to its widespread use and comprehensive set of functionalities.
Software using GDAL/OGR
Several software programs use the GDAL/OGR libraries to allow them to read and write multiple GIS formats. Such programs include:
ArcGIS – Uses GDAL for custom raster formats
Avenza MAPublisher - GIS and mapping tools for Adobe Illustrator. Uses GDAL for coordinate system transformation, format reading & writing, geometry operations, & unit conversion.
Avenza Geographic Imager - Spatial imaging tools for Adobe Photoshop. Uses GDAL for coordinate system transformation, format reading & writing, & unit conversion.
Avenza Maps - iOS & Android mobile mapping application. Uses GDAL to read metadata information for geospatial maps / data to transform them to WGS84 for offline navigation.
Biosphere3D – Open source landscape scenery globe
Biotop Invent
Cadwork
ENVI – Remote Sensing software
ERDAS APOLLO - Image
|
https://en.wikipedia.org/wiki/Martingale%20pricing
|
Martingale pricing is a pricing approach based on the notions of martingale and risk neutrality. The martingale pricing approach is a cornerstone of modern quantitative finance and can be applied to a variety of derivatives contracts, e.g. options, futures, interest rate derivatives, credit derivatives, etc.
In contrast to the PDE approach to pricing, martingale pricing formulae are in the form of expectations which can be efficiently solved numerically using a Monte Carlo approach. As such, martingale pricing is preferred when valuing high-dimensional contracts such as a basket of options. On the other hand, valuing American-style contracts is troublesome and requires discretizing the problem (making it like a Bermudan option); it was only in 2001 that F. A. Longstaff and E. S. Schwartz developed a practical Monte Carlo method for pricing American options.
Measure theory representation
Suppose the state of the market can be represented by a filtered probability space. Let a stochastic price process be defined on this space. One may price a derivative security, under the philosophy of no arbitrage, as
Where is the risk-neutral measure.
is an -measurable (risk-free, possibly stochastic) interest rate process.
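In conventional notation (the article's original symbols were lost in extraction), the pricing formula described above is usually written as

\[ V_t = \mathbb{E}^{\mathbb{Q}}\!\left[ e^{-\int_t^T r_s\, \mathrm{d}s}\, H \;\middle|\; \mathcal{F}_t \right], \]

where H is the payoff of the derivative at maturity T, Q is the risk-neutral measure, r is the (possibly stochastic) risk-free rate, and F_t is the information available at time t.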
This is accomplished through almost sure replication of the derivative's time payoff using only underlying securities, and the risk-free money market (MMA). These underlyings have prices that are observable and known.
Specifically, one constructs a portfolio process in continuous time, where he holds shares of the underlying stock at each time , and cash earning the risk-free rate . The portfolio obeys the stochastic differential equation
One will then attempt to apply Girsanov's theorem by first computing the Radon–Nikodym derivative with respect to the observed market probability distribution. This ensures that the discounted replicating portfolio process is a martingale under risk-neutral conditions.
If such a process can be well-defined and cons
|
https://en.wikipedia.org/wiki/Direct%20coupling
|
In electronics, direct coupling or DC coupling (also called conductive coupling and galvanic coupling) is the transfer of electrical energy by means of physical contact via a conductive medium, in contrast to inductive coupling and capacitive coupling. It is a way of interconnecting two circuits such that, in addition to transferring the AC signal (or information), the first circuit also provides DC bias to the second. Thus, DC blocking capacitors are not used or needed to interconnect the circuits. Conductive coupling passes the full spectrum of frequencies including direct current.
Such coupling may be achieved by a wire, resistor, or common terminal, such as a binding post or metallic bonding.
DC bias
The provision of DC bias only occurs in a group of circuits that forms a single unit, such as an op-amp. Here the internal units or portions of the op-amp (like the input stage, voltage gain stage, and output stage) will be direct coupled and will also be used to set up the bias conditions inside the op-amp (the input stage will also supply the input bias to the voltage gain stage, for example). However, when two op-amps are directly coupled the first op-amp will supply any bias to the next - any DC at its output will form the input for the next. The resulting output of the second op-amp now represents an offset error if it is not the intended one.
Uses
This technique is used by default in circuits like IC op-amps, since large coupling capacitors cannot be fabricated on-chip. That said, some discrete circuits (such as power amplifiers) also employ direct coupling to cut cost and improve low frequency performance.
Offset error
One advantage or disadvantage (depending on application) of direct coupling is that any DC at the input appears as a valid signal to the system, and so it will be transferred from the input to the output (or between two directly coupled circuits). If this is not a desired result, then the term used for the output signal is output offset er
|
https://en.wikipedia.org/wiki/Lucy%E2%80%93Hook%20coaddition%20method
|
The Lucy–Hook coaddition method is an image processing technique for combining sub-stepped astronomical image data onto a finer grid. The method allows the option of resolution and contrast enhancement or the choice of a conservative, re-convolved, output.
Tests with very deep Hubble Space Telescope Wide Field and Planetary Camera 2 (WFPC2) imaging data of excellent quality show that these methods can be very effective and allow fine-scale features to be studied better than on the unprocessed images. The Lucy–Hook coaddition method is an extension of the standard Richardson–Lucy deconvolution iterative restoration method.
For many purposes it may be more convenient to combine dithered datasets using the Drizzle method.
References
External links
ST-ECF page about Lucy–Hook coaddition method
Astronomical imaging
Image processing
|
https://en.wikipedia.org/wiki/Drizzle%20%28image%20processing%29
|
Drizzle (or DRIZZLE) is a digital image processing method for the linear reconstruction of undersampled images. The method is normally used for the combination of astronomical images and was originally developed for the Hubble Deep Field observations made by the Hubble Space Telescope. The algorithm, known as variable-pixel linear reconstruction, or informally as "Drizzle", preserves photometry and resolution, can weight input images according to the statistical significance of each pixel, and removes the effects of geometric distortion on both image shape and photometry. In addition, it is possible to use drizzling to combine dithered images in the presence of cosmic rays.
Drizzling is commonly used by amateur astrophotographers, particularly for processing large amounts of planetary image data (typically several thousand frames). In astrophotography applications, drizzling can also be used to recover higher-resolution stills from terrestrial video recordings. According to astrophotographer David Ratledge, "Results using the DRIZZLE command can be spectacular with amateur instruments."
Overview
Camera optics generally introduce geometric distortion of images.
Undersampled images are, for example, common in astronomy because instrument designers are frequently forced to choose between properly sampling a small field of view and undersampling a larger field. This is a particular problem for the Hubble Space Telescope (HST), where the corrected optics may provide superb resolution, but the detectors are only able to take full advantage of the full resolving power of the telescope over a limited field of view. Fortunately, much of the information lost to undersampling can be restored. The most commonly used of these techniques are shift-and-add and interlacing.
Drizzle was originally developed to combine the dithered images of the Hubble Deep Field North and has since been widely used for the combination of dithered images from both HST's cameras and those on othe
|
https://en.wikipedia.org/wiki/Information%20algebra
|
The term "information algebra" refers to mathematical techniques of information processing. Classical information theory goes back to Claude Shannon. It is a theory of information transmission, looking at communication and storage. However, it has not been considered so far that information comes from different sources and that it is therefore usually combined. It has furthermore been neglected in classical information theory that one wants to extract those parts out of a piece of information that are relevant to specific questions.
A mathematical phrasing of these operations leads to an algebra of information, describing basic modes of information processing. Such an algebra involves several formalisms of computer science, which seem to be different on the surface: relational databases, multiple systems of formal logic or numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing.
Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras are two-sorted algebras , where is a semigroup, representing combination or aggregation of information, is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question, and a mixed operation representing focusing or extraction of information.
Information and its operations
More precisely, in the two-sorted algebra , the following operations are defined
Additionally, the usual lattice operations (meet and join) are defined on the lattice of domains.
Axioms and definition
The axioms of the two-sorted algebra , in addition to the axioms of the lattice :
A two-sorted algebra satisfying these axioms is called an Information Algebra.
Order of information
A partial order of information can be
|
https://en.wikipedia.org/wiki/Animal%20geography
|
Animal geography is a subfield of the nature–society/human–environment branch of geography as well as a part of the larger, interdisciplinary umbrella of human–animal studies (HAS). Animal geography is defined as the study of "the complex entanglings of human–animal relations with space, place, location, environment and landscape" or "the study of where, when, why and how nonhuman animals intersect with human societies". Recent work advances these perspectives to argue about an ecology of relations in which humans and animals are enmeshed, taking seriously the lived spaces of animals themselves and their sentient interactions with not just human but other nonhuman bodies as well.
The Animal Geography Specialty Group of the Association of American Geographers was founded in 2009 by Monica Ogra and Julie Urbanik, and the Animal Geography Research Network was founded in 2011 by Daniel Allen.
Overview
First wave
The first wave of animal geography, known as zoogeography, came to prominence as a geographic subfield from the late 1800s through the early part of the 20th century. During this time the study of animals was seen as a key part of the discipline and the goal was "the scientific study of animal life with reference to the distribution of animals on the earth and the mutual influence of environment and animals upon each other". The animals that were the focus of studies were almost exclusively wild animals and zoogeographers were building on the new theories of evolution and natural selection. They mapped the evolution and movement of species across time and space and also sought to understand how animals adapted to different ecosystems. "The ambition was to establish general laws of how animals arranged themselves across the earth's surface or, at smaller scales, to establish patterns of spatial co-variation between animals and other environmental factors." Key works include Newbigin's Animal Geography, Bartholomew, Clarke, and Grimshaw's Atlas of Zoogeography
|
https://en.wikipedia.org/wiki/Capacitance%20meter
|
A capacitance meter is a piece of electronic test equipment used to measure capacitance, mainly of discrete capacitors. Depending on the sophistication of the meter, it may display the capacitance only, or it may also measure a number of other parameters such as leakage, equivalent series resistance (ESR), and inductance. For most purposes and in most cases the capacitor must be disconnected from circuit; ESR can usually be measured in circuit.
Simple checks without a true capacitance meter
Some checks can be made without a specialised instrument, particularly on aluminium electrolytic capacitors, which tend to have high capacitance and to be prone to leakage. A multimeter in a resistance range can detect a short-circuited capacitor (very low resistance) or one with very high leakage (high resistance, but lower than it should be; an ideal capacitor has infinite DC resistance). A crude idea of the capacitance can be derived with an analog multimeter in a high resistance range by observing the needle when first connected; current will flow to charge the capacitor and the needle will "kick" from infinite indicated resistance to a relatively low value, and then drift up to infinity. The amplitude of the kick is an indication of capacitance. Interpreting results requires some experience, or comparison with a good capacitor, and depends upon the particular meter and range used.
Simple and non-bridge meters
Many DVMs (digital volt meters) have a capacitance-measuring function. These usually operate by charging and discharging the capacitor under test with a known current and measuring the rate of rise of the resulting voltage; the slower the rate of rise, the larger the capacitance. DVMs can usually measure capacitance from nanofarads to a few hundred microfarads, but wider ranges are not unusual.
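The underlying relation for the charging method is I = C·dV/dt, so the instrument can infer C = I·Δt/ΔV from the measured rise. A minimal sketch of that arithmetic (the numeric values are purely illustrative):

def capacitance_from_ramp(current_a, delta_v, delta_t):
    """C = I * dt / dV for a capacitor charged by a constant current."""
    return current_a * delta_t / delta_v

# A constant 10 uA charging current that raises the voltage by 1 V in 10 ms
# corresponds to C = 10e-6 * 10e-3 / 1.0 = 100 nF.
print(capacitance_from_ramp(10e-6, 1.0, 10e-3))   # 1e-07 (farads)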
It is also possible to measure capacitance by passing a known high-frequency alternating current through the device under test and measuring the resulting voltage acr
|
https://en.wikipedia.org/wiki/Trimmer%20%28construction%29
|
In light-frame construction, a trimmer is a timber or metal beam (joist) used to create an opening around a stairwell, skylight, chimney, and the like. Trimmers are installed parallel to the primary floor or ceiling joists and support headers, which run perpendicular to the primary joists.
It can also refer to a jack stud that supports a header above a window or door opening.
Traditionally, a stud which was less than full length was sometimes referred to as a cripple.
References
Structural system
|
https://en.wikipedia.org/wiki/TransPAC2
|
The TransPAC2 Network was a US National Science Foundation-funded high-speed international computer network circuit connecting national research and education networks in the Asia-Pacific region to those in the US. It was the continuation of the TransPAC project which ran from 2000 through 2005.
History
The first TransPAC effort started in 1998.
The original link of 35 Mbit/sec connected to the Very high-speed Backbone Network Service (vBNS) near Chicago.
TransPAC2's Network Operations Center was located in the Informatics and Communications Technology Complex in Indianapolis, Indiana on the Indiana University – Purdue University Indianapolis (IUPUI) campus. The NOC operated 24 hours a day and 7 days a week starting in October 1998.
In May 1999 link speed was expanded to 73 Mbit/s with funding from the Japan Science and Technology Corporation.
In June 2000 link speed increased to 100 Mbit/s, and in September to 155 Mbit/s.
In May 2001, equipment from Cisco Systems replaced that from Juniper Networks in Chicago.
The principal investigator for this first phase was Michael McRobbie.
The NSF awarded a follow-on grant on December 20, 2004, with principal investigators James Williams and Douglas Van Houweling.
TransPAC2 was part of the NSF's International Research Network Connections (IRNC) program.
In April 2005, a single OC-192 circuit was provided by KDDI America.
It connected to the Asia Pacific Advanced Network in Tokyo and to a TransPAC2-managed router in Los Angeles. The Los Angeles router, using the TransPAC2 Autonomous System number 22388, maintained a 10 Gigabit Ethernet connection to the CENIC managed Pacific Wave Ethernet switch. This connection enabled direct peering with the Internet2 Abilene Network, National LambdaRail and other high speed networks on the US West Coast.
Use of the TransPAC2 network was limited to the networks carried by other research and education network aggregators. In the Pacific region, this list includes APAN, TEIN2, and AAR
|
https://en.wikipedia.org/wiki/Test%20probe
|
A test probe is a physical device used to connect electronic test equipment to a device under test (DUT). Test probes range from very simple, robust devices to complex probes that are sophisticated, expensive, and fragile. Specific types include test prods, oscilloscope probes and current probes. A test probe is often supplied as a test lead, which includes the probe, cable and terminating connector.
Voltage
Voltage probes are used to measure voltages present on the DUT. To achieve high accuracy, the test instrument and its probe must not significantly affect the voltage being measured. This is accomplished by ensuring that the combination of instrument and probe exhibit a sufficiently high impedance that will not load the DUT. For AC measurements, the reactive component of impedance may be more important than the resistive.
Simple test leads
A typical voltmeter probe consists of a single wire test lead that has on one end a connector that fits the voltmeter and on the other end a rigid, tubular plastic section that comprises both a handle and probe body. The handle allows a person to hold and guide the probe without influencing the measurement (by becoming part of the electric circuit) or being exposed to dangerous voltages that might cause electric shock. Within the probe body, the wire is connected to a rigid, pointed metal tip that contacts the DUT. Some probes allow an alligator clip to be attached to the tip, thus enabling the probe to be attached to the DUT so that it need not be held in place.
Test leads are usually made with finely stranded wire to keep them flexible, of wire gauges sufficient to conduct a few amperes of electric current. The insulation is chosen to be both flexible and have a breakdown voltage higher than the voltmeter's maximum input voltage. The many fine strands and the thick insulation make the wire thicker than ordinary hookup wire.
Two probes are used together to measure voltage, current, and two-terminal components such as r
|
https://en.wikipedia.org/wiki/RapLeaf
|
RapLeaf was a US-based marketing data and software company, which was acquired by email data provider TowerData in 2013.
Company
RapLeaf was founded in San Francisco by Auren Hoffman and Manish Shah in March 2005.
In May 2006 the Founders Fund led a seed round of about $1 million, including angel investors such as Peter Thiel and Ron Conway.
In June 2007 a second round included Founders Fund, Rembrandt Venture Partners and included Conway.
The company's first product was a meta-reputation system that allows users to create reviews and ratings of consumer transactions, which they then contribute to multiple e-commerce websites.
On January 26, 2007, Rapleaf released Upscoop, a service that allowed users to search for and manage their contacts by email address across multiple social networking sites.
In 2011, Rapleaf created a data onboarding division named LiveRamp, which later spun out into an independent company which was acquired by Acxiom in 2014 for $310 million.
In 2012, Rapleaf began selling segmented data tied to email addresses for marketers to personalize email communications. Around September 2012 the company moved its headquarters from San Francisco to Chicago, and Phil Davis became chief executive, replacing Hoffman.
Rapleaf was acquired by TowerData in 2013.
Controversy and backlash
On May 15, 2006, eBay removed a number of auction listings where the seller had included links to Rapleaf, claiming they were in violation of its terms of use.
In late August 2007, Upscoop began e-mailing the entire contact lists that users provided when they logged in. This caused some criticism, and the company later apologized for doing so.
On July 10, 2008, Rapleaf changed its interface so that it no longer allows users to search people by email addresses. Instead, the service only allows a registered user to view their own reputation and the websites (social and business networking) to which their own e-mail address is registered. There was an immediate n
|
https://en.wikipedia.org/wiki/Diagonally%20dominant%20matrix
|
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if
where aij denotes the entry in the ith row and jth column.
This definition uses a weak inequality, and is therefore sometimes called weak diagonal dominance. If a strict inequality (>) is used, this is called strict diagonal dominance. The unqualified term diagonal dominance can mean both strict and weak diagonal dominance, depending on the context.
Variations
The definition in the first paragraph sums entries across each row. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down each column, this is called column diagonal dominance.
Any strictly diagonally dominant matrix is trivially a weakly chained diagonally dominant matrix. Weakly chained diagonally dominant matrices are nonsingular and include the family of irreducibly diagonally dominant matrices. These are irreducible matrices that are weakly diagonally dominant, but strictly diagonally dominant in at least one row.
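A minimal sketch in plain Python of the row-wise check described above (the example matrix is illustrative, not taken from the original article):

def is_row_diagonally_dominant(matrix, strict=False):
    """Weak (>=) or strict (>) row diagonal dominance of a square matrix."""
    for i, row in enumerate(matrix):
        diag = abs(row[i])
        off_diag = sum(abs(v) for j, v in enumerate(row) if j != i)
        if strict:
            if diag <= off_diag:
                return False
        else:
            if diag < off_diag:
                return False
    return True

A = [[3, -2, 1],
     [1, -3, 2],
     [-1, 2, 4]]
print(is_row_diagonally_dominant(A))                 # True  (weakly diagonally dominant)
print(is_row_diagonally_dominant(A, strict=True))    # False (rows 1 and 2 are only weakly dominant)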
Examples
The matrix
is diagonally dominant because
since
since
since .
The matrix
is not diagonally dominant because
since
since
since .
That is, the first and third rows fail to satisfy the diagonal dominance condition.
The matrix
is strictly diagonally dominant because
since
since
since .
Applications and properties
The following results can be proved trivially from Gershgorin's circle theorem. Gershgorin's circle theorem itself has a very short proof.
A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix) is non-singular.
A Hermitian diagonally dominant matrix with real non-negative diagonal entries is positive semidefinite. This follows from
|
https://en.wikipedia.org/wiki/Calabi%20conjecture
|
In the mathematical field of differential geometry, the Calabi conjecture was a conjecture about the existence of certain kinds of Riemannian metrics on certain complex manifolds, made by Eugenio Calabi. It was proved by Shing-Tung Yau, who received the Fields Medal and Oswald Veblen Prize in part for his proof. His work, principally an analysis of an elliptic partial differential equation known as the complex Monge–Ampère equation, was an influential early result in the field of geometric analysis.
More precisely, Calabi's conjecture asserts the resolution of the prescribed Ricci curvature problem within the setting of Kähler metrics on closed complex manifolds. According to Chern–Weil theory, the Ricci form of any such metric is a closed differential 2-form which represents the first Chern class. Calabi conjectured that for any such differential form , there is exactly one Kähler metric in each Kähler class whose Ricci form is . (Some compact complex manifolds admit no Kähler classes, in which case the conjecture is vacuous.)
In the special case that the first Chern class vanishes, this implies that each Kähler class contains exactly one Ricci-flat metric. These are often called Calabi–Yau manifolds. However, the term is often used in slightly different ways by various authors — for example, some uses may refer to the complex manifold while others might refer to a complex manifold together with a particular Ricci-flat Kähler metric.
This special case can equivalently be regarded as the complete existence and uniqueness theory for Kähler–Einstein metrics of zero scalar curvature on compact complex manifolds. The case of nonzero scalar curvature does not follow as a special case of Calabi's conjecture, since the 'right-hand side' of the Kähler–Einstein problem depends on the 'unknown' metric, thereby placing the Kähler–Einstein problem outside the domain of prescribing Ricci curvature. However, Yau's analysis of the complex Monge–Ampère equation in resolving the Calabi conjecture was suffic
|
https://en.wikipedia.org/wiki/VoIP%20VPN
|
A VoIP VPN combines voice over IP and virtual private network technologies to offer a method for delivering secure voice. Because VoIP transmits digitized voice as a stream of data, the VoIP VPN solution accomplishes voice encryption quite simply, applying standard data-encryption mechanisms inherently available in the collection of protocols used to implement a VPN.
The VoIP gateway-router first converts the analog voice signal to digital form, encapsulates the digitized voice within IP packets, then encrypts the digitized voice using IPsec, and finally routes the encrypted voice packets securely through a VPN tunnel. At the remote site, another VoIP router decodes the voice and converts the digital voice to an analog signal for delivery to the phone.
A VoIP VPN can also run within an IP-in-IP tunnel or over SSL-based OpenVPN. There is no encryption in the former case, but the traffic overhead is significantly lower than with an IPsec tunnel. The advantage of OpenVPN tunneling is that it can run on a dynamic IP address and may provide up to 512-bit SSL encryption.
Advantages
Security is not the only reason to pass Voice over IP through a virtual private network, however. Session Initiation Protocol, a commonly used VoIP protocol is notoriously difficult to pass through a firewall because it uses random port numbers to establish connections. A VPN is also a workaround to avoid a firewall issue when configuring remote VoIP clients.
However, newer VoIP standards such as STUN, ICE, and TURN natively eliminate some of the NAT traversal problems of VoIP.
Installing an extension on a VPN is a simple means to obtain an off-premises extension (OPX), a function which in conventional landline telephony required a leased line from the private branch exchange to the remote site. A worker at a remote location could therefore appear virtually to be at the company's main office, with full internal access to telephone and network.
Disadvantages
The protocol overhead caused by the encapsulation of VoIP protoc
|
https://en.wikipedia.org/wiki/Defective%20by%20Design
|
Defective by Design (DBD) is a grassroots anti-digital rights management (DRM) initiative by the Free Software Foundation (FSF) and CivicActions. Launched in 2006, DBD believes that DRM (which they call "digital restrictions management") makes technology deliberately defective, negatively affects digital freedoms, and is "a threat to innovation in media, the privacy of readers, and freedom for computer users." The initiative regularly campaigns against the use of DRM by the media industry and software industry to increase awareness of the anti-DRM movement and pressure industries into no longer using DRM. They are known for their use of hazmat suits in their demonstrations.
DBD represents one of the first efforts of the FSF to find common cause with mainstream social activists and encourage free software advocates to become socially involved. As of late 2006, the campaign was claiming over 12,000 registered members.
Position
According to their website, DBD believes that DRM is used to control how consumers use the technology they are meant to own, as well as who can produce and distribute media—which the DBD equates to book burning—while conducting mass surveillance of media consumption habits. They argue that DRM "is designed to take away every possible use of digital media, regardless of legal rights, and sell some of these functionalities back as severely limited services."
DBD argues that DRM does not help, but rather hurts authors, publishers, studios, labels, and similar media producers and suppliers—especially those in independent media—by forcing them to work with distribution services that are difficult to switch away from. They also argue that DRM is not meant to prevent copyright infringement as claimed by proponents and is in fact completely separate from copyright, as if DRM really was used for those purposes, "every distribution method for that particular piece of media would have to be distributed by an uncrackable DRM-encumbered distribution plat
|
https://en.wikipedia.org/wiki/Nike%2BiPod
|
The Nike+iPod Sports Kit is an activity tracker device, developed by Nike, Inc., which measures and records the distance and pace of a walk or run. The Nike+iPod consists of a small transmitter device attached to or embedded in a shoe, which communicates with either the Nike+ Sportband or a receiver plugged into an iPod Nano. It can also work directly with a 2nd Generation iPod Touch (or higher), iPhone 3GS, iPhone 4, iPhone 4S, iPhone 5,
The Nike+iPod was announced on May 23, 2006. On September 7, 2010, Nike released the Nike+ Running App (originally called Nike+ GPS) on the App Store, which used a tracking engine powered by MotionX that does not require the separate shoe sensor or pedometer. This application works using the accelerometer and GPS of the iPhone and the accelerometer of the iPod Touch, which does not have a GPS chip. Nike+Running is compatible with the iPhone 6 and iPhone 6 Plus down to iPhone 3GS and iPod touch. On June 21, 2012, Nike released Nike+ Running App for Android. The current app is compatible with all Android phones running 4.0.3 and up.
Overview
The sensor and iPod kit were revealed on May 20, 2006. The kit stores information such as the elapsed time of the workout, the distance traveled, pace, and calories burned by the individual. Nike+ was a collaboration between Nike and Apple; the platform consisted of an iPod, a wireless chip, Nike shoes that accepted the wireless chip, an iTunes membership, and a Nike+ online community. iPods using Nike iPod require a sensor and remote.
The next upgraded product was the Sportband kit, which was announced in April 2008. The kit allows users to store run information without the iPod Nano. The Sportband consists of two parts: a rubber holding strap which is worn around the wrist, and a receiver which resembles a USB key-disk. The receiver displays information comparable to that of the iPod kit on the built-in display. After a run, the receiver can be plugged straight into a USB port and the softwar
|
https://en.wikipedia.org/wiki/VT1.5
|
VT1.5 is a type of virtual tributary in SONET.
SONET bandwidth is defined in multiples of an OC-1/STS-1, each of which can transport up to 51.84 Mbit/s. However, it is frequently desirable to address much smaller portions of bandwidth. To meet this need, sub-STS-1 facilities called Virtual Tributaries have been defined. In North America and Japan, the VT1.5 is the most common virtual tributary because it can carry 1.544 Mbit/s; just enough room for a DS1/T1 signal. In Europe, the VT2 (with a data rate of 2.304 Mbit/s) is used to transport E1s.
Some SONET manufacturers offer products that can switch at the VT1.5 level. Such equipment is able to re-arrange the data payloads so that inbound VT1.5s are placed into a completely different set of outbound STS's than the ones they arrived in. Among other things, this allows the bandwidth usage to be optimized and facilitates a cleaner network design.
Four types of VT are defined in SONET:
VT 1.5 (DS-1: 1.544 Mbit/s)
VT 2 (E-1: 2.048 Mbit/s)
VT 3 (DS-1C: 3.152 Mbit/s)
VT 6 (DS-2: 6.312 Mbit/s)
7 VT groups (VTG) per STS-1.
Four DS-1s map into one VTG.
One STS-1 frame can carry 28 DS-1s
A DS-3 maps directly to an STS-1; VTs are not needed.
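The mapping arithmetic above can be spelled out as a quick check (a trivial sketch; the payload figures are the nominal DS-1 and STS-1 rates):

ds1_rate_mbps = 1.544     # one DS-1 per VT1.5
ds1_per_vtg = 4           # four VT1.5s (DS-1s) per VT group
vtg_per_sts1 = 7          # seven VT groups per STS-1

print(ds1_per_vtg * vtg_per_sts1)                   # 28 DS-1s per STS-1 frame
print(ds1_per_vtg * vtg_per_sts1 * ds1_rate_mbps)   # about 43.2 Mbit/s of DS-1 payload within a 51.84 Mbit/s STS-1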
External links
"Understanding Sonet VTs: A Tutorial"
Network protocols
Synchronous optical networking
|
https://en.wikipedia.org/wiki/Simplex%20noise
|
Simplex noise is the result of an n-dimensional noise function comparable to Perlin noise ("classic" noise) but with fewer directional artifacts and, in higher dimensions, a lower computational overhead. Ken Perlin designed the algorithm in 2001 to address the limitations of his classic noise function, especially in higher dimensions.
The advantages of simplex noise over Perlin noise:
Simplex noise has lower computational complexity and requires fewer multiplications.
Simplex noise scales to higher dimensions (4D, 5D) with much less computational cost: the complexity is O(n^2) for n dimensions instead of the O(2^n) of classic noise.
Simplex noise has no noticeable directional artifacts (is visually isotropic), though noise generated for different dimensions is visually distinct (e.g. 2D noise has a different look than 2D slices of 3D noise, and it looks increasingly worse for higher dimensions).
Simplex noise has a well-defined and continuous gradient (almost) everywhere that can be computed quite cheaply.
Simplex noise is easy to implement in hardware.
Whereas classical noise interpolates between the gradients at the surrounding hypergrid end points (i.e., northeast, northwest, southeast and southwest in 2D), simplex noise divides the space into simplices (i.e., n-dimensional triangles). This reduces the number of data points. While a hypercube in n dimensions has 2^n corners, a simplex in n dimensions has only n + 1 corners. The triangles are equilateral in 2D, but in higher dimensions the simplices are only approximately regular. For example, the tiling in the 3D case is an orientation of the tetragonal disphenoid honeycomb.
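The difference in the number of gradient evaluations per noise lookup (2^n hypercube corners versus n + 1 simplex corners) can be tabulated with a short sketch:

for n in range(1, 7):
    hypercube_corners = 2 ** n   # classic (Perlin) noise evaluates one gradient per hypercube corner
    simplex_corners = n + 1      # simplex noise evaluates one gradient per simplex corner
    print(n, hypercube_corners, simplex_corners)
# at n = 5, that is 32 corner gradients for classic noise versus 6 for simplex noise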
Simplex noise is useful for computer graphics applications, where noise is usually computed over 2, 3, 4, or possibly 5 dimensions. For higher dimensions, n-spheres around n-simplex corners are not densely enough packed, reducing the support of the function and making it zero in large portions of space.
Algorithm detail
Simplex noise is
|
https://en.wikipedia.org/wiki/Cauchy%27s%20functional%20equation
|
Cauchy's functional equation is the functional equation:
f(x + y) = f(x) + f(y).
A function that solves this equation is called an additive function. Over the rational numbers, it can be shown using elementary algebra that there is a single family of solutions, namely f(x) = cx for any rational constant c. Over the real numbers, the family of linear maps f(x) = cx, now with c an arbitrary real constant, is likewise a family of solutions; however there can exist other solutions not of this form that are extremely complicated. However, any of a number of regularity conditions, some of them quite weak, will preclude the existence of these pathological solutions. For example, an additive function f is linear if:
f is continuous (Cauchy, 1821). In fact, it suffices for f to be continuous at a single point (Darboux, 1875).
f is monotonic on any interval.
f is bounded on any interval.
f is Lebesgue measurable.
On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions.
The fifth problem on Hilbert's list is a generalisation of this equation. Functions where there exists a real number such that are known as Cauchy-Hamel functions and are used in Dehn-Hadwiger invariants which are used in the extension of Hilbert's third problem from 3D to higher dimensions.
This equation is sometimes referred to as Cauchy's additive functional equation to distinguish it from Cauchy's exponential functional equation f(x + y) = f(x)f(y), Cauchy's logarithmic functional equation f(xy) = f(x) + f(y), and Cauchy's multiplicative functional equation f(xy) = f(x)f(y).
Solutions over the rational numbers
A simple argument, involving only elementary algebra, demonstrates that the set of additive maps f : V → W, where V and W are vector spaces over an extension field of Q, is identical to the set of Q-linear maps from V to W.
Theorem: Let f : V → W be an additive function. Then f is Q-linear.
Proof: We want to pr
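The proof text above is truncated; the standard argument, sketched here in conventional notation, shows that additivity forces Q-homogeneity:

\[ f(nx) = n f(x) \ \text{for every positive integer } n \ \text{(by induction)}, \qquad f(0) = f(0+0) = 2 f(0) \implies f(0) = 0, \]
\[ 0 = f(x + (-x)) = f(x) + f(-x) \implies f(-x) = -f(x), \]
\[ n\, f\!\left(\tfrac{m}{n} x\right) = f\!\left(n \cdot \tfrac{m}{n} x\right) = f(m x) = m\, f(x) \implies f(q x) = q\, f(x) \ \text{for } q = \tfrac{m}{n} \in \mathbb{Q}. \]

Together with additivity, this Q-homogeneity is exactly Q-linearity.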
|
https://en.wikipedia.org/wiki/Error%20hiding
|
In computer programming, error hiding (or error swallowing) is the practice of catching an error or exception, and then continuing without logging, processing, or reporting the error to other parts of the software. Handling errors in this manner is considered bad practice and an anti-pattern in computer programming. In languages with exception handling support, this practice is called exception swallowing.
Errors and exceptions have several purposes:
Help software maintainers track down and understand problems that happen when a user is running the software, when combined with a logging system
Provide useful information to the user of the software, when combined with meaningful error messages, error codes or error types shown in a UI, as console messages, or as data returned from an API (depending on the type of software and type of user)
Indicate that normal operation cannot continue, so the software can fall back to alternative ways of performing the required task or abort the operation.
When errors are swallowed, these purposes can't be accomplished. Information about the error is lost, which makes it very hard to track down problems. Depending on how the software is implemented, it can cause unintended side effects that cascade into other errors, destabilizing the system. Without information about the root cause of the problem, it's very hard to figure out what is going wrong or how to fix it.
Examples
Languages with exception handling
In this C# example, even though the code inside the try block throws an exception, it gets caught by the blanket catch clause. The exception has been swallowed and is considered handled, and the program continues.
try {
throw new Exception();
} catch {
// do nothing
}
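The same anti-pattern can be written in Python (a minimal sketch, not taken from the original article): a bare except clause that discards the exception and lets execution continue.

try:
    raise RuntimeError("something went wrong")
except Exception:
    pass   # exception swallowed: no logging, no re-raise, no handling

print("execution continues as if nothing had happened")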
In this PowerShell example, the trap clause catches the exception being thrown and swallows it by continuing execution. The "I should not be here" message is shown as if no exception had happened.
&{
trap { continue }
throw
write-output "I shou
|
https://en.wikipedia.org/wiki/International%20Council%20on%20Systems%20Engineering
|
The International Council on Systems Engineering (INCOSE; pronounced in-co-see) is a not-for-profit membership organization and professional society in the field of systems engineering with about 17,000 members including individual, corporate, and student members. INCOSE's main activities include conferences, publications, local chapters, certifications and technical working groups.
The INCOSE International Symposium is usually held in July, and the INCOSE International Workshop is held in the United States in January.
Currently, there are about 70 local INCOSE chapters globally with most chapters outside the United States representing entire countries, while chapters within the United States represent cities or regions.
INCOSE organizes about 50 technical working groups with international membership, aimed at collaboration and the creation of INCOSE products, printed and online, in the field of Systems engineering. There are working groups for topics within systems engineering practice, systems engineering in particular industries and systems engineering's relationship to other related disciplines.
INCOSE produces two main periodicals, the journal Systems Engineering and the practitioner magazine INSIGHT, as well as a number of individual published works, including the INCOSE Handbook. In collaboration with the IEEE Computer Society and the Systems Engineering Research Center (SERC), INCOSE produces and maintains the online Systems Engineering Body of Knowledge (SEBoK), a wiki-style reference open to contributions from anyone, but with content controlled and managed by an editorial board.
INCOSE certifies systems engineers through its three-tier certification process, which requires a combination of education, years of experience and passing an examination based on the INCOSE Systems Engineering Handbook.
INCOSE is a member organization of the Federation of Enterprise Architecture Professional Organizations (FEAPO), a worldwide association of professional organizations formed to adva
|
https://en.wikipedia.org/wiki/Collateral%20circulation
|
Collateral circulation is the alternate circulation around a blocked artery or vein via another path, such as nearby minor vessels. It may occur via preexisting vascular redundancy (analogous to engineered redundancy), as in the circle of Willis in the brain, or it may occur via new branches formed between adjacent blood vessels (neovascularization), as in the eye after a retinal embolism or in the brain when an instance of arterial constriction occurs due to Moyamoya disease. Its formation may be related by pathological conditions such as high vascular resistance or ischaemia. It is occasionally also known as accessory circulation, auxiliary circulation, or secondary circulation. It has surgically created analogues in which shunts or anastomoses are constructed to bypass circulatory problems.
An example of the usefulness of collateral circulation is a systemic thromboembolism in cats. This is when a thrombotic embolus lodges above the external iliac artery (common iliac artery), blocking the external and internal iliac arteries and effectively shutting off all blood supply to the hind leg. Even though the main vessels to the leg are blocked, enough blood can get to the tissues in the leg via the collateral circulation to keep them alive.
Brain
Blood flow to the brain in humans and some other animals is maintained via a network of collateral arteries that anastomose (join) in the circle of Willis, which lies at the base of the brain. In the circle of Willis so-called communicating arteries exist between the front (anterior) and back (posterior) parts of the circle of Willis, as well as between the left and right side of the circle of Willis.
Leptomeningeal collateral circulation is another anastomosis in the brain.
Heart
Another example in humans and some other animals is after an acute myocardial infarction (heart attack). Collateral circulation in the heart tissue will sometimes bypass the blockage in the main artery and supply enough oxygenated blood to enabl
|
https://en.wikipedia.org/wiki/Comparison%20of%20open-source%20operating%20systems
|
These tables compare free software / open-source operating systems. Where not all of the versions support a feature, the first version which supports it is listed.
General information
Supported architectures
Supported hardware
General
Networking
Network technologies
Supported file systems
Supported file system features
Security features
See also
Berkeley Software Distribution
Comparison of operating systems
Comparison of Linux distributions
Comparison of BSD operating systems
Comparison of kernels
Comparison of file systems
Comparison of platform virtualization software
Comparison of DOS operating systems
List of operating systems
Live CD
Microsoft Windows
RTEMS
Unix
Unix-like
References
External links
Open Source Operating Systems
|
https://en.wikipedia.org/wiki/Higher%20category%20theory
|
In mathematics, higher category theory is the part of category theory at a higher order, which means that some equalities are replaced by explicit arrows in order to be able to explicitly study the structure behind those equalities. Higher category theory is often applied in algebraic topology (especially in homotopy theory), where one studies algebraic invariants of spaces, such as their fundamental weak ∞-groupoid. In higher category theory, the concept of higher categorical structures, such as (∞-categories), allows for a more robust treatment of homotopy theory, enabling one to capture finer homotopical distinctions, such as differentiating two topological spaces that have the same fundamental group, but differ in their higher homotopy groups. This approach is particularly valuable when dealing with spaces with intricate topological features, such as the Eilenberg-MacLane space.
Strict higher categories
An ordinary category has objects and morphisms, which are called 1-morphisms in the context of higher category theory. A 2-category generalizes this by also including 2-morphisms between the 1-morphisms. Continuing this up to n-morphisms between (n − 1)-morphisms gives an n-category.
Just as the category known as Cat, which is the category of small categories and functors, is actually a 2-category with natural transformations as its 2-morphisms, the category n-Cat of (small) n-categories is actually an (n + 1)-category.
An n-category is defined by induction on n by:
A 0-category is a set,
An (n + 1)-category is a category enriched over the category n-Cat.
So a 1-category is just a (locally small) category.
The monoidal structure of Set is the one given by the cartesian product as tensor and a singleton as unit. In fact, any category with finite products can be given a monoidal structure. The recursive construction of n-Cat works because if a category V has finite products, the category of V-enriched categories has finite products too.
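For reference, the recursion above can be written compactly as follows (a standard restatement of the inductive definition, with each level enriched over the previous one via its cartesian monoidal structure):

```latex
% Inductive definition of strict n-categories by iterated enrichment,
% writing V-Cat for the category of (small) categories enriched over V.
0\text{-}\mathbf{Cat} := \mathbf{Set},
\qquad
(n+1)\text{-}\mathbf{Cat} := \bigl(n\text{-}\mathbf{Cat}\bigr)\text{-}\mathbf{Cat}.
```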
While this concep
|
https://en.wikipedia.org/wiki/Presentation%E2%80%93abstraction%E2%80%93control
|
Presentation–abstraction–control (PAC) is a software architectural pattern. It is an interaction-oriented software architecture, and is somewhat similar to model–view–controller (MVC) in that it separates an interactive system into three types of components responsible for specific aspects of the application's functionality. The abstraction component retrieves and processes the data, the presentation component formats the visual and audio presentation of data, and the control component handles things such as the flow of control and communication between the other two components.
In contrast to MVC, PAC is used as a hierarchical structure of agents, each consisting of a triad of presentation, abstraction and control parts. The agents (or triads) communicate with each other only through the control part of each triad. It also differs from MVC in that within each triad, it completely insulates the presentation (view in MVC) from the abstraction (model in MVC). This provides the option to run the presentation and the abstraction on separate threads, which can give the user the experience of very short program start times, since the user interface (presentation) can be shown before the abstraction has fully initialized.
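As an illustration of this structure, the following is a minimal sketch of a single PAC triad in which the presentation and abstraction never reference each other and all communication passes through the control part; the class and method names are hypothetical and not taken from any particular framework.

```python
# Minimal illustrative PAC triad (hypothetical names, not a real framework).
class Abstraction:
    """Holds and processes the data; knows nothing about the presentation."""
    def __init__(self):
        self._data = {}
    def update(self, key, value):
        self._data[key] = value
    def query(self, key):
        return self._data.get(key)

class Presentation:
    """Formats output for the user; knows nothing about the abstraction."""
    def render(self, value):
        print(f"displaying: {value}")

class Control:
    """Mediates inside the triad and talks to parent/child controls only."""
    def __init__(self, parent=None):
        self.abstraction = Abstraction()
        self.presentation = Presentation()
        self.parent = parent
        self.children = []
    def handle_input(self, key, value):
        self.abstraction.update(key, value)                    # update data
        self.presentation.render(self.abstraction.query(key))  # refresh view
        if self.parent is not None:
            self.parent.notify(self, key, value)               # report upward
    def notify(self, child, key, value):
        # A parent control could forward the event to its other child triads here.
        pass

root = Control()
leaf = Control(parent=root)
root.children.append(leaf)
leaf.handle_input("status", "connected")
```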
History
PAC was initially developed by the French computer scientist Joëlle Coutaz in 1987. Coutaz founded the User Interface group at the Laboratoire de Génie Informatique of IMAG.
See also
Action Domain Responder
Hierarchical model–view–controller
Model–view–presenter
Model–view–viewmodel
Presenter First
PAC-Amodeus
Notes
References
External links
Architectural outline for the game Warcraft as it might be implemented using the PAC Architectural Pattern: Programming of the application PACcraft:Architecture (in French)
Pattern:Presentation-Abstraction-Control (pattern description)
PAC description in the Portland Pattern Repository
WengoPhone is a free software VoIP application that is written using the PAC design pattern.
description of PAC and motivation for use in WengoPh
|
https://en.wikipedia.org/wiki/Photon%20gas
|
In physics, a photon gas is a gas-like collection of photons, which has many of the same properties as a conventional gas like hydrogen or neon – including pressure, temperature, and entropy. The most common example of a photon gas in equilibrium is black-body radiation.
Photons are part of a family of particles known as bosons, particles that follow Bose–Einstein statistics and with integer spin. A gas of bosons with only one type of particle is uniquely described by three state functions such as the temperature, volume, and the number of particles. However, for a black body, the energy distribution is established by the interaction of the photons with matter, usually the walls of the container. In this interaction, the number of photons is not conserved. As a result, the chemical potential of the black-body photon gas is zero at thermodynamic equilibrium. The number of state variables needed to describe a black-body state is thus reduced from three to two (e.g. temperature and volume).
Thermodynamics of a black body photon gas
In a classical ideal gas with massive particles, the energy of the particles is distributed according to a Maxwell–Boltzmann distribution. This distribution is established as the particles collide with each other, exchanging energy (and momentum) in the process. In a photon gas, there will also be an equilibrium distribution, but photons do not collide with each other (except under very extreme conditions, see two-photon physics), so the equilibrium distribution must be established by other means. The most common way that an equilibrium distribution is established is by the interaction of the photons with matter. If the photons are absorbed and emitted by the walls of the system containing the photon gas, and the walls are at a particular temperature, then the equilibrium distribution for the photons will be a black-body distribution at that temperature.
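As a concrete statement of this equilibrium, the mean occupation number of a photon mode of angular frequency ω at wall temperature T follows the Bose–Einstein distribution with zero chemical potential, which leads to the Planck spectral energy density; these standard relations are quoted here for reference and are not spelled out in the excerpt above.

```latex
% Mean photon number per mode at temperature T (chemical potential \mu = 0)
\langle n(\omega) \rangle = \frac{1}{e^{\hbar\omega / k_B T} - 1},
\qquad
% resulting black-body (Planck) spectral energy density per unit volume
u(\omega, T)\,\mathrm{d}\omega
  = \frac{\hbar\,\omega^{3}}{\pi^{2} c^{3}}\,
    \frac{\mathrm{d}\omega}{e^{\hbar\omega / k_B T} - 1}.
```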
A very important difference between a Bose gas (gas of massive bosons) and a phot
|
https://en.wikipedia.org/wiki/Binary%20entropy%20function
|
In information theory, the binary entropy function, denoted H(p) or H_b(p), is defined as the entropy of a Bernoulli process with probability p of one of two values. It is a special case of H(X), the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable X that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive.
If Pr(X = 1) = p, then Pr(X = 0) = 1 − p and the entropy of X (in shannons) is given by
H(X) = H_b(p) = −p log2(p) − (1 − p) log2(1 − p),
where 0 log2(0) is taken to be 0. The logarithms in this formula are usually taken to the base 2. See binary logarithm.
When p = 1/2, the binary entropy function attains its maximum value. This is the case of an unbiased coin flip.
H_b(p) is distinguished from the entropy function H(X) in that the former takes a single real number as a parameter whereas the latter takes a distribution or random variable as a parameter.
Sometimes the binary entropy function is also written as H_2(p).
However, it is different from and should not be confused with the Rényi entropy, which is denoted as H_2(X).
Explanation
In terms of information theory, entropy is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose p = 0. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. If p = 1, the result is again certain, so the entropy is 0 here as well. When p = 1/2, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained with prior knowledge of the probabilities. In this case, the entropy is maximum at a value of 1 bit. Intermediate values fall between these cases; for instance, if p = 3/4, there is still a measure of uncertainty on the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.
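A short numerical sketch of these values, using the definition given above (illustrative only):

```python
# Binary entropy H_b(p) = -p*log2(p) - (1-p)*log2(1-p), with 0*log2(0) := 0.
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0                    # outcome is certain: no uncertainty
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(binary_entropy(0.0))    # 0.0    -- certain event
print(binary_entropy(1.0))    # 0.0    -- certain event
print(binary_entropy(0.5))    # 1.0    -- fair coin: maximum uncertainty, 1 bit
print(binary_entropy(0.75))   # ~0.811 -- biased coin: less than 1 full bit
```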
Derivative
The derivative of the binary entropy function may be expressed as the negative of the logit function:
d/dp H_b(p) = −logit_2(p) = −log2( p / (1 − p) ).
Taylor series
The Taylor series of the
|
https://en.wikipedia.org/wiki/Semiconductor%20process%20simulation
|
Semiconductor process simulation is the modeling of the fabrication of semiconductor devices such as transistors. It is a branch of electronic design automation, and part of a sub-field known as technology CAD, or TCAD.
The ultimate goal of process simulation is an accurate prediction of the active dopant distribution, the stress distribution and the device geometry. Process simulation is typically used as an input for device simulation, the modeling of device electrical characteristics. Collectively process and device simulation form the core tools for the design phase known as TCAD or Technology Computer Aided Design. Considering the integrated circuit design process as a series of steps with decreasing levels of abstraction, logic synthesis would be at the highest level and TCAD, being closest to fabrication, would be the phase with the least amount of abstraction. Because of the detailed physical modeling involved, process simulation is almost exclusively used to aid in the development of single devices whether discrete or as a part of an integrated circuit.
The fabrication of integrated circuit devices requires a series of processing steps called a process flow. Process simulation involves modeling all essential steps in the process flow in order to obtain dopant and stress profiles and, to a lesser extent, device geometry. The input for process simulation is the process flow and a layout. The layout is selected as a linear cut in a full layout for a 2D simulation or a rectangular cut from the layout for a 3D simulation.
TCAD has traditionally focused mainly on the transistor fabrication part of the process flow ending with the formation of source and drain contacts—also known as front end of line manufacturing. Back end of line manufacturing, e.g. interconnect and dielectric layers are not considered. One reason for delineation is the availability of powerful analysis tools such as electron microscopy techniques, scanning electron microscopy (SEM)
|
https://en.wikipedia.org/wiki/Structural%20building%20components
|
Structural building components are specialized structural building products designed, engineered and manufactured under controlled conditions for a specific application. They are incorporated into the overall building structural system by a building designer. Examples are wood or steel roof trusses, floor trusses, floor panels, I-joists, or engineered beams and headers.
A structural building component manufacturer or truss manufacturer is an individual or company regularly engaged in the manufacturing of components.
Construction
Building materials
|
https://en.wikipedia.org/wiki/Login%20session
|
In computing, a login session is the period of activity between a user logging in and logging out of a (multi-user) system.
On Unix and Unix-like operating systems, a login session takes one of two main forms:
When a textual user interface is used, a login session is represented as a kernel session — a collection of process groups with the logout action managed by a session leader.
Where an X display manager is employed, a login session is considered to be the lifetime of a designated user process that the display manager invokes.
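For the kernel-session form described in the first item above, POSIX systems expose the session and process-group identifiers directly. The following minimal, Unix-only sketch simply prints them for the calling process:

```python
# Print the process-group and session identifiers of the calling process.
# Unix-only: these calls are not available on Windows.
import os

print("process id:      ", os.getpid())
print("process group id:", os.getpgrp())   # group this process belongs to
print("session id:      ", os.getsid(0))   # 0 means "the calling process"
print("real user id:    ", os.getuid())    # the logged-in user owning the session
```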
On Windows NT-based systems, login sessions are maintained by the kernel, and control of them is within the purview of the Local Security Authority Subsystem Service (LSA). Winlogon responds to the secure attention key, requests the LSA to create login sessions on login, and terminates all of the processes belonging to a login session on logout.
See also
Booting process of Windows NT
Architecture of Windows NT
Booting
Master boot record
Power-on self-test
BootVis
Further reading
Operating system technology
|
https://en.wikipedia.org/wiki/Edge%20computing
|
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.
The origins of edge computing lie in content distribution networks that were created in the late 1990s to serve web and video content from edge servers that were deployed close to users. In the early 2000s, these networks evolved to host applications and application components on edge servers, resulting in the first commercial edge computing services that hosted applications such as dealer locators, shopping carts, real-time data aggregators, and ad insertion engines.
Internet of things (IoT) is an example of edge computing. A common misconception is that edge and IoT are synonymous.
Definition
One definition of edge computing is the use of any type of computer program that delivers low latency nearer to the requests. Karim Arabi, in an IEEE DAC 2014 keynote and subsequently in an invited talk at MIT's MTL Seminar in 2015, defined edge computing broadly as all computing outside the cloud happening at the edge of the network, and more specifically in applications where real-time processing of data is required. Edge computing therefore lacks the climate-controlled environment of data centers, even though a large amount of processing power may be required.
The term is often used as synonymous with fog computing. This is especially relevant for small deployments. However, when the deployment size is large, e.g., for smart cities, fog computing can be a distinct layer between the edge and the cloud. Hence, in such deployments, the edge layer is also a distinct layer with specific responsibilities.
According to The State of the Edge report, edge computing concentrates on servers "in proximity to the last mile network". Alex Reznik, Chair of the ETS
|
https://en.wikipedia.org/wiki/Truss%20connector%20plate
|
A truss connector plate, or gang plate, is a kind of tie. Truss plates are light gauge metal plates used to connect prefabricated light frame wood trusses. They are produced by punching light gauge galvanized steel to create teeth on one side. The teeth are embedded in and hold the wooden frame components to the plate and each other.
Nail plates are used to connect timber of the same thickness in the same plane. When used on trusses, they are pressed into the side of the timber using tools such as a hydraulic press or a roller. As the plate is pressed in, the teeth are all driven into the wood fibers simultaneously, and the compression between adjacent teeth reduces the tendency of the wood to split.
A truss connector plate is manufactured from ASTM A653/A653M, A591, A792/A792M, or A167 structural quality steel and is protected with zinc or zinc-aluminum alloy coatings or their stainless steel equivalent. Metal connector plates are manufactured with varying length, width and thickness (or gauge) and are designed to laterally transmit loads in wood. They are also known as stud ties, metal connector plates, mending plates, or nail plates. However, not all types of nail plates are approved for use in trusses and other structurally critical placements.
History
John Calvin Jureit invented the truss connector plate, patented it in 1955, and formed the company Gang-Nails, Inc., which was later renamed Automated Building Components, Inc.
References
Structural engineering
|
https://en.wikipedia.org/wiki/Tournament%20of%20the%20Towns
|
The Tournament of the Towns (International Mathematics Tournament of the Towns, Турнир Городов, Международный Математический Турнир Городов) is an international mathematical competition for school students originating in Russia.
The contest was created by mathematician Nikolay Konstantinov and has participants from over 100 cities in many different countries.
Organization
There are two rounds in this contest: Fall (October) and Spring (February–March) of the same academic year.
Both have an O-Level (Basic) paper and an A-Level (Advanced) paper separated by 1–2 weeks.
The O-Level contains around 5 questions and the A-Level contains around 7 questions.
The duration of the exams is 5 hours for both Levels.
The A-Level problems are more difficult than O-Level but have a greater maximum score.
Participating students are divided into two divisions:
Junior (usually grades 7–10) and Senior (the last two school grades, usually grades 11–12).
To account for age differences within each division, students in different grades have different loadings (coefficients). A contestant's final score is his or her highest score from the four exams. Writing all four exams is not required, although it is recommended.
Different towns are given handicaps to account for differences in population. A town's score is the average of the scores of its N best students, where the town's population is N hundred thousand; the minimum value of N is 5.
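A small illustrative sketch of this scoring rule (the function name is hypothetical):

```python
# Town score: average of the N best individual scores, where N is the town's
# population in hundreds of thousands, with a minimum N of 5.
def town_score(population, student_scores):
    n = max(5, population // 100_000)
    best = sorted(student_scores, reverse=True)[:n]
    return sum(best) / n

# A town of 300,000 inhabitants is still averaged over its 5 best contestants.
print(town_score(300_000, [21, 18, 17, 16, 12, 9, 7]))   # 16.8
```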
Philosophy
The Tournament of the Towns differs from many similar competitions in its philosophy, which relies much more on ingenuity than on drill. First, the problems are difficult (especially at A-Level in the Senior division, where they are comparable with those at the International Mathematical Olympiad but more ingenious and less technical). Second, it allows participants to choose the problems they like, since for each paper the participant's score is the sum of his or her 3 best answers.
The problems are mostly combinatorial, with the
|
https://en.wikipedia.org/wiki/Rhodococcus%20rhodochrous
|
Rhodococcus rhodochrous is a bacterium used as a soil inoculant in agriculture and horticulture.
It is gram positive, in the shape of rods/cocci, oxidase negative, and catalase positive.
It is industrially produced to catalyse acrylonitrile conversion to acrylamide. It is also used in the industrial production of nicotinamide (niacinamide), a derivative or active form of niacin, part of the B vitamin complex.
A 2015 study showed that Rhodococcus rhodochrous could inhibit the growth of Pseudogymnoascus destructans, the fungal species responsible for white nose syndrome in bats.
References
Further reading
Retrieved 13 November 2014.
External links
Type strain of Rhodococcus rhodochrous at BacDive - the Bacterial Diversity Metadatabase
Soil biology
Mycobacteriales
|
https://en.wikipedia.org/wiki/Architectural%20pattern
|
An architectural pattern is a general, reusable resolution to a commonly occurring problem in software architecture within a given context. The architectural patterns address various issues in software engineering, such as computer hardware performance limitations, high availability and minimization of a business risk. Some architectural patterns have been implemented within software frameworks.
The use of the word "pattern" in the software industry was influenced by similar concepts as expressed in traditional architecture, such as Christopher Alexander's A Pattern Language (1977) which discussed the practice in terms of establishing a pattern lexicon, prompting the practitioners of computer science to contemplate their own design lexicon.
Usage of this metaphor within the software engineering profession became commonplace after the publication of Design Patterns (1994) by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides—now commonly known as the "Gang of Four"—coincident with the early years of the public Internet, marking the onset of complex software systems "eating the world" and the corresponding need to codify the rapidly sprawling world of software development at the deepest possible level, while remaining flexible and adaptive.
Architectural patterns are similar to software design patterns but have a broader scope.
Definition
Even though an architectural pattern conveys an image of a system, it is not an architecture. An architectural pattern is a concept that solves and delineates some essential cohesive elements of a software architecture. Countless different architectures may implement the same pattern and share the related characteristics. Patterns are often defined as "strictly described and commonly available".
Architectural style
Following traditional building architecture, a software architectural style is a specific method of construction, characterized by the features that make it notable.
Some treat architectural patterns and ar
|
https://en.wikipedia.org/wiki/Generalizations%20of%20Pauli%20matrices
|
In mathematics and physics, in particular quantum information, the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices. Here, a few classes of such matrices are summarized.
Multi-qubit Pauli matrices (Hermitian)
This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system (qubit) to multiple such systems. In particular, the generalized Pauli matrices for a group of qubits are just the set of matrices generated by all possible products of Pauli matrices on any of the qubits.
The vector space of a single qubit is V_1 = C^2 and the vector space of n qubits is V_n = (C^2)^⊗n ≅ C^(2^n). We use the tensor product notation
σ_a^(j), a ∈ {1, 2, 3}, j ∈ {1, …, n},
to refer to the operator on V_n that acts as the Pauli matrix σ_a on the j-th qubit and as the identity on all other qubits. We can also use a = 0 for the identity, i.e., for any j we use σ_0^(j) = I. Then the multi-qubit Pauli matrices are all matrices of the form
σ_{a_1}^(1) σ_{a_2}^(2) ⋯ σ_{a_n}^(n),
i.e., for (a_1, …, a_n) a vector of integers between 0 and 3. Thus there are 4^n such generalized Pauli matrices if we include the identity and 4^n − 1 if we do not.
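The construction just described is straightforward to realize numerically. The following is a minimal sketch (not from the article; the helper name is illustrative) that builds all 4^n multi-qubit Pauli matrices as Kronecker products using NumPy:

```python
# Build multi-qubit Pauli matrices as tensor (Kronecker) products.
import itertools
import numpy as np

PAULI = [
    np.eye(2, dtype=complex),                      # sigma_0 = I
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1 = X
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2 = Y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_3 = Z
]

def multi_qubit_pauli(indices):
    """Tensor product of one single-qubit Pauli matrix per qubit."""
    op = np.array([[1.0 + 0j]])
    for a in indices:
        op = np.kron(op, PAULI[a])
    return op

# All 4**n generalized Pauli matrices on n qubits (including the identity).
n = 2
paulis = [multi_qubit_pauli(a) for a in itertools.product(range(4), repeat=n)]
assert len(paulis) == 4 ** n
```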
Higher spin matrices (Hermitian)
The traditional Pauli matrices are the matrix representation of the Lie algebra generators J_x, J_y, and J_z in the 2-dimensional irreducible representation of SU(2), corresponding to a spin-1/2 particle. These generate the Lie group SU(2).
For a general particle of spin s = 0, 1/2, 1, 3/2, …, one instead utilizes the (2s + 1)-dimensional irreducible representation.
Generalized Gell-Mann matrices (Hermitian)
This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits) to 3-level systems (Gell-Mann matrices acting on qutrits) and generic d-level systems (generalized Gell-Mann matrices acting on qudits).
Construction
Let E_jk be the d × d matrix with 1 in the (j, k)-th entry and 0 elsewhere. Consider the space of d × d complex matrices, C^(d×d), for a fixed d.
Define the following matrices:
f_{k,j}^d = E_{kj} + E_{jk}, for k < j,
f_{k,j}^d = −i(E_{jk} − E_{kj}), for k > j,
h_1^d = I_d, the identity
|
https://en.wikipedia.org/wiki/Weili%20Dai
|
Weili Dai () is a Chinese-born American businesswoman. She is the co-founder, former director, and former president of Marvell Technology Group. Dai is a successful female entrepreneur, and is the only female co-founder of a major semiconductor company. In 2015, she was listed as the 95th richest woman in the world by Forbes. Her estimated net worth is US$1.6 billion as of December 2021.
Early life
Dai was born in Shanghai, China, where she played semi-professional basketball before moving to the US at the age of 17. She has a bachelor's degree in computer science from the University of California, Berkeley.
Career
Dai was involved in software development and project management at Canon Research Center America, Inc.
Dai co-founded the American semiconductor company Marvell in 1995 with her husband Sehat Sutardja. She directed Marvell's rise to become a large company. While at Marvell, Dai worked on strategic partnerships, and marketed Marvell's technology for use in products across several markets. Dai also works to increase access to technology in the developing world and served as an ambassador of opportunity between the US and China.
Dai served as chief operating officer, executive vice president, and general manager of the Communications Business Group at Marvell. She was corporate secretary of the board, and a director of the board at Marvell Technology Group Ltd.
Dai promoted partnership with the One Laptop Per Child program (OLPC) and women in science, technology, engineering, and mathematics (STEM) fields.
She sits on the board of the disaster relief organization Give2Asia, and was named to the Committee of 100, an organization of prominent Chinese Americans. The Sutardja Dai Hall at her alma mater, UC Berkeley, was named for Dai along with her husband Sehat Sutardja, CEO of Marvell, and Pantas Sutardja, CTO of Marvell. Sutardja Dai Hall is home to the Center for Information Technology Research in the Interest of Society (CITRIS). In 2015, Dai was named to the Globa
|
https://en.wikipedia.org/wiki/Thermodynamicist
|
In thermodynamics, a thermodynamicist is someone who studies thermodynamic processes and phenomena, i.e. the physics that deal with mechanical action and relations of heat.
Well-known thermodynamicists include Sadi Carnot, Rudolf Clausius, Willard Gibbs, Hermann von Helmholtz, and Max Planck.
History of term
Although most consider the French physicist Nicolas Sadi Carnot to be the first true thermodynamicist, the term thermodynamics itself wasn't coined until 1849 by Lord Kelvin in his publication An Account of Carnot's Theory of the Motive Power of Heat.
The first thermodynamic textbook was written in 1859 by William Rankine, a civil and mechanical engineering professor at the University of Glasgow.
See also
References
Thermodynamics
|
https://en.wikipedia.org/wiki/SQLFilter
|
SQLFilter is a plugin for OmniPeek that indexes packets and trace files into an SQLite database. The packets can then be searched using SQL queries. The matching packets are loaded directly into OmniPeek and analyzed. The packet database can also be used to build multi-tier data mining and network forensics systems.
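The workflow this describes amounts to running ordinary SQL queries against an SQLite packet index. The sketch below is purely illustrative: the table name and columns are assumptions for demonstration, not the actual SQLFilter schema.

```python
# Hypothetical packet index in SQLite, queried the way the plugin's users would.
import sqlite3

con = sqlite3.connect("packets.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS packets (
        id INTEGER PRIMARY KEY,
        trace_file TEXT,     -- capture file the packet was indexed from
        ts REAL,             -- capture timestamp
        src TEXT,            -- source address
        dst TEXT,            -- destination address
        protocol TEXT,
        length INTEGER
    )""")

# Find every TCP packet between two hosts across all indexed trace files.
rows = con.execute(
    "SELECT trace_file, ts, length FROM packets "
    "WHERE protocol = ? AND src = ? AND dst = ? ORDER BY ts",
    ("TCP", "10.0.0.5", "10.0.0.9"),
).fetchall()
```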
As more companies save large quantities of network traffic to disk, tools like the WildPackets SQLFilter make it possible to search through packet data more efficiently. For network troubleshooters, this simplifies the job of finding packets. Not only does the SQLFilter allow users to search for packets across thousands of trace files, it also loads the resulting packets directly into OmniPeek or EtherPeek. This removes many of the steps usually involved in the process and shortens the time needed to locate and fix a problem.
External links
discussion of the SQLFilter Packet Data Mining and Network Forensics.
Network analyzers
Packets (information technology)
|
https://en.wikipedia.org/wiki/Functional%20completeness
|
In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express all possible truth tables by combining members of the set into a Boolean expression. A well-known complete set of connectives is { AND, NOT }. Each of the singleton sets { NAND } and { NOR } is functionally complete. However, the set { AND, OR } is incomplete, due to its inability to express NOT.
A gate or set of gates which is functionally complete can also be called a universal gate / gates.
A functionally complete set of gates may utilise or generate 'garbage bits' as part of its computation which are either not part of the input or not part of the output to the system.
In a context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.
From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates.
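The completeness of { NAND } and the incompleteness of { AND, OR } can be checked mechanically with truth tables. The sketch below (illustrative, not from the article) builds NOT, AND, and OR from NAND alone and verifies them on every input; the final comment records why { AND, OR } can never yield NOT.

```python
# NAND alone suffices to build NOT, AND, and OR (hence it is functionally complete).
from itertools import product

def nand(a, b):
    return not (a and b)

def NOT(a):
    return nand(a, a)

def AND(a, b):
    return nand(nand(a, b), nand(a, b))

def OR(a, b):
    return nand(nand(a, a), nand(b, b))

for a, b in product([False, True], repeat=2):
    assert NOT(a) == (not a)
    assert AND(a, b) == (a and b)
    assert OR(a, b) == (a or b)

# By contrast, every circuit built only from AND and OR is monotone
# (raising an input from False to True can never lower the output),
# so NOT -- which is not monotone -- cannot be expressed by { AND, OR }.
```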
Introduction
Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧); disjunction (∨); negation (¬); material conditional (→); and possibly the biconditional (↔). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (sometimes denoted ↓, the negation of the disjunction) can be expressed as a conjunction of two negations:
¬(p ∨ q) ≡ ¬p ∧ ¬q.
Similarly, the negation of the conjunction, NAND (sometimes denoted as ↑), can be defined in terms of disjunction and negation:
¬(p ∧ q) ≡ ¬p ∨ ¬q.
It turns out that every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, so this set is functionally complete.
However, it still contains some redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives as
p → q ≡ ¬p ∨ q and p ↔ q ≡ (p → q) ∧ (q → p).
It follows that the smaller set i
|
https://en.wikipedia.org/wiki/Cisco%20HDLC
|
Cisco HDLC (cHDLC) is an extension to the High-Level Data Link Control (HDLC) network protocol, and was created by Cisco Systems, Inc. HDLC is a bit-oriented synchronous data link layer protocol that was originally developed by the International Organization for Standardization (ISO). Often described as being a proprietary extension, the details of cHDLC have been widely distributed and the protocol has been implemented by many network equipment vendors. cHDLC extends HDLC with multi-protocol support.
Framing
Cisco HDLC frames use an alternative framing structure to the standard ISO HDLC. To support encapsulation of multiple protocols, cHDLC frames contain a field identifying the network protocol.
Structure
cHDLC frame structure
The following table describes the structure of a cHDLC frame on the wire.
The Address field is used to specify the type of packet contained in the cHDLC frame; 0x0F for Unicast and 0x8F for Broadcast packets.
The Control field is always set to zero (0x00).
The Protocol Code field is used to specify the protocol type encapsulated within the cHDLC frame (e.g. 0x0800 for Internet Protocol).
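As an illustration of the header layout implied by the three fields just described, the following sketch (not official Cisco code; the helper name is hypothetical) prepends a cHDLC header to an IPv4 payload:

```python
# Pack a cHDLC header: 1-byte Address, 1-byte Control, 2-byte Protocol Code.
import struct

CHDLC_UNICAST   = 0x0F    # Address field value for unicast packets
CHDLC_BROADCAST = 0x8F    # Address field value for broadcast packets
PROTO_IPV4      = 0x0800  # Protocol Code for Internet Protocol

def chdlc_frame(payload: bytes, broadcast: bool = False) -> bytes:
    address = CHDLC_BROADCAST if broadcast else CHDLC_UNICAST
    control = 0x00                                   # always zero
    header = struct.pack("!BBH", address, control, PROTO_IPV4)
    return header + payload
```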
SLARP address request–response frame structure
The Serial Line Address Resolution Protocol (SLARP) frame is designated by a specific cHDLC protocol code field value of 0x8035.
Three types of SLARP frame are defined: address requests (0x00), address replies (0x01), and keep-alive frames (0x02).
The following table shows the structure of a SLARP cHDLC address request–response frame.
The op-code will be 0x00 for address requests and 0x01 for address responses.
The Address and Mask fields are used to contain a four-octet IP address and mask. These are 0 for address requests.
The two-byte Reserved field is currently unused and undefined.
SLARP Keep-Alive frame structure
The following table shows the structure of a SLARP cHDLC keep-alive frame.
The op-code is 0x02 for keep-alives.
The sender sequence number increments with each keep-alive
|
https://en.wikipedia.org/wiki/Reverse%20telephone%20directory
|
A reverse telephone directory (also known as a gray pages directory, criss-cross directory or reverse phone lookup) is a collection of telephone numbers and associated customer details. However, unlike a standard telephone directory, where the user looks up a customer's details (such as name and address) in order to retrieve the telephone number of that person or business, a reverse telephone directory allows users to search by a telephone service number in order to retrieve the customer details for that service.
Reverse telephone directories are used by law enforcement and other emergency services in order to determine the origin of any request for assistance; however, these systems include both publicly accessible (listed) and private (unlisted) services. As such, these directories are restricted to internal use only. Some forms of city directories provide this form of lookup for listed services by phone number, along with address cross-referencing.
Publicly accessible reverse telephone directories may be provided as part of the standard directory services from the telecommunications carrier in some countries. In other countries these directories are often created by phone phreakers by collecting the information available via the publicly accessible directories and then providing a search function which allows users to search by the telephone service details.
History
Printed reverse phone directories have been produced by the telephone companies (in the United States) for decades, and were distributed to the phone companies, law enforcement, and public libraries. In the early 1990s, businesses started offering reverse telephone lookups for fees, and by the early 2000s advertising-based reverse directories were available online, prompting occasional controversies revolving around privacy.
Australia
In 2001, a legal case, Telstra Corporation Ltd v Desktop Marketing Systems Pty Ltd, was heard in the Australian Federal Court. The ruling gave Telstra, the predominant carrier within Au
|
https://en.wikipedia.org/wiki/Biochemical%20Society
|
The Biochemical Society is a learned society in the United Kingdom in the field of biochemistry, including all the cellular and molecular biosciences.
Structure
It currently has around 7000 members, two-thirds in the UK. It is affiliated with the European body, Federation of European Biochemical Societies (FEBS). The Society's current President (2016) is Sir David Baulcombe. The Society's headquarters are in London.
History
The society was founded in 1911 by Benjamin Moore, W.D. Halliburton and others, under the name of the Biochemical Club. It acquired the existing Biochemical Journal in 1912.
The society name changed to the Biochemical Society in 1913.
In 2005, the headquarters of the society moved from Portland Place to purpose-built offices in Holborn.
In 2009, the headquarters moved again to Charles Darwin House, near Gray's Inn Road.
Past presidents include Professor Ron Laskey, Sir Philip Cohen, and Sir Tom Blundell.
Awards
The society makes a number of merit awards, four annually and others either biennially or triennially, to acknowledge excellence and achievement in both specific and general fields of science. The annual awards comprise the Morton Lecture, the Colworth Medal, the Centenary Award and the Novartis Medal and Prize.
Publishing
The Society's wholly owned publishing subsidiary, Portland Press, publishes books, a magazine, The Biochemist, and several print and online academic journals:
Biochemical Journal
Biochemical Society Symposium (online only)
Biochemical Society Transactions
Cell Signalling Biology
Clinical Science
Essays in Biochemistry
Bioscience Reports
The Society's flagship publication, the Biochemical Journal, celebrated its centenary in 2006 with the launch of a free online archive back to its first issue in 1906.
Further reading
References
External links
Biochemical Society
Portland Press
Biochemical Journal Centenary
Biochemistry organizations
British biology societies
Biotechnology organizations
Chemistry socie
|
https://en.wikipedia.org/wiki/Exceptional%20object
|
Many branches of mathematics study objects of a given type and prove a classification theorem. A common theme is that the classification results in a number of series of objects and a finite number of exceptions — often with desirable properties — that do not fit into any series. These are known as exceptional objects. In many cases, these exceptional objects play a further and important role in the subject. Furthermore, the exceptional objects in one branch of mathematics often relate to the exceptional objects in others.
A related phenomenon is exceptional isomorphism, when two series are in general different, but agree for some small values. For example, spin groups in low dimensions are isomorphic to other classical Lie groups.
Regular polytopes
The prototypical examples of exceptional objects arise in the classification of regular polytopes: in two dimensions, there is a series of regular n-gons for n ≥ 3. In every dimension above 2, one can find analogues of the cube, tetrahedron and octahedron. In three dimensions, one finds two more regular polyhedra — the dodecahedron (12-hedron) and the icosahedron (20-hedron) — making five Platonic solids. In four dimensions, a total of six regular polytopes exist, including the 120-cell, the 600-cell and the 24-cell. There are no other regular polytopes, as the only regular polytopes in higher dimensions are of the hypercube, simplex, orthoplex series. In all dimensions combined, there are therefore three series and five exceptional polytopes.
Moreover, the pattern is similar if non-convex polytopes are included: in two dimensions, there is a regular star polygon {p/q} for every rational number p/q > 2 (in lowest terms). In three dimensions, there are four Kepler–Poinsot polyhedra, and in four dimensions, ten Schläfli–Hess polychora; in higher dimensions, there are no non-convex regular figures.
These can be generalized to tessellations of other spaces, especially uniform tessellations, notably tilings of Euclidean space (honeycombs), which hav
|
https://en.wikipedia.org/wiki/Electron-beam%20physical%20vapor%20deposition
|
Electron-beam physical vapor deposition, or EBPVD, is a form of physical vapor deposition in which a target anode is bombarded with an electron beam given off by a charged tungsten filament under high vacuum. The electron beam causes atoms from the target to transform into the gaseous phase. These atoms then precipitate into solid form, coating everything in the vacuum chamber (within line of sight) with a thin layer of the anode material.
Introduction
Thin-film deposition is a process applied in the semiconductor industry to grow electronic materials, in the aerospace industry to form thermal and chemical barrier coatings to protect surfaces against corrosive environments, in optics to impart the desired reflective and transmissive properties to a substrate and elsewhere in industry to modify surfaces to have a variety of desired properties. The deposition process can be broadly classified into physical vapor deposition (PVD) and chemical vapor deposition (CVD). In CVD, the film growth takes place at high temperatures, leading to the formation of corrosive gaseous products, and it may leave impurities in the film. The PVD process can be carried out at lower deposition temperatures and without corrosive products, but deposition rates are typically lower. Electron-beam physical vapor deposition, however, yields a high deposition rate from 0.1 to 100 μm/min at relatively low substrate temperatures, with very high material utilization efficiency. The schematic of an EBPVD system is shown in Fig 1.
Thin-film deposition process
In an EBPVD system, the deposition chamber must be evacuated to a pressure of at least 7.5×10−5 Torr (10−2 Pa) to allow passage of electrons from the electron gun to the evaporation material, which can be in the form of an ingot or rod. Alternatively, some modern EBPVD systems utilize an arc-suppression system and can be operated at vacuum levels as low as 5.0×10−3 Torr, for situations such as parallel use with magnetron sputtering. Multiple types of e
|
https://en.wikipedia.org/wiki/Ys%20I%3A%20Ancient%20Ys%20Vanished
|
also known as Ys: The Vanished Omens or The Ancient Land of Ys (Japanese title: イース), is a 1987 action role-playing game developed by Nihon Falcom. It is the first installment in the Ys series. Initially developed for the PC-8800 series by Masaya Hashimoto (director, programmer, designer) and Tomoyoshi Miyazaki (scenario writer), the game was soon ported to the Sharp X1, PC-98, FM-7, and MSX2 Japanese computer systems.
Ancient Ys Vanished saw many subsequent releases, such as an English-language version for the Master System and an enhanced remake for the TurboGrafx-CD system as part of a compilation called Ys I & II, alongside its 1988 sequel Ys II: Ancient Ys Vanished – The Final Chapter. DotEmu has released the game on Android with the following localizations: English, French, Japanese, Korean, Russian, Italian, German, and Portuguese.
Plot
Ys was a precursor to role-playing games that emphasize storytelling. The hero of Ys is an adventurous young swordsman named Adol Christin. As the story begins, he has just arrived at the Town of Minea, in the land of Esteria. He is called upon by Sara, a fortune-teller, who tells him of a great evil that is sweeping the land.
Adol is informed that he must seek out the six Books of Ys. These books contain the history of the ancient land of Ys, and will give him the knowledge he needs to defeat the evil forces. Sara gives Adol a crystal for identification and instructs him to find her aunt in Zepik Village, who holds the key to retrieving one of the Books. With that, his quest begins.
Gameplay
The player controls Adol on a game field viewed from a top-down perspective. As he travels on the main field and explores dungeons, Adol encounters numerous roaming enemies, which he must battle in order to progress.
Combat in Ys is rather different from other RPGs of the era, which either had turn-based battles or a manually activated sword. Ys instead features a battle system where fighters automatically attack when walking into t
|
https://en.wikipedia.org/wiki/Lightweight%20methodology
|
A lightweight methodology is a software development method that has only a few rules and practices, or only ones that are easy to follow. In contrast, a complex method with many rules is considered a "heavyweight methodology".
Examples of lightweight methodologies include:
Adaptive Software Development by Jim Highsmith, described in his 1999 book Adaptive Software Development
Crystal Clear, a family of methodologies developed by Alistair Cockburn
Extreme Programming (XP), promoted by people such as Kent Beck and Martin Fowler
Feature Driven Development (FDD) developed (1999) by Jeff De Luca and Peter Coad
ICONIX process, developed by Doug Rosenberg: a UML use-case-driven approach that aims to provide just enough documentation and structure to the process to allow flexibility, yet produce software that meets user and business requirements
Most of these lightweight processes emphasize the need to deal with change in requirements and change in environment or technology by being flexible and adaptive.
References
Agile software development
Methodology
Software development process
Software development philosophies
|
https://en.wikipedia.org/wiki/Multisample%20anti-aliasing
|
Multisample anti-aliasing (MSAA) is a type of spatial anti-aliasing, a technique used in computer graphics to remove jaggies.
Definition
The term generally refers to a special case of supersampling. Initial implementations of full-scene anti-aliasing (FSAA) worked conceptually by simply rendering a scene at a higher resolution, and then downsampling to a lower-resolution output. Most modern GPUs are capable of this form of anti-aliasing, but it greatly taxes resources such as texture, bandwidth, and fillrate. (If a program is highly TCL-bound or CPU-bound, supersampling can be used without much performance hit.)
According to the OpenGL GL_ARB_multisample specification, "multisampling" refers to a specific optimization of supersampling. The specification dictates that the renderer evaluate the fragment program once per pixel, and only "truly" supersample the depth and stencil values. (This is not the same as supersampling but, by the OpenGL 1.5 specification, the definition had been updated to include fully supersampling implementations as well.)
In graphics literature in general, "multisampling" refers to any special case of supersampling where some components of the final image are not fully supersampled. The lists below refer specifically to the ARB_multisample definition.
Description
In supersample anti-aliasing, multiple locations are sampled within every pixel, and each of those samples is fully rendered and combined with the others to produce the pixel that is ultimately displayed. This is computationally expensive, because the entire rendering process must be repeated for each sample location. It is also inefficient, as aliasing is typically only noticed in some parts of the image, such as the edges, whereas supersampling is performed for every single pixel.
In multisample anti-aliasing, if any of the multi sample locations in a pixel is covered by the triangle being rendered, a shading computation must be performed for that triangle. However this calc
|
https://en.wikipedia.org/wiki/Split-radix%20FFT%20algorithm
|
The split-radix FFT is a fast Fourier transform (FFT) algorithm for computing the discrete Fourier transform (DFT), and was first described in an initially little-appreciated paper by R. Yavne (1968) and subsequently rediscovered simultaneously by various authors in 1984. (The name "split radix" was coined by two of these reinventors, P. Duhamel and H. Hollmann.) In particular, split radix is a variant of the Cooley–Tukey FFT algorithm that uses a blend of radices 2 and 4: it recursively expresses a DFT of length N in terms of one smaller DFT of length N/2 and two smaller DFTs of length N/4.
The split-radix FFT, along with its variations, long had the distinction of achieving the lowest published arithmetic operation count (total exact number of required real additions and multiplications) to compute a DFT of power-of-two sizes N. The arithmetic count of the original split-radix algorithm was improved upon in 2004 (with the initial gains made in unpublished work by J. Van Buskirk via hand optimization for N=64), but it turns out that one can still achieve the new lowest count by a modification of split radix (Johnson and Frigo, 2007). Although the number of arithmetic operations is not the sole factor (or even necessarily the dominant factor) in determining the time required to compute a DFT on a computer, the question of the minimum possible count is of longstanding theoretical interest. (No tight lower bound on the operation count has currently been proven.)
The split-radix algorithm can only be applied when N is a multiple of 4, but since it breaks a DFT into smaller DFTs it can be combined with any other FFT algorithm as desired.
Split-radix decomposition
Recall that the DFT is defined by the formula:
X_k = Σ_{n=0}^{N−1} x_n ω_N^{nk},
where k is an integer ranging from 0 to N − 1 and ω_N denotes the primitive root of unity:
ω_N = e^{−2πi/N},
and thus: ω_N^N = 1.
The split-radix algorithm works by expressing this summation in terms of three smaller summations. (Here, we give the "decimation in time" version of the split-
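The decomposition described above translates directly into a short recursive routine. The following is an illustrative sketch (decimation in time, power-of-two lengths only), written for clarity rather than speed and not an optimized implementation:

```python
# Split-radix FFT: one length-N/2 DFT over even-indexed samples and two
# length-N/4 DFTs over the samples at indices 4m+1 and 4m+3.
import cmath

def split_radix_fft(x):
    N = len(x)
    if N == 1:
        return list(x)
    if N == 2:                       # split radix needs N divisible by 4
        return [x[0] + x[1], x[0] - x[1]]
    U  = split_radix_fft(x[0::2])    # DFT of x_{2m},   length N/2
    Z  = split_radix_fft(x[1::4])    # DFT of x_{4m+1}, length N/4
    Zp = split_radix_fft(x[3::4])    # DFT of x_{4m+3}, length N/4
    X = [0j] * N
    for k in range(N // 4):
        w1 = cmath.exp(-2j * cmath.pi * k / N)        # omega_N^k
        w3 = cmath.exp(-2j * cmath.pi * 3 * k / N)    # omega_N^{3k}
        s = w1 * Z[k] + w3 * Zp[k]
        d = 1j * (w1 * Z[k] - w3 * Zp[k])
        X[k]              = U[k] + s
        X[k + N // 2]     = U[k] - s
        X[k + N // 4]     = U[k + N // 4] - d
        X[k + 3 * N // 4] = U[k + N // 4] + d
    return X
```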
|
https://en.wikipedia.org/wiki/Roud%20Folk%20Song%20Index
|
The Roud Folk Song Index is a database of around 250,000 references to nearly 25,000 songs collected from oral tradition in the English language from all over the world. It is compiled by Steve Roud. Roud's Index is a combination of the Broadside Index (printed sources before 1900) and a "field-recording index" compiled by Roud. It subsumes all the previous printed sources known to Francis James Child (the Child Ballads) and includes recordings from 1900 to 1975. Until early 2006, the index was available by a CD subscription; now it can be found online on the Vaughan Williams Memorial Library website, maintained by the English Folk Dance and Song Society (EFDSS). A partial list is also available at List of folk songs by Roud number.
Purpose of index
The primary function of the Roud Folk Song Index is as a research aid correlating versions of traditional English-language folk song lyrics independently documented over past centuries by many different collectors across (especially) the UK and North America. It is possible by searching the database, for example by title, by first line(s), or subject matter (or a combination of any of a dozen fields) to locate each of the often numerous variants of a particular song. Comprehensive details of those songs are then available, including details of the original collected source, and a reference to where to find the text (and possibly music) of the song within a published volume in the EFDSS archive.
A related index, the Roud Broadside Index, includes references to songs which appeared on broadsides and other cheap print publications, up to about 1920. In addition, there are many entries for music hall songs, pre-World War II radio performers' song folios, sheet music, etc. The index may be searched by title, first line etc. and the result includes details of the original imprint and where a copy may be located. The Roud number – "Roud num" – field may be used as a cross-reference to the Roud Folk Song Index itself in order
|
https://en.wikipedia.org/wiki/Variome
|
The variome is the whole set of genetic variations found in populations of species that have gone through a relatively short evolutionary change. For example, among humans, about 1 in every 1,200 nucleotide bases differs. The size of the human variome, in terms of effective population size, is claimed to be about 10,000 individuals. This variation rate is comparatively small compared to other species. For example, the effective population size of tigers, whose total wild population is perhaps less than 10,000, is not much smaller than that of the human species, indicating a much higher level of genetic diversity even though tigers are close to extinction in the wild. In practice, the variome can be the sum of the single nucleotide polymorphisms (SNPs), indels, and structural variation (SV) of a population or species. The Human Variome Project seeks to compile this genetic variation data worldwide. Variomics is the study of the variome and a branch of bioinformatics.
Ethnic variomes
The human variome can be subdivided into smaller, ethnicity-specific variomes. Each such variome can be used to filter out common ethnicity-specific variants when analysing matched tumour–normal samples, allowing more efficient detection of somatic mutations that can be relevant to certain anti-cancer drugs. KoVariome is one such ethnicity-specific variome, whose project uses the term variome to denote its identity and connection to the concept of the broader human variome. KoVariome's founders have been affiliated with the Human Variome Project (HVP) since its early days, when Prof. Richard Cotton initiated various efforts to compile human variome resources.
Many curated databases have been established to document the impact of clinically significant sequence variations, such as dbSNP or ClinVar. Similarly, many services have been developed by the bioinformatics community to search the literature for variants.
Etymology
The blend word 'variome' is from genetic variant (“a version of a gene that differ
|
https://en.wikipedia.org/wiki/Sustainable%20drainage%20system
|
Sustainable drainage systems (also known as SuDS, SUDS, or sustainable urban drainage systems) are a collection of water management practices that aim to align modern drainage systems with natural water processes and are part of a larger green infrastructure strategy. SuDS efforts make urban drainage systems more compatible with components of the natural water cycle such as storm surge overflows, soil percolation, and bio-filtration. These efforts hope to mitigate the effect human development has had or may have on the natural water cycle, particularly surface runoff and water pollution trends.
SuDS have become popular in recent decades as understanding of how urban development affects natural environments, as well as concern for climate change and sustainability, have increased. SuDS often use built components that mimic natural features in order to integrate urban drainage systems into the natural drainage systems of a site as efficiently and quickly as possible. SuDS infrastructure has become a large part of the Blue-Green Cities demonstration project in Newcastle upon Tyne.
History of drainage systems
Drainage systems have been found in ancient cities over 5,000 years old, including Minoan, Indus, Persian, and Mesopotamian civilizations. These drainage systems focused mostly on reducing nuisances from localized flooding and waste water. Rudimentary systems made from brick or stone channels constituted the extent of urban drainage technologies for centuries. Cities in Ancient Rome also employed drainage systems to protect low-lying areas from excess rainfall. When builders began constructing aqueducts to import fresh water into cities, urban drainage systems became integrated into water supply infrastructure for the first time as a unified urban water cycle.
Modern drainage systems did not appear until the 19th century in Western Europe, although most of these systems were primarily built to deal with sewage issues rising from rapid urbanization. One such exa
|
https://en.wikipedia.org/wiki/Physical%20address
|
In computing, a physical address (also real address or binary address) is a memory address that is represented in the form of a binary number on the address bus circuitry in order to enable the data bus to access a particular storage cell of main memory, or a register of a memory-mapped I/O device.
Use by central processing unit
In a computer supporting virtual memory, the term physical address is used mostly to differentiate from a virtual address. In particular, in computers utilizing a memory management unit (MMU) to translate memory addresses, the virtual and physical addresses refer to an address before and after translation performed by the MMU, respectively.
Unaligned addressing
Depending upon its underlying computer architecture, the performance of a computer may be hindered by unaligned access to memory. For example, a 16-bit computer with a 16-bit memory data bus, such as the Intel 8086, generally has less overhead if the access is aligned to an even address. In that case, fetching one 16-bit value requires a single memory read operation, a single transfer over the data bus.
If the 16-bit data value starts at an odd address instead, the processor may need to perform two memory read cycles to load the value: one read of the word at the lower address (discarding half of it) and a second read cycle of the word at the higher address (again discarding half of the retrieved data). On some processors, such as the Motorola 68000 and Motorola 68010 processors, and SPARC processors, unaligned memory accesses result in an exception being raised (usually causing a software exception, such as POSIX's SIGBUS, to be raised).
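The cost difference can be made concrete with a toy model of a 16-bit bus. The code below is purely illustrative (the memory contents and helper names are made up): an aligned 16-bit load costs one bus cycle, an unaligned one costs two, with half of each fetched word discarded.

```python
# Toy model: 16-bit little-endian words, a bus that only reads aligned words.
memory = bytes(range(16))            # pretend physical memory

def bus_read_word(addr):
    """One bus cycle: return the aligned 16-bit word containing addr."""
    base = addr & ~1                 # force alignment to an even address
    return memory[base] | (memory[base + 1] << 8)

def load_u16(addr):
    """Return (value, bus_cycles) for a 16-bit load at addr."""
    if addr % 2 == 0:                # aligned: a single bus cycle
        return bus_read_word(addr), 1
    low  = bus_read_word(addr) >> 8        # keep only the upper byte of word 1
    high = bus_read_word(addr + 1) & 0xFF  # keep only the lower byte of word 2
    return low | (high << 8), 2      # two cycles, half of each read discarded

print(load_u16(4))   # aligned   -> (value, 1)
print(load_u16(5))   # unaligned -> (value, 2)
```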
Use by other devices
The direct memory access (DMA) feature allows devices on the motherboard other than the CPU to address main memory. Such devices therefore also need knowledge of physical addresses.
See also
Address constant
Addressing mode
Address space
Page address register
Pointer (computer programming)
Primary stor
|
https://en.wikipedia.org/wiki/Treadmilling
|
In molecular biology, treadmilling is a phenomenon observed within protein filaments of the cytoskeletons of many cells, especially in actin filaments and microtubules. It occurs when one end of a filament grows in length while the other end shrinks, resulting in a section of filament seemingly "moving" across a stratum or the cytosol. This is due to the constant removal of protein subunits from these filaments at one end of the filament, while protein subunits are constantly added at the other end. Treadmilling was discovered by Wegner, who defined the thermodynamic and kinetic constraints. Wegner recognized that “the equilibrium constant (K) for association of a monomer with a polymer is the same at both ends, since the addition of a monomer to each end leads to the same polymer”; it follows that a simple reversible polymer cannot treadmill and that ATP hydrolysis is required. GTP is hydrolyzed for microtubule treadmilling.
Detailed process
Dynamics of the filament
The cytoskeleton is a highly dynamic part of a cell and cytoskeletal filaments constantly grow and shrink through addition and removal of subunits. Directed crawling motion of cells such as macrophages relies on directed growth of actin filaments at the cell front (leading edge).
Microfilaments
The two ends of an actin filament differ in their dynamics of subunit addition and removal. They are thus referred to as the plus end (with faster dynamics, also called barbed end) and the minus end (with slower dynamics, also called pointed end). This difference results from the fact that subunit addition at the minus end requires a conformational change of the subunits. Note that each subunit is structurally polar and has to attach to the filament in a particular orientation. As a consequence, the actin filaments are also structurally polar.
Elongating the actin filament occurs when free-actin (G-actin) bound to ATP associates with the filament. Under physiological conditions, it is easier for G-actin to associate at th
|
https://en.wikipedia.org/wiki/Station%20HYPO
|
Station HYPO, also known as Fleet Radio Unit Pacific (FRUPAC), was the United States Navy signals monitoring and cryptographic intelligence unit in Hawaii during World War II. It was one of two major Allied signals intelligence units, called Fleet Radio Units in the Pacific theaters, along with FRUMEL in Melbourne, Australia. The station took its initial name from the phonetic code at the time for "H" for Heʻeia, Hawaii radio tower. The precise importance and role of HYPO in penetrating the Japanese naval codes has been the subject of considerable controversy, reflecting internal tensions amongst US Navy cryptographic stations.
HYPO was under the control of the OP-20-G Naval Intelligence section in Washington. Before the attack on Pearl Harbor of December 7, 1941, and for some time afterwards, HYPO was in the basement of the Old Administration Building at Pearl Harbor. Later on, a new building was constructed for the station, though it had been reorganized and renamed by then.
Background
Cryptanalytic problems facing the United States in the Pacific prior to World War II were largely those related to Japan. An early decision by OP-20-G in Washington divided responsibilities for them among CAST at Cavite and then Corregidor, in the Philippines, HYPO in Hawaii, and OP-20-G itself in Washington. Other Navy crypto stations, including Guam and Bainbridge Island on Puget Sound were tasked and staffed for signals interception and traffic analysis.
The US Army's SIS broke into the highest level Japanese diplomatic cypher (called PURPLE by the US) well before the attack on Pearl Harbor. PURPLE produced little of military value, as the Japanese Foreign Ministry was thought by the ultra-nationalists to be unreliable. Furthermore, decrypts from PURPLE, eventually called MAGIC, were poorly distributed and used in Washington. SIS was able to build several PURPLE machine equivalents. One was sent to CAST, but as HYPO's assigned responsibility did not include PURPLE traffic, no
|
https://en.wikipedia.org/wiki/Potassium%20adipate
|
Potassium adipate is a compound with formula K2C6H8O4. It is the potassium salt of adipic acid and is used as a food additive.
It has E number E357.
See also
Sodium adipate
References
Adipates
Food additives
Potassium compounds
Food acidity regulators
E-number additives
|
https://en.wikipedia.org/wiki/Station%20CAST
|
Station CAST was the United States Navy signals monitoring and cryptographic intelligence fleet radio unit at Cavite Navy Yard in the Philippines, until Cavite was captured by the Japanese forces in 1942, during World War II. It was an important part of the Allied intelligence effort, addressing Japanese communications as the War expanded from China into the rest of the Pacific theaters. As Japanese advances in the Philippines threatened CAST, its staff and services were progressively transferred to Corregidor in Manila Bay, and eventually to a newly formed US-Australian station, FRUMEL in Melbourne, Australia.
Station CAST had originally been located at Shanghai but had been evacuated to Cavite in early 1941 as part of the US Navy's disengagement with China.
Prior to the war, CAST was the US Navy's Far East cryptographic operation, under the OP-20-G Naval Intelligence section in Washington. It was located at the Navy Yard in Manila and moved into the tunnels on Corregidor, as Japanese attacks increased. Station CAST possessed one of the PURPLE machines produced by the US Army.
Cryptanalytic problems facing the United States in the Pacific prior to World War II were largely those related to Japan. An early decision by OP-20-G divided responsibility for Japanese cryptanalysis amongst its various stations. Station CAST (at Manila in the Philippines), Station HYPO (Pearl Harbor, Hawaii), OP-20-02, and OP-20-G itself in Washington shared cryptanalytic duties. Other stations (on Guam, on Bainbridge Island in Puget Sound, etc.) were tasked and staffed for signals interception and traffic analysis.
PURPLE diplomatic traffic
The US Army Signal Intelligence Service (SIS) break into the highest-security Japanese diplomatic cypher (called PURPLE by US analysts) produced very interesting intelligence but little of military value (except for Ambassador Hiroshi Oshima's despatches from Germany), none of tactical value, and not much more of direct political value, as the Foreign Office
|
https://en.wikipedia.org/wiki/Power%20network%20design%20%28IC%29
|
In the design of integrated circuits, power network design is the analysis and design of on-chip conductor networks that distribute electrical power on a chip. As in all engineering, this involves tradeoffs: the network must provide adequate performance and be sufficiently reliable, but it should not use more resources than required.
Function
The power distribution network distributes power and ground voltages from pad locations to all devices in a design. Shrinking device dimensions, faster switching frequencies, and increasing power consumption in deep sub-micrometer technologies cause large switching currents to flow in the power and ground networks, degrading performance and reliability. A robust power distribution network is essential to ensure reliable operation of circuits on a chip. Power supply integrity verification is a critical concern in high-performance designs.
Design considerations
Due to the resistance of the interconnects constituting the network, there is a voltage drop across the network, commonly referred to as the IR-drop. The package supplies current to the pads of the power grid either by means of package leads in wire-bond chips or through C4 bump arrays in flip-chip technology. Although the resistance of the package is quite small, the inductance of the package leads is significant, and it causes a voltage drop at the pad locations due to the time-varying current drawn by the devices on the die. This voltage drop is referred to as the di/dt-drop. Therefore, the voltage seen at the devices is the supply voltage minus the IR-drop and the di/dt-drop.
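A back-of-the-envelope sketch of these two components is shown below; every number in it is an illustrative assumption, not a value from the text.

# Supply voltage actually seen at a device: Vdd minus the IR-drop
# in the grid and the L*di/dt drop across the package inductance.
VDD = 1.0            # nominal supply, volts
I_AVG = 2.0          # average current drawn through one grid path, amps
R_GRID = 0.025       # effective pad-to-device grid resistance, ohms
L_PKG = 0.5e-9       # package lead / bump inductance, henries
DI, DT = 1.0, 10e-9  # current step (amps) and its rise time (seconds)

ir_drop = I_AVG * R_GRID      # resistive drop in the on-chip grid
didt_drop = L_PKG * DI / DT   # inductive drop at the pad
v_device = VDD - ir_drop - didt_drop

print(f"IR-drop:    {ir_drop * 1e3:.0f} mV")
print(f"di/dt-drop: {didt_drop * 1e3:.0f} mV")
print(f"device sees {v_device:.2f} V of a nominal {VDD:.1f} V supply")

Even these modest example values cost a tenth of the nominal supply, which is why grid resistance and package inductance are budgeted early in the design.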
Excessive voltage drops in the power grid reduce switching speeds and noise margins of circuits, and inject noise which might lead to functional failures. High average current densities lead to undesirable wearing out of metal wires due to electromigration (EM). Therefore, the challenge in the design of a power distribution network is in achieving excellent voltage regulation at the consumption points notwithstandin
|
https://en.wikipedia.org/wiki/Memory%20controller
|
A memory controller is a digital circuit that manages the flow of data going to and from a computer's main memory. A memory controller can be a separate chip or integrated into another chip, such as being placed on the same die or as an integral part of a microprocessor; in the latter case, it is usually called an integrated memory controller (IMC). A memory controller is sometimes also called a memory chip controller (MCC) or a memory controller unit (MCU).
Memory controllers contain the logic necessary to read and write to DRAM, and to "refresh" the DRAM. Without constant refreshes, DRAM will lose the data written to it as the capacitors leak their charge within a fraction of a second. Some memory controllers include error detection and correction hardware.
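As a rough sketch of the refresh bookkeeping such a controller performs, the figures below use typical DDR-style timing parameters as assumptions rather than the values of any particular device.

# How often must a refresh command be issued, and how much DRAM time does it cost?
RETENTION_MS = 64.0   # window within which every row must be refreshed
REFRESH_CMDS = 8192   # refresh commands needed to cover all rows once
T_RFC_NS = 350.0      # time the DRAM is busy servicing one refresh command

t_refi_us = (RETENTION_MS * 1000.0) / REFRESH_CMDS  # average refresh interval
overhead = (T_RFC_NS / 1000.0) / t_refi_us          # fraction of time spent refreshing

print(f"issue a refresh roughly every {t_refi_us:.1f} us")
print(f"refresh consumes about {overhead * 100:.1f}% of available DRAM time")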
The memory management unit (MMU), which in many operating systems implements virtual addressing, is a related but distinct component: the MMU translates virtual addresses into physical addresses, while the memory controller manages the physical memory devices themselves.
History
Most modern desktop or workstation microprocessors use an integrated memory controller (IMC), including microprocessors from Intel, AMD, and those built around the ARM architecture.
Prior to K8 (circa 2003), AMD microprocessors had a memory controller implemented on their motherboard's northbridge. In K8 and later, AMD employed an integrated memory controller. Likewise, until Nehalem (circa 2008), Intel microprocessors used memory controllers implemented on the motherboard's northbridge. Nehalem and later switched to an integrated memory controller.
Other examples of microprocessors that use integrated memory controllers include NVIDIA's Fermi, IBM's POWER5, and Sun Microsystems's UltraSPARC T1.
While an integrated memory controller has the potential to increase the system's performance, such as by reducing memory latency, it locks the microprocessor to a specific type (or types) of memory, forcing a redesign in order to support newer memory technologies. When DDR2 SDRAM was introduced, AMD released new Athlon 64 CPUs. These new models, with a DDR2 controller, us
|
https://en.wikipedia.org/wiki/TIMIT
|
TIMIT is a corpus of phonemically and lexically transcribed speech of American English speakers of different sexes and dialects. Each transcribed element has been delineated in time.
TIMIT was designed to further acoustic-phonetic knowledge and automatic speech recognition systems. It was commissioned by DARPA and corpus design was a joint effort between the Massachusetts Institute of Technology, SRI International, and Texas Instruments (TI). The speech was recorded at TI, transcribed at MIT, and verified and prepared for publishing by the National Institute of Standards and Technology (NIST). There is also a telephone bandwidth version called NTIMIT (Network TIMIT).
TIMIT and NTIMIT are not freely available: access to the dataset requires either membership of the Linguistic Data Consortium or a monetary payment.
History
The TIMIT corpus was an early attempt to create a database of speech samples. It was published in 1988 on CD-ROM and consists of only 10 sentences per speaker: two 'dialect' sentences read by every speaker, plus another eight sentences selected from a larger set. Each sentence averages about 3 seconds in length, and the recordings come from 630 different speakers. It was the first notable attempt at creating and distributing a speech corpus, and the overall project cost about 1.5 million US$.
The full name of the project is DARPA-TIMIT Acoustic-Phonetic Continuous Speech Corpus, and the acronym TIMIT stands for Texas Instruments/Massachusetts Institute of Technology. The main reason the corpus was created was to train speech recognition software. In the Blizzard Challenge, different software systems must convert audio recordings into textual data, and the TIMIT corpus was used as a standardized baseline.
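Each recording in the corpus is distributed with time-aligned annotation files; a minimal reader for the phone-level files might look like the sketch below. The 'start-sample end-sample label' layout of the .PHN files and the 16 kHz sampling rate are assumptions based on common descriptions of the corpus, and the file name in the example is hypothetical.

# Parse a TIMIT phone-annotation (.PHN) file into
# (start_seconds, end_seconds, phone_label) tuples.
SAMPLE_RATE = 16000  # assumed sampling rate of the distributed waveforms

def read_phn(path):
    segments = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip blank or malformed lines
            start, end, label = int(parts[0]), int(parts[1]), parts[2]
            segments.append((start / SAMPLE_RATE, end / SAMPLE_RATE, label))
    return segments

if __name__ == "__main__":
    for start, end, label in read_phn("SA1.PHN"):  # hypothetical path
        print(f"{start:7.3f}s  {end:7.3f}s  {label}")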
See also
Comparison of datasets in machine learning
References
External links
TIMIT Acoustic-Phonetic Continuous Speech Corpus
Applied linguistics
Computational linguistics
Corpora
Di
|
https://en.wikipedia.org/wiki/Diode-or%20circuit
|
A diode-OR circuit is used in electronics to isolate two or more voltage sources. There are two typical implementations:
When a DC supply voltage needs to be generated from one of a number of different sources, for example when terminating a parallel SCSI bus, a very simple diode-OR arrangement can be used, with each source feeding the shared rail through its own diode.
In digital electronics, a diode-OR circuit is used to derive a simple Boolean logic function. This kind of circuit was once very common in diode–transistor logic but has been largely replaced by CMOS in modern electronics.
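A behavioural sketch of both uses follows; it assumes an idealized diode with a constant forward drop, so it illustrates the principle rather than any real part.

# Idealized diode-OR behaviour: the shared output follows the highest
# input minus one diode forward drop, and lower sources are isolated.
FORWARD_DROP = 0.7  # assumed silicon-diode forward voltage, volts

def diode_or_supply(*source_volts):
    """Supply-isolation use: combine several DC sources onto one rail."""
    return max(max(source_volts) - FORWARD_DROP, 0.0)

def diode_or_logic(*inputs):
    """Logic use: the output is high if any input is high (Boolean OR)."""
    return any(inputs)

if __name__ == "__main__":
    print(diode_or_supply(5.0, 4.3, 0.0))  # 4.3 V rail, fed by the 5.0 V source
    print(diode_or_logic(False, True))     # True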
Logic gates
|