https://en.wikipedia.org/wiki/Database%20forensics
|
Database forensics is a branch of digital forensic science relating to the forensic study of databases and their related metadata.
The discipline is similar to computer forensics, following the normal forensic process and applying investigative techniques to database contents and metadata. Cached information may also exist in a server's RAM, requiring live analysis techniques.
A forensic examination of a database may relate to the timestamps that apply to the update time of a row in a relational table being inspected and tested for validity in order to verify the actions of a database user. Alternatively, a forensic examination may focus on identifying transactions within a database system or application that indicate evidence of wrongdoing, such as fraud.
Software tools can be used to manipulate and analyse data. These tools also provide audit logging capabilities which provide documented proof of what tasks or analysis a forensic examiner performed on the database.
Many database software tools are currently not reliable and precise enough to be used for forensic work, as demonstrated in the first paper published on database forensics.
There is currently a single dedicated book published in this field, though more are expected. There is also a subsequent, well-regarded SQL Server forensics book by Kevvie Fowler, titled SQL Server Forensics.
The forensic study of relational databases requires a knowledge of the standard used to encode data on the computer disk. Documentation of the standards used to encode information in well-known database brands such as SQL Server and Oracle has been contributed to the public domain. Others include Apex Analytix.
Because the forensic analysis of a database is not executed in isolation, the technological framework within which a subject database exists is crucial to understanding and resolving questions of data authenticity and integrity especially as it relates to database users.
Further reading
Farmer and Ven
|
https://en.wikipedia.org/wiki/Financial%20risk%20modeling
|
Financial risk modeling is the use of formal mathematical and econometric techniques to measure, monitor and control the market risk, credit risk, and operational risk on a firm's balance sheet, on a bank's trading book, or on a fund manager's portfolio value; see Financial risk management.
Risk modeling is one of many subtasks within the broader area of financial modeling.
Application
Risk modeling uses a variety of techniques, including value at risk (VaR), historical simulation (HS), and extreme value theory (EVT), in order to analyze a portfolio and make forecasts of the likely losses that would be incurred for a variety of risks. As above, such risks are typically grouped into credit risk, market risk, model risk, liquidity risk, and operational risk categories.
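As a minimal sketch of one of these techniques, historical-simulation VaR can be computed as an empirical quantile of past losses (the data and parameter names below are illustrative, not a production risk model):

import numpy as np

def historical_var(returns, confidence=0.99):
    # Loss threshold exceeded on roughly (1 - confidence) of historical days.
    losses = -np.asarray(returns)                   # losses are negated returns
    return np.percentile(losses, confidence * 100)

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, 1000)      # stand-in return history
print(f"99% one-day VaR: {historical_var(daily_returns):.4f}")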
Many large financial intermediary firms use risk modeling to help portfolio managers assess the amount of capital reserves to maintain, and to help guide their purchases and sales of various classes of financial assets.
Formal risk modeling is required under the Basel II proposal for all the major international banking institutions by the various national depository institution regulators. In the past, risk analysis was done qualitatively, but now, with the advent of powerful computing software, quantitative risk analysis can be done quickly and efficiently.
Criticism
Modeling the changes by distributions with finite variance is now known to be inappropriate. Benoît Mandelbrot found in the 1960s that changes in prices in financial markets do not follow a Gaussian distribution, but are rather modeled better by Lévy stable distributions. The scale of change, or volatility, depends on the length of the time interval to a power a bit more than 1/2. Large changes up or down, also called fat tails, are more likely than what one would calculate using a Gaussian distribution with an estimated standard deviation.
Quantitative risk analysis and its modeling have been under question in the light
|
https://en.wikipedia.org/wiki/Flyback%20converter
|
The flyback converter is used in both AC/DC and DC/DC conversion with galvanic isolation between the input and any outputs. The flyback converter is a buck-boost converter with the inductor split to form a transformer, so that the voltage ratios are multiplied, with the additional advantage of isolation. When driving, for example, a plasma lamp or a voltage multiplier, the rectifying diode of the boost converter is left out and the device is called a flyback transformer.
Structure and principle
The schematic of a flyback converter can be seen in Fig. 1. It is equivalent to that of a buck-boost converter, with the inductor split to form a transformer. Therefore, the operating principle of both converters is very similar:
When the switch is closed (top of Fig. 2), the primary of the transformer is directly connected to the input voltage source. The primary current and magnetic flux in the transformer increase, storing energy in the transformer. The voltage induced in the secondary winding is negative, so the diode is reverse-biased (i.e., blocked). The output capacitor supplies energy to the output load.
When the switch is opened (bottom of Fig. 2), the primary current and magnetic flux drops. The secondary voltage is positive, forward-biasing the diode, allowing current to flow from the transformer. The energy from the transformer core recharges the capacitor and supplies the load.
The operation of storing energy in the transformer before transferring it to the output of the converter allows the topology to easily generate multiple outputs with little additional circuitry, although the output voltages have to be able to match each other through the turns ratio. There is also a need for a controlling rail which has to be loaded before load is applied to the uncontrolled rails; this allows the PWM to open up and supply enough energy to the transformer.
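For reference, the ideal steady-state output voltage in continuous-conduction mode follows the buck-boost relation scaled by the turns ratio (a standard result, stated here for completeness):
$$V_\text{out} = V_\text{in} \cdot \frac{N_s}{N_p} \cdot \frac{D}{1-D}$$
where $D$ is the switch duty cycle and $N_s/N_p$ is the secondary-to-primary turns ratio.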
Operations
The flyback converter is an isolated power converter. The two prevailing control schemes are vo
|
https://en.wikipedia.org/wiki/Gene%20cassette
|
In biology, a gene cassette is a type of mobile genetic element that contains a gene and a recombination site. Each cassette usually contains a single gene and tends to be very small; on the order of 500–1000 base pairs. They may exist incorporated into an integron or freely as circular DNA. Gene cassettes can move around within an organism's genome or be transferred to another organism in the environment via horizontal gene transfer. These cassettes often carry antibiotic resistance genes. An example would be the kanMX cassette which confers kanamycin (an antibiotic) resistance upon bacteria.
Integrons
Integrons are genetic structures in bacteria which express and are capable of acquiring and exchanging gene cassettes. The integron consists of a promoter, an attachment site, and an integrase gene that encodes a site-specific recombinase. There are three classes of integrons described. The mobile units that insert into integrons are gene cassettes. For cassettes that carry a single gene without a promoter, the entire series of cassettes is transcribed from an adjacent promoter within the integron. The gene cassettes are speculated to be inserted and excised via a circular intermediate. This would involve recombination between short sequences found at their termini, known as 59-base elements (59-be), which despite the name may not be 59 bases long. The 59-be are a diverse family of sequences that function as recognition sites for the site-specific integrase (the enzyme responsible for integrating the gene cassette into an integron) and occur downstream from the gene coding sequence.
Diversity and prevalence
The ability of genetic elements like gene cassettes to excise and insert into genomes results in highly similar gene regions appearing in distantly related organisms. The three classes of integrons are similar in structure and are identified by where the insertions occur and what systems they coincide with. Class 1 integrons are seen in a diverse group of bacterial genomes and
|
https://en.wikipedia.org/wiki/Structure%20mapping%20engine
|
In artificial intelligence and cognitive science, the structure mapping engine (SME) is an implementation in software of an algorithm for analogical matching based on the psychological theory of Dedre Gentner. The basis of Gentner's structure-mapping idea is that an analogy is a mapping of knowledge from one domain (the base) into another (the target). The structure-mapping engine is a computer simulation of the analogy and similarity comparisons.
The theory is useful because it ignores surface features and finds matches between potentially very different things if they have the same representational structure. For example, SME could determine that a pen is like a sponge because both are involved in dispensing liquid, even though they do this very differently.
Structure mapping theory
Structure mapping theory is based on the systematicity principle, which states that connected knowledge is preferred over independent facts. Therefore, the structure mapping engine should ignore isolated source-target mappings unless they are part of a bigger structure. The SME, the theory goes, should map objects that are related to knowledge that has already been mapped.
The theory also requires that mappings be done one-to-one, which means that no part of the source description can map to more than one item in the target and no part of the target description can be mapped to more than one part of the source. The theory also requires that if a match maps subject to target, the arguments of subject and target must also be mapped. If both these conditions are met, the mapping is said to be "structurally consistent."
Concepts in SME
SME maps knowledge from a source into a target. SME calls each description a dgroup. Dgroups contain a list of entities and predicates. Entities represent the objects or concepts in a description — such as an input gear or a switch. Predicates are one of three types and are a general way to express knowledge for SME.
Relation predicates contain mul
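A minimal sketch of how a dgroup might be represented, based only on the description above (the names and structure are hypothetical, not the actual SME implementation):

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str                 # an object or concept, e.g. "input-gear"

@dataclass
class Predicate:
    kind: str                 # "relation", "attribute", or "function"
    name: str
    args: list                # entities or other predicates

@dataclass
class Dgroup:                 # one description: the base or the target
    name: str
    entities: list = field(default_factory=list)
    predicates: list = field(default_factory=list)

gear_in, gear_out = Entity("input-gear"), Entity("output-gear")
base = Dgroup("gears",
              entities=[gear_in, gear_out],
              predicates=[Predicate("relation", "turns", [gear_in, gear_out])])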
|
https://en.wikipedia.org/wiki/Key%20clustering
|
A key or hash function should avoid clustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behaviour.
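As a tiny illustration of the definition (not from the article), the division hash h(k) = k mod m maps runs of related, consecutive keys to runs of consecutive slots, which is exactly the clustering that degrades probing schemes:

m = 16
keys = [1000 + i for i in range(8)]   # related, consecutive keys
slots = [k % m for k in keys]
print(slots)                          # [8, 9, 10, 11, 12, 13, 14, 15]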
References
Key management
|
https://en.wikipedia.org/wiki/Multiplicity%20function%20for%20N%20noninteracting%20spins
|
The multiplicity function for a two-state paramagnet, W(n,N), is the number of spin states such that n of the N spins point in the z-direction. This function is given by the combinatoric function C(N,n). That is:
$$W(n,N) = \binom{N}{n} = \frac{N!}{n!\,(N-n)!}$$
It is primarily used in introductory statistical mechanics and thermodynamics textbooks to explain the microscopic definition of entropy to students. If the spins are non-interacting, then the multiplicity function counts the number of states which have the same energy in an external magnetic field. By definition, the entropy S is then given by the natural logarithm of this number:
$$S = k \ln W(n,N)$$
where k is the Boltzmann constant.
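A quick numeric check of these formulas (with k set to 1, so the entropy is in units of k):

from math import comb, log

def multiplicity(n, N):
    return comb(N, n)          # C(N, n): choose which n of the N spins point up

N = 10
for n in (0, 2, 5):
    W = multiplicity(n, N)
    print(n, W, log(W))        # S/k = ln W is largest near n = N/2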
References
Thermodynamics
|
https://en.wikipedia.org/wiki/Pseudorandom%20permutation
|
In cryptography, a pseudorandom permutation (PRP) is a function that cannot be distinguished from a random permutation (that is, a permutation selected at random with uniform probability, from the family of all permutations on the function's domain) with practical effort.
Definition
Let F be a mapping $\{0,1\}^s \times \{0,1\}^n \rightarrow \{0,1\}^n$. F is a PRP if and only if
For any $K \in \{0,1\}^s$, $F_K$ is a bijection from $\{0,1\}^n$ to $\{0,1\}^n$, where $F_K(x) = F(K,x)$.
For any $K \in \{0,1\}^s$, there is an "efficient" algorithm to evaluate $F_K(x)$ for any $x \in \{0,1\}^n$.
For all probabilistic polynomial-time distinguishers $D$: $\left|\Pr\left(D^{F_K}(1^n)=1\right) - \Pr\left(D^{f_n}(1^n)=1\right)\right| < \varepsilon(s)$, where $K \in \{0,1\}^s$ is chosen uniformly at random and $f_n$ is chosen uniformly at random from the set of permutations on n-bit strings.
A pseudorandom permutation family is a collection of pseudorandom permutations, where a specific permutation may be chosen using a key.
The model of block ciphers
The idealized abstraction of a (keyed) block cipher is a truly random permutation on the mappings between plaintext and ciphertext. If a distinguishing algorithm exists that achieves significant advantage with less effort than specified by the block cipher's security parameter (this usually means the effort required should be about the same as a brute force search through the cipher's key space), then the cipher is considered broken at least in a certificational sense, even if such a break doesn't immediately lead to a practical security failure.
Modern ciphers are expected to have super pseudorandomness.
That is, the cipher should be indistinguishable from a randomly chosen permutation on the same message space, even if the adversary has black-box access to the forward and inverse directions of the cipher.
Connections with pseudorandom function
Michael Luby and Charles Rackoff showed that a "strong" pseudorandom permutation can be built from a pseudorandom function using a Luby–Rackoff construction which is built using a Feistel cipher.
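The construction can be sketched as follows (an illustrative Python toy that uses HMAC-SHA256 as a stand-in pseudorandom function; this is not a vetted cipher):

import hashlib
import hmac

N = 16  # half-block size in bytes (an assumption for this sketch)

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(round_keys, block: bytes) -> bytes:
    left, right = block[:N], block[N:]
    for k in round_keys:       # four rounds yield a strong PRP per Luby-Rackoff
        left, right = right, xor(left, prf(k, right))
    return left + right

def feistel_decrypt(round_keys, block: bytes) -> bytes:
    left, right = block[:N], block[N:]
    for k in reversed(round_keys):
        left, right = xor(right, prf(k, left)), left
    return left + right

keys = [bytes([i]) * 32 for i in range(4)]
pt = b"0123456789abcdef" * 2
ct = feistel_encrypt(keys, pt)
assert feistel_decrypt(keys, ct) == pt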
Related concepts
Unpredictable permutation
An unpredictable permutation (UP) Fk is a permutation whose values cannot be predicted by a fast randomized
|
https://en.wikipedia.org/wiki/Computational%20chemical%20methods%20in%20solid-state%20physics
|
Computational chemical methods in solid-state physics follow the same approach as they do for molecules, but with two differences. First, the translational symmetry of the solid has to be utilised, and second, it is possible to use completely delocalised basis functions such as plane waves as an alternative to the molecular atom-centered basis functions. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies, therefore they can be applied to band structure calculations. Since it is already time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate these energies for the entire set of points in the Brillouin zone.
Calculations can use the Hartree–Fock method, some post-Hartree–Fock methods, particularly Møller–Plesset perturbation theory to second order (MP2) and density functional theory (DFT).
See also
List of quantum chemistry and solid-state physics software
References
Computational Chemistry, David Young, Wiley-Interscience, 2001. Chapter 41, p. 318. The extensive references in that chapter provide further reading on this topic.
Computational Chemistry and Molecular Modeling: Principles and Applications, K. I. Ramachandran, G. Deepa and Krishnan Namboori P. K., Springer-Verlag GmbH.
Computational chemistry
Theoretical chemistry
Computational science
Condensed matter physics
Computational physics
|
https://en.wikipedia.org/wiki/Directed%20evolution
|
Directed evolution (DE) is a method used in protein engineering that mimics the process of natural selection to steer proteins or nucleic acids toward a user-defined goal. It consists of subjecting a gene to iterative rounds of mutagenesis (creating a library of variants), selection (expressing those variants and isolating members with the desired function) and amplification (generating a template for the next round). It can be performed in vivo (in living organisms), or in vitro (in cells or free in solution). Directed evolution is used both for protein engineering as an alternative to rationally designing modified proteins, as well as for experimental evolution studies of fundamental evolutionary principles in a controlled, laboratory environment.
History
Directed evolution has its origins in the 1960s with the evolution of RNA molecules in the "Spiegelman's Monster" experiment. The concept was extended to protein evolution via evolution of bacteria under selection pressures that favoured the evolution of a single gene in its genome.
Early phage display techniques in the 1980s allowed targeting of mutations and selection to a single protein. This enabled selection of enhanced binding proteins, but was not yet compatible with selection for catalytic activity of enzymes. Methods to evolve enzymes were developed in the 1990s and brought the technique to a wider scientific audience. The field rapidly expanded with new methods for making libraries of gene variants and for screening their activity. The development of directed evolution methods was honored in 2018 with the awarding of the Nobel Prize in Chemistry to Frances Arnold for evolution of enzymes, and George Smith and Gregory Winter for phage display.
Principles
Directed evolution is a mimic of the natural evolution cycle in a laboratory setting. Evolution requires three things to happen: variation between replicators, that the variation causes fitness differences upon which selection acts, and that this v
|
https://en.wikipedia.org/wiki/Tak%20%28function%29
|
In computer science, the Tak function is a recursive function, named after Ikuo Takeuchi (竹内郁雄). It is defined as follows:
def tak(x, y, z):
    if y < x:
        return tak(
            tak(x-1, y, z),
            tak(y-1, z, x),
            tak(z-1, x, y)
        )
    else:
        return z
This function is often used as a benchmark for languages with optimization for recursion.
tak() vs. tarai()
The original definition by Takeuchi was as follows:
def tarai(x, y, z):
    if y < x:
        return tarai(
            tarai(x-1, y, z),
            tarai(y-1, z, x),
            tarai(z-1, x, y)
        )
    else:
        return y  # not z!
tarai is short for たらい回し tarai mawashi, "to pass around" in Japanese.
John McCarthy named this function tak() after Takeuchi.
However, in certain later references, the y somehow got turned into the z. This is a small, but significant difference because the original version benefits significantly from lazy evaluation.
Though written in exactly the same manner as the others, the Haskell code below runs much faster.
tarai :: Int -> Int -> Int -> Int
tarai x y z
  | x <= y    = y
  | otherwise = tarai (tarai (x-1) y z)
                      (tarai (y-1) z x)
                      (tarai (z-1) x y)
One can easily accelerate this function via memoization, yet lazy evaluation still wins.
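For instance, the Python tarai above can be memoized with functools.lru_cache (a sketch of the memoization just mentioned; the lazy variant below is still faster):

from functools import lru_cache

@lru_cache(maxsize=None)
def tarai_memo(x, y, z):
    if y < x:
        return tarai_memo(tarai_memo(x-1, y, z),
                          tarai_memo(y-1, z, x),
                          tarai_memo(z-1, x, y))
    return y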
The best known way to optimize tarai is to use a mutually recursive helper function, as follows.
def laziest_tarai(x, y, zx, zy, zz):
    if not y < x:
        return y
    else:
        z = tarai(zx, zy, zz)  # the deferred third argument, evaluated only on this branch
        return laziest_tarai(tarai(x-1, y, z),
                             tarai(y-1, z, x),
                             z-1, x, y)

def tarai(x, y, z):
    if not y < x:
        return y
    else:
        return laziest_tarai(tarai(x-1, y, z),
                             tarai(y-1, z, x),
                             z-1, x, y)
Here is an efficient implementation of tarai() in C:
int tarai(int x, int y, int z)
{
    while (x > y) {
        int oldx = x, oldy = y;
        x = tarai(
|
https://en.wikipedia.org/wiki/Mineral%20economics
|
Mineral economics is the academic discipline that investigates and promotes understanding of economic and policy issues associated with the production and use of mineral commodities.
Mineral economics is specially concerned with the analysis and understanding of mineral distribution as well as the 'discovery, exploitation, and marketing of minerals'. It is an academic discipline which constructs policies regarding mineral commodities and their global distribution.
The discipline of mineral economics examines the success of the mining industry and its implications, and the impact the industry has on the economy, on society, and on the climate. Mineral economics is a continuing, evolving field which originally started after the Second World War and has continued to expand in today's modern climate. Identifying mineral sectors, the total revenue they derive from specific commodities, and how this varies across countries is significant for global trade and productivity. Australia is a leading exporter of several mineral commodities, which provide a substantial percentage of revenue within the Australian economy. Understanding the other leading mineral-trading nations and their contributions is likewise significant for forming concise policy parameters. The establishment of such findings addresses concerns regarding societal support and sustainability. The sustainability of the mining industry is also a key focus, and its direct impact on the environment must be monitored and necessary parameters applied.
The history of mineral economics
Mineral economics did not become an academic discipline until after the Second World War, with the majority of earlier research being completed in other disciplines and fields. Since the 1940s, however, mineral economics has continued to develop by recognising the demand for mineral commodities and the global increase in their trade.
|
https://en.wikipedia.org/wiki/Conformal%20coating
|
Conformal coating is a protective, breathable coating of thin polymeric film applied to printed circuit boards (PCBs). Conformal coatings are typically applied at 25–250 μm to the electronic circuitry and provide protection against moisture and other harsher conditions.
Coatings can be applied in a number of ways, including brushing, spraying, dispensing, and dip coating. A number of materials can be used as conformal coatings, such as acrylics, silicones, urethanes and parylene. Each has its own characteristics, suiting it to different manufacturing use cases. Many circuit board assembly firms can coat assemblies with a layer of transparent conformal coating, which is used as an alternative to potting.
Reasons for use
Conformal coatings are used to protect electronic components from the environmental factors they are exposed to. Examples of these factors include moisture, dust, salt, chemicals, temperature changes and mechanical abrasion, as well as corrosion. More recently, conformal coatings are being used to reduce the formation of whiskers, and can also prevent current bleed between closely positioned components.
Conformal coatings are breathable, allowing trapped moisture in electronic boards to escape while maintaining protection from contamination. These coatings are not sealants, and prolonged exposure to vapors will cause transmission and degradation to occur. There are typically four classes of conformal coatings: acrylic, urethane, silicone, and varnish. While each has its own specific physical and chemical properties, each is able to perform the following functions:
Insulation: Allowing closer conductor spacing
Minimal effect on component weight
Greater board protection against environmental, chemical, and corrosive exposure
Applications
Precision analog circuitry may suffer degraded accuracy if insulating surfaces become contaminated with ionic substances such as fingerprint residues, which can become mildly co
|
https://en.wikipedia.org/wiki/Method%20of%20images
|
The method of images (or method of mirror images) is a mathematical tool for solving differential equations, in which the domain of the sought function is conceptually extended by the addition of its mirror image with respect to a symmetry hyperplane. As a result, certain boundary conditions are satisfied automatically by the presence of a mirror image, greatly facilitating the solution of the original problem. Equivalently, one can view the domain of the function as not extended: the function is made to satisfy given boundary conditions by placing singularities outside the domain of the function. The original singularities are inside the domain of interest. The additional (fictitious) singularities are an artifact needed to satisfy the prescribed but as yet unsatisfied boundary conditions.
Method of image charges
The method of image charges is used in electrostatics to simplify the calculation or visualization of the distribution of the electric field of a charge in the vicinity of a conducting surface. It is based on the fact that the tangential component of the electric field on the surface of a conductor is zero, and that an electric field E in some region is uniquely defined by its normal component over the surface that confines this region (the uniqueness theorem).
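In the canonical textbook example, a point charge $q$ at height $d$ above an infinite grounded conducting plane at $z = 0$ produces, in the region $z > 0$, the same field as the charge together with an image charge $-q$ at $z = -d$:
$$V(x,y,z) = \frac{q}{4\pi\varepsilon_0}\left(\frac{1}{\sqrt{x^2+y^2+(z-d)^2}} - \frac{1}{\sqrt{x^2+y^2+(z+d)^2}}\right), \qquad z \ge 0,$$
which vanishes on the plane $z = 0$, as the boundary condition requires.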
Magnet-superconductor systems
The method of images may also be used in magnetostatics for calculating the magnetic field of a magnet that is close to a superconducting surface. The superconductor in so-called Meissner state is an ideal diamagnet into which the magnetic field does not penetrate. Therefore, the normal component of the magnetic field on its surface should be zero. Then the image of the magnet should be mirrored. The force between the magnet and the superconducting surface is therefore repulsive.
Compared to the case of the charge dipole above a flat conducting surface, the mirrored magnetization vector can be thought of as due to an additional sign change of an axial vector.
In order to take into account the magnetic flux pinning p
|
https://en.wikipedia.org/wiki/Fault%20%28technology%29
|
In document ISO 10303-226, a fault is defined as an abnormal condition or defect at the component, equipment, or sub-system level which may lead to a failure.
In telecommunications, according to the Federal Standard 1037C of the United States, the term fault has the following meanings:
An accidental condition that causes a functional unit to fail to perform its required function.
A defect that causes a reproducible or catastrophic malfunction. A malfunction is considered reproducible if it occurs consistently under the same circumstances.
In power systems, an unintentional short circuit, or partial short circuit, between energized conductors or between an energized conductor and ground. A distinction can be made between symmetric and asymmetric faults. See Fault (power engineering).
Random fault
A random fault is a fault that occurs as a result of wear or other deterioration. Whereas the time of a particular occurrence of such a fault cannot be determined, the rate at which such faults occur within the equipment population on average can be predicted with accuracy. Manufacturers will often accept random faults as a risk if the chances are virtually negligible.
A fault can happen in virtually any object or appliance, most common with electronics and machinery.
For example, an Xbox 360 console will deteriorate over time due to dust buildup in the fans. This will cause the Xbox to overheat, cause an error, and shut the console down.
Systematic fault
Systematic faults are often a result of an error in the specification of the equipment and therefore affect all examples of that type. Such faults can remain undetected for years, until conditions arise that bring about the failure. Given the same circumstances, each and every example of the equipment would fail identically at that time.
Failures in hardware can be caused by random faults or systematic faults, but failures in software are always systematic.
See also
Product defect
Reliability engineering
|
https://en.wikipedia.org/wiki/Against%20DRM%20license
|
Against DRM 2.0 is a free copyleft license for artworks. It is the first free content license that contains a clause about related rights and a clause against digital rights management (DRM).
The first clause authorizes the licensee to exercise related rights, while the second clause prevents the use of DRM. If the licensor uses DRM, the license is not applicable to the work; if the licensee uses DRM, the license is automatically void.
According to Internet Archive, the first version of the Against DRM 2.0 license was published in 2006.
Notes
External links
Against DRM license version 2.0 on Internet Archive, archived on March 27th, 2017
Free Creations website on Internet Archive, archived on March 27th, 2017
The Readers' Bill of Rights for Digital Books
Digital rights management
Free content licenses
Business of visual arts
|
https://en.wikipedia.org/wiki/Synchronization%20model
|
In configuration management (CM), one has to control (among other things) changes made to software and documentation. This is called revision control, which manages multiple versions of the same unit of information. Although revision control is important to CM, it is not equal to it.
Synchronization Models, also known as Configuration Management Models (Feiler, 1991), describe methods to enable revision control by allowing simultaneous, concurrent changes to individual files.
Synchronization models
Feiler (1991) reports on four different synchronization models, briefly described below.
Check-out/check-in
In the check-out/check-in model, files are stored individually in a repository from which they are checked out whenever the files are accessed, and checked in when they have changed. This repository can store multiple versions of the files. Because these files can be documentation or source code, but can also be a collection of files, the term Configuration item (CI) will be used from now on. The basic mechanism used to prevent conflicts by simultaneous modifications is that of locking.
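A toy sketch of this model with pessimistic locking (the class and method names are illustrative, not from any real CM system):

class Repository:
    def __init__(self):
        self.versions = {}   # CI name -> list of stored versions
        self.locks = {}      # CI name -> user currently holding the lock

    def check_out(self, ci, user):
        # Locking prevents conflicts from simultaneous modifications.
        if self.locks.get(ci) is not None:
            raise RuntimeError(f"{ci} is locked by {self.locks[ci]}")
        self.locks[ci] = user
        history = self.versions.get(ci, [])
        return history[-1] if history else None   # latest version, if any

    def check_in(self, ci, user, content):
        if self.locks.get(ci) != user:
            raise RuntimeError(f"{user} does not hold the lock on {ci}")
        self.versions.setdefault(ci, []).append(content)  # store a new version
        del self.locks[ci]                                # release the lock

repo = Repository()
repo.check_out("design.doc", "alice")
repo.check_in("design.doc", "alice", "first draft")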
Composition
The composition model is an extension of the check-out/check-in model. This model allows developers to think in configurations instead of individual files. Although the complete check-out/check-in model is represented in the composition model, it enables the use of different strategies for updating through the use of improved support for the management of configurations. A configuration is defined as being built up from a system model and version selection rules. The system model determines which files are used, while the version selection rules determine which version of the files (e.g. the latest versions or those of a certain development state).
Long transactions
The long transactions model takes a broader approach by assuming that a system is built up out of logical changes. Its focus is on the coordination and integration of these changes. Basically, it uses
|
https://en.wikipedia.org/wiki/183%20%28number%29
|
183 (one hundred [and] eighty-three) is the natural number following 182 and preceding 184.
In mathematics
183 is a perfect totient number, a number that is equal to the sum of its iterated totients.
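This is easy to verify numerically; the iterated totients of 183 are 120, 32, 16, 8, 4, 2, 1, which sum back to 183:

def totient(n):
    # Euler's totient via trial-division factorization.
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def iterated_totient_sum(n):
    total = 0
    while n > 1:
        n = totient(n)
        total += n
    return total

assert iterated_totient_sum(183) == 183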
Because $183 = 13^2 + 13 + 1$, it is the number of points in a projective plane over the finite field $\mathbb{Z}_{13}$. 183 is the fourth element of a divisibility sequence $1, 3, 13, 183, \ldots$ in which the $n$th number can be computed as
$$a(n) = \left\lfloor c^{2^n} \right\rfloor$$
for a transcendental number $c \approx 1.38509$. This sequence counts the number of trees of height at most $n$ in which each node can have at most two children.
There are 183 different semiorders on four labeled elements.
See also
The year AD 183 or 183 BC
List of highways numbered 183
References
Integers
183
|
https://en.wikipedia.org/wiki/Statistical%20field%20theory
|
In theoretical physics, statistical field theory (SFT) is a theoretical framework that describes phase transitions. It does not denote a single theory but encompasses many models, including for magnetism, superconductivity, superfluidity, topological phase transition, wetting as well as non-equilibrium phase transitions. A SFT is any model in statistical mechanics where the degrees of freedom comprise a field or fields. In other words, the microstates of the system are expressed through field configurations. It is closely related to quantum field theory, which describes the quantum mechanics of fields, and shares with it many techniques, such as the path integral formulation and renormalization.
If the system involves polymers, it is also known as polymer field theory.
In fact, by performing a Wick rotation from Minkowski space to Euclidean space, many results of statistical field theory can be applied directly to its quantum equivalent. The correlation functions of a statistical field theory are called Schwinger functions, and their properties are described by the Osterwalder–Schrader axioms.
Statistical field theories are widely used to describe systems in polymer physics or biophysics, such as polymer films, nanostructured block copolymers or polyelectrolytes.
Notes
References
External links
Problems in Statistical Field Theory
Particle and Polymer Field Theory Group
Applied mathematics
Mathematical physics
Quantum field theory
|
https://en.wikipedia.org/wiki/TIM-011
|
TIM-011 is an educational personal computer (a school microcomputer) developed by the Mihajlo Pupin Institute of Serbia in 1987. There were about 1,200 TIM-011 computers in Serbian schools from 1987 through the 1990s.
It was based on CP/M and the Hitachi HD64180, an enhanced Z80A-compatible CPU with an MMU, and came with 256 KB of RAM standard, 3.5" floppy drives, and an integrated 512 × 256 green-screen monitor with 4 levels of intensity.
Reference literature
Dragoljub Milićević, Dušan Hristović (Ed): "Računari TIM" (TIM Computers), Naučna knjiga, Belgrade 1990.
D.B.Vujaklija, N.Markovic (Ed): "50 Years of Computing in Serbia (50 godina računarstva u Srbiji- Hronika digitalnih decenija)", DIS, IMP and PC-Press, Belgrade 2011.
References
Z80-based home computers
Personal computers
Microcomputers
Mihajlo Pupin Institute
|
https://en.wikipedia.org/wiki/Soil%20biology
|
Soil biology is the study of microbial and faunal activity and ecology in soil.
Soil life, soil biota, soil fauna, or edaphon is a collective term that encompasses all organisms that spend a significant portion of their life cycle within a soil profile, or at the soil-litter interface.
These organisms include earthworms, nematodes, protozoa, fungi, bacteria, different arthropods, as well as some reptiles (such as snakes), and species of burrowing mammals like gophers, moles and prairie dogs. Soil biology plays a vital role in determining many soil characteristics. The decomposition of organic matter by soil organisms has an immense influence on soil fertility, plant growth, soil structure, and carbon storage. As a relatively new science, much remains unknown about soil biology and its effect on soil ecosystems.
Overview
The soil is home to a large proportion of the world's biodiversity. The links between soil organisms and soil functions are complex. The interconnectedness and complexity of this soil ‘food web’ means any appraisal of soil function must necessarily take into account interactions with the living communities that exist within the soil. We know that soil organisms break down organic matter, making nutrients available for uptake by plants and other organisms. The nutrients stored in the bodies of soil organisms prevent nutrient loss by leaching. Microbial exudates act to maintain soil structure, and earthworms are important in bioturbation. However, we find that we don't understand critical aspects about how these populations function and interact. The discovery of glomalin in 1995 indicates that we lack the knowledge to correctly answer some of the most basic questions about the biogeochemical cycle in soils. There is much work ahead to gain a better understanding of the ecological role of soil biological components in the biosphere.
In balanced soil, plants grow in an active and steady environment. The mineral content of the soil and its heartiful
|
https://en.wikipedia.org/wiki/European%20Home%20Systems%20Protocol
|
European Home Systems (EHS) Protocol was a communication protocol aimed at home appliances control and communication using power line communication (PLC), developed by the European Home Systems Association (EHSA).
After merging with two other protocols, it became part of the KNX standard, which complies with the European Committee for Electrotechnical Standardization (CENELEC) norm EN 50090 and has a chance to become the basis for the first open standard for home and building control.
See also
Building automation
Home automation
External links
Home Automation with EHS: Cheap But Slow - Nikkei Electronics Asia
www.cenelec.eu - European Committee for Electrotechnical Standardization
www.konnex.org - association aimed at development of home and building control systems.
Home automation
Network protocols
|
https://en.wikipedia.org/wiki/Kingston%20Technology
|
Kingston Technology Corporation is an American multinational computer technology corporation that develops, manufactures, sells and supports flash memory products, other computer-related memory products, as well as the HyperX gaming division (now owned by HP). Headquartered in Fountain Valley, California, United States, Kingston Technology employs more than 3,000 employees worldwide as of Q1 2016. The company has manufacturing and logistics facilities in the United States, United Kingdom, Ireland, Taiwan, and China.
It is the largest independent producer of DRAM memory modules, owning approximately 68% of the third-party worldwide DRAM module market share in 2017, according to DRAMeXchange. In 2018 the company generated $7.5 billion in revenue and made #53 on the Forbes Lists of "America's Largest Private Companies 2019." Kingston serves an international network of distributors, resellers, retailers and OEM customers on six continents. The company also provides contract manufacturing and supply chain management services for semiconductor manufacturers and system OEMs.
History
Kingston Technology was founded on October 17, 1987, in response to a severe shortage of 1 Mbit surface-mount memory chips: Chinese immigrant John Tu designed a new single in-line memory module (SIMM) that used readily available, older-technology through-hole components. In 1990 the company branched out into its first non-memory product line, processor upgrades. By 1992, the firm was ranked #1 by Inc. as the fastest-growing privately held company in America. The company expanded into networking and storage product lines, and introduced DataTraveler and DataPak portable products. In September 1994, Kingston became ISO 9000 certified on its first assessment attempt.
In 1995, Kingston opened a branch office in Munich, Germany to provide technical support and marketing capabilities for its European distributors and customers.
In October 1995, the company joined the "Billion-Dollar Club". After
|
https://en.wikipedia.org/wiki/The%20Book%20of%20Squares
|
The Book of Squares (Liber Quadratorum in the original Latin) is a book on algebra by Leonardo Fibonacci, published in 1225. It was dedicated to Frederick II, Holy Roman Emperor.
The Liber quadratorum has been passed down through a single 15th-century manuscript, the so-called ms. E 75 Sup. of the Biblioteca Ambrosiana (Milan, Italy), ff. 19r-39v. During the 19th century, the work was published for the first time in a printed edition by Baldassarre Boncompagni Ludovisi, prince of Piombino.
Appearing in the book is Fibonacci's identity, establishing that the set of all sums of two squares is closed under multiplication. The book anticipated the works of later mathematicians such as Fermat and Euler. The book examines several topics in number theory, among them an inductive method for finding Pythagorean triples based on the sequence of odd integers, the fact that the sum of the first $n$ odd integers is $n^2$, and the solution to the congruum problem.
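In modern notation, Fibonacci's identity states
$$(a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2 = (ac+bd)^2 + (ad-bc)^2,$$
exhibiting the product of two sums of two squares as itself a sum of two squares.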
Notes
Further reading
B. Boncompagni Ludovisi, Opuscoli di Leonardo Pisano secondo un codice della Biblioteca Ambrosiana di Milano contrassegnato E.75. Parte Superiore, in Id., Scritti di Leonardo Pisano matematico del secolo decimoterzo, vol. II, Roma 1862, pp. 253–283
P. Ver Eecke, Léonard de Pise. Le livre des nombres carrés. Traduit pour la première fois du Latin Médiéval en Français, Paris, Blanchard-Desclée - Bruges 1952.
G. Arrighi, La fortuna di Leonardo Pisano alla corte di Federico II, in Dante e la cultura sveva. Atti del Convegno di Studi, Melfi, 2-5 novembre 1969, Firenze 1970, pp. 17–31.
E. Picutti, Il Libro dei quadrati di Leonardo Pisano e i problemi di analisi indeterminata nel Codice Palatino 557 della Biblioteca Nazionale di Firenze, in «Physis. Rivista Internazionale di Storia della Scienza» XXI, 1979, pp. 195–339.
L.E. Sigler, Leonardo Pisano Fibonacci, the book of squares. An annotated translation into modern English, Boston 1987.
M. Moyon, Algèbre & Practica geometriæ en Occident médiéval latin
|
https://en.wikipedia.org/wiki/Mikl%C3%B3s%20Schweitzer%20Competition
|
The Miklós Schweitzer Competition (Schweitzer Miklós Matematikai Emlékverseny in Hungarian) is an annual Hungarian mathematics competition for university undergraduates, established in 1949.
It is named after Miklós Schweitzer (1 February 1923 – 28 January 1945), a young Hungarian mathematician who died during the Siege of Budapest in the Second World War.
The Schweitzer contest is uniquely high-level among mathematics competitions. The problems, written by prominent Hungarian mathematicians, are challenging and require in-depth knowledge of the fields represented. The competition is open-book and competitors are allowed ten days to come up with solutions.
The problems on the competition can be classified roughly in the following categories:
1. Algebra
2. Combinatorics
3. Theory of Functions
4. Geometry
5. Measure Theory
6. Number Theory
7. Operators
8. Probability Theory
9. Sequences and Series
10. Topology
11. Set Theory
Recently a similar competition has been started in France.
References
Contests in higher mathematics (Hungary, 1949–1961). In memoriam Miklós Schweitzer. (G. Szasz, L. Geher, I. Kovacs, L. Pinter, eds), Akadémiai Kiadó, Budapest, 1968, 260 pp.
Miklós Schweitzer Competition Problems in recent years
Problems of the Miklós Schweitzer Memorial Competition at http://artofproblemsolving.com/
Mathematics competitions
Recurring events established in 1949
Student events
|
https://en.wikipedia.org/wiki/High%20dynamic%20range
|
High dynamic range (HDR), also known as wide dynamic range, extended dynamic range, or expanded dynamic range, is a dynamic range higher than usual.
The term is often used in discussing the dynamic range of various signals such as images, videos, audio or radio. It may apply to the means of recording, processing, and reproducing such signals including analog and digitized signals.
The term is also the name of some of the technologies or techniques for achieving high dynamic range in images, videos, or audio.
Imaging
In this context, the term high dynamic range means there is a lot of variation in light levels within a scene or an image. The dynamic range refers to the range of luminosity between the brightest area and the darkest area of that scene or image.
High-dynamic-range imaging (HDRI) refers to the set of imaging technologies and techniques that increase the dynamic range of images or videos. It covers the acquisition, creation, storage, distribution and display of images and videos.
Modern movies have often been filmed with cameras featuring a higher dynamic range, and legacy movies can be converted even if manual intervention would be needed for some frames (as when black-and-white films are converted to color). Also, special effects, especially those that mix real and synthetic footage, require both HDR shooting and rendering. HDR video is also needed in applications that demand high accuracy for capturing temporal aspects of changes in the scene. This is important in monitoring of some industrial processes such as welding, in predictive driver assistance systems in automotive industry, in surveillance video systems, and other applications.
Capture
In photography and videography, a technique commonly named high dynamic range (HDR) increases the dynamic range of captured photos and videos beyond the native capability of the camera. It consists of capturing multiple frames of the same scene but with different exposures and then combining them into one
|
https://en.wikipedia.org/wiki/Valve%20audio%20amplifier
|
A valve audio amplifier (UK) or vacuum tube audio amplifier (US) is a valve amplifier used for sound reinforcement, sound recording and reproduction.
Until the invention of solid state devices such as the transistor, all electronic amplification was produced by valve (tube) amplifiers. While solid-state devices prevail in most audio amplifiers today, valve audio amplifiers are still used where their audible characteristics are considered pleasing, for example in music performance or music reproduction.
Instrument and vocal amplification
Valve amplifiers for guitars (and to a lesser degree vocals and other applications) have different purposes from those of hi-fi amplifiers. The purpose is not necessarily to reproduce sound as accurately as possible, but rather to fulfill the musician's concept of what the sound should be. For example, distortion is almost universally considered undesirable in hi-fi amplifiers but may be considered a desirable characteristic in performance.
Small signal circuits are often deliberately designed to have very high gain, driving the signal far outside the linear range of the tube circuit, to deliberately generate large amounts of harmonic distortion. The distortion and overdrive characteristics of valves are quite different from transistors (not least the amount of voltage headroom available in a typical circuit) and this results in a distinctive sound. Amplifiers for such performance applications typically retain tone and filter circuits that have largely disappeared from modern hi-fi products. Amplifiers for guitars in particular may also include a number of "effects" functions.
Origins of electric guitar amplification
The electric guitar originates from Rickenbacker in the 1930s, but its modern form was popularised by Fender and Gibson (notably the Fender Telecaster (1951) and Stratocaster (1954), and the Gibson Les Paul (1952)) during the 1950s. The earliest guitar amplifiers were probably audio amplifiers made for other purposes and pres
|
https://en.wikipedia.org/wiki/TNT%20equivalent
|
TNT equivalent is a convention for expressing energy, typically used to describe the energy released in an explosion. The ton of TNT is a unit of energy defined by convention to be 4.184 gigajoules (one gigacalorie), which is the approximate energy released in the detonation of a metric ton (1,000 kilograms) of TNT. In other words, for each gram of TNT exploded, one kilocalorie (or 4,184 joules) of energy is released.
This convention intends to compare the destructiveness of an event with that of conventional explosive materials, of which TNT is a typical example, although other conventional explosives such as dynamite contain more energy.
Kiloton and megaton
The "kiloton (of TNT equivalent)" is a unit of energy equal to 4.184 terajoules ().
The "megaton (of TNT equivalent)" is a unit of energy equal to 4.184 petajoules ().
The kiloton and megaton of TNT equivalent have traditionally been used to describe the energy output, and hence the destructive power, of a nuclear weapon. The TNT equivalent appears in various nuclear weapon control treaties, and has been used to characterize the energy released in asteroid impacts.
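As a worked conversion using these figures, a nuclear weapon with a nominal yield of 15 kilotons releases about
$$E = 15 \times 4.184\,\text{TJ} \approx 6.3 \times 10^{13}\,\text{J}.$$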
Historical derivation of the value
Alternative values for TNT equivalency can be calculated according to which property is being compared and when in the two detonation processes the values are measured.
Where for example the comparison is by energy yield, an explosive's energy is normally expressed for chemical purposes as the thermodynamic work produced by its detonation. For TNT this has been accurately measured as 4,686 J/g from a large sample of air blast experiments, and theoretically calculated to be 4,853 J/g.
However even on this basis, comparing the actual energy yields of a large nuclear device and an explosion of TNT can be slightly inaccurate. Small TNT explosions, especially in the open, don't tend to burn the carbon-particle and hydrocarbon products of the explosion. Gas-expansion and pressure-change effects tend to "freeze" the burn rapidly. A large open explosion of TNT may mai
|
https://en.wikipedia.org/wiki/Database%20security
|
Database security concerns the use of a broad range of information security controls to protect databases (potentially including the data, the database applications or stored functions, the database systems, the database servers and the associated network links) against compromises of their confidentiality, integrity and availability. It involves various types or categories of controls, such as technical, procedural/administrative and physical.
Security risks to database systems include, for example:
Unauthorized or unintended activity or misuse by authorized database users, database administrators, or network/systems managers, or by unauthorized users or hackers (e.g. inappropriate access to sensitive data, metadata or functions within databases, or inappropriate changes to the database programs, structures or security configurations);
Malware infections causing incidents such as unauthorized access, leakage or disclosure of personal or proprietary data, deletion of or damage to the data or programs, interruption or denial of authorized access to the database, attacks on other systems and the unanticipated failure of database services;
Overloads, performance constraints and capacity issues resulting in the inability of authorized users to use databases as intended;
Physical damage to database servers caused by computer room fires or floods, overheating, lightning, accidental liquid spills, static discharge, electronic breakdowns/equipment failures and obsolescence;
Design flaws and programming bugs in databases and the associated programs and systems, creating various security vulnerabilities (e.g. unauthorized privilege escalation), data loss/corruption, performance degradation etc.;
Data corruption and/or loss caused by the entry of invalid data or commands, mistakes in database or system administration processes, sabotage/criminal damage etc.
Ross J. Anderson has often said that by their nature large databases will never be free of abuse by breaches of
|
https://en.wikipedia.org/wiki/Limit%20switch
|
In electrical engineering, a limit switch is a switch operated by the motion of a machine part or the presence of an object. A limit switch can be used for controlling machinery as part of a control system, as a safety interlock, or as a counter enumerating objects passing a point.
Limit switches are used in a variety of applications and environments because of their ruggedness, ease of installation, and reliability of operation. They can determine the presence, passing, positioning, and end of travel of an object. They were first used to define the limit of travel of an object, hence the name "limit switch".
Standardized limit switches are industrial control components manufactured with a variety of operator types, including lever, roller plunger, and whisker type. Limit switches may be directly mechanically operated by the motion of the operating lever. A reed switch may be used to indicate proximity of a magnet mounted on some moving part. Proximity switches operate by the disturbance of an electromagnetic field, by capacitance, or by sensing a magnetic field.
Rarely, a final operating device such as a lamp or solenoid valve is directly controlled by the contacts of an industrial limit switch, but more typically the limit switch is wired through a control relay, a motor contactor control circuit, or as an input to a programmable logic controller.
Examples
Miniature snap-action switches are components of devices like photocopiers, computer printers, convertible tops or microwave ovens to ensure internal components are in the correct position for operation and to prevent operation when access doors are opened. A set of adjustable limit switches installed on a garage door opener shut off the motor when the door has reached the fully raised or fully lowered position. A numerical control machine such as a lathe has limit switches to identify maximum limits for machine parts or to provide a known reference point for incremental motions.
References
Safety switche
|
https://en.wikipedia.org/wiki/And-inverter%20graph
|
An and-inverter graph (AIG) is a directed, acyclic graph that represents a structural implementation of the logical functionality of a circuit or network. An AIG consists of two-input nodes representing logical conjunction, terminal nodes labeled with variable names, and edges optionally containing markers indicating logical negation. This representation of a logic function is rarely structurally efficient for large circuits, but is an efficient representation for manipulation of boolean functions. Typically, the abstract graph is represented as a data structure in software.
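A minimal sketch of such a data structure (with illustrative names, not a real AIG library), using two-input AND nodes, named terminal nodes, and negation markers on edges:

from dataclasses import dataclass

class Node:
    pass

@dataclass(frozen=True)
class Lit:                     # an edge: a node plus an optional negation marker
    node: Node
    negated: bool = False
    def __invert__(self):
        return Lit(self.node, not self.negated)

@dataclass(frozen=True)
class Var(Node):               # terminal node labeled with a variable name
    name: str

@dataclass(frozen=True)
class And(Node):               # two-input node representing logical conjunction
    a: Lit
    b: Lit

def evaluate(lit: Lit, env: dict) -> bool:
    n = lit.node
    v = env[n.name] if isinstance(n, Var) else evaluate(n.a, env) and evaluate(n.b, env)
    return not v if lit.negated else v

# XOR built from ANDs and inverters: x ^ y = ~(~(x & ~y) & ~(~x & y))
x, y = Lit(Var("x")), Lit(Var("y"))
xor = ~Lit(And(~Lit(And(x, ~y)), ~Lit(And(~x, y))))
assert evaluate(xor, {"x": True, "y": False}) is True
assert evaluate(xor, {"x": True, "y": True}) is False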
Conversion from the network of logic gates to AIGs is fast and scalable. It only requires that every gate be expressed in terms of AND gates and inverters. This conversion does not lead to unpredictable increase in memory use and runtime. This makes the AIG an efficient representation in comparison with either the binary decision diagram (BDD) or the "sum-of-product" (ΣoΠ) form, that is, the canonical form in Boolean algebra known as the disjunctive normal form (DNF). The BDD and DNF may also be viewed as circuits, but they involve formal constraints that deprive them of scalability. For example, ΣoΠs are circuits with at most two levels while BDDs are canonical, that is, they require that input variables be evaluated in the same order on all paths.
Circuits composed of simple gates, including AIGs, are an "ancient" research topic. The interest in AIGs started with Alan Turing's seminal 1948 paper on neural networks, in which he described a randomized trainable network of NAND gates. Interest continued through the late 1950s and into the 1970s, when various local transformations were developed. These transformations were implemented in several
logic synthesis and verification systems, such as Darringer et al. and Smith et al., which reduce circuits to improve area and delay during synthesis, or to speed up formal equivalence checking. Several important techniques were discov
|
https://en.wikipedia.org/wiki/Easton%27s%20theorem
|
In set theory, Easton's theorem is a result on the possible cardinal numbers of powersets. Easton (extending a result of Robert M. Solovay) showed via forcing that the only constraints on permissible values for $2^\kappa$ when $\kappa$ is a regular cardinal are
$$\kappa < \operatorname{cf}(2^\kappa)$$
(where $\operatorname{cf}(\alpha)$ is the cofinality of $\alpha$) and
$$\kappa < \lambda \implies 2^\kappa \le 2^\lambda.$$
Statement
If G is a class function whose domain consists of ordinals and whose range consists of ordinals such that
G is non-decreasing,
the cofinality of $\aleph_{G(\alpha)}$ is greater than $\aleph_\alpha$ for each $\alpha$ in the domain of G, and
$\aleph_\alpha$ is regular for each $\alpha$ in the domain of G,
then there is a model of ZFC such that
$$2^{\aleph_\alpha} = \aleph_{G(\alpha)}$$
for each $\alpha$ in the domain of G.
The proof of Easton's theorem uses forcing with a proper class of forcing conditions over a model satisfying the generalized continuum hypothesis.
The first two conditions in the theorem are necessary. Condition 1 is a well known property of cardinality, while condition 2 follows from König's theorem.
In Easton's model the powersets of singular cardinals have the smallest possible cardinality compatible with the conditions that 2κ has cofinality greater than κ and is a non-decreasing function of κ.
No extension to singular cardinals
proved that a singular cardinal of uncountable cofinality cannot be the smallest cardinal for which the generalized continuum hypothesis fails. This shows that Easton's theorem cannot be extended to the class of all cardinals. The program of PCF theory gives results on the possible values of $2^\lambda$ for singular cardinals $\lambda$. PCF theory shows that the values of the continuum function on singular cardinals are strongly influenced by the values on smaller cardinals, whereas Easton's theorem shows that the values of the continuum function on regular cardinals are only weakly influenced by the values on smaller cardinals.
See also
Singular cardinal hypothesis
Aleph number
Beth number
References
Set theory
Theorems in the foundations of mathematics
Cardinal numbers
Forcing (mathematics)
Independence results
|
https://en.wikipedia.org/wiki/RC%20algorithm
|
The RC algorithms are a set of symmetric-key encryption algorithms invented by Ron Rivest. The "RC" may stand for either Rivest's cipher or, more informally, Ron's code. Despite the similarity in their names, the algorithms are for the most part unrelated. There have been six RC algorithms so far:
RC1 was never published.
RC2 was a 64-bit block cipher developed in 1987.
RC3 was broken before ever being used.
RC4 is a stream cipher; a sketch of its keystream generator appears after this list.
RC5 is a 32/64/128-bit block cipher developed in 1994.
RC6, a 128-bit block cipher based heavily on RC5, was an AES finalist developed in 1997.
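As an illustration, the well-known RC4 keystream generator fits in a few lines (shown for historical interest; RC4 is considered broken and should not be used in new designs):

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA).
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Published test vector: key "Key" yields keystream starting EB 9F 77 81.
assert rc4_keystream(b"Key", 4).hex() == "eb9f7781"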
References
Cryptographic algorithms
|
https://en.wikipedia.org/wiki/Iraq%E2%80%93Saudi%20Arabia%20border
|
The Iraq–Saudi Arabia border is 811 km (504 mi) in length and runs from the tripoint with Jordan in the west to the tripoint with Kuwait in the east.
Description
The border starts on the west at the tripoint with Jordan, and consists of six straight lines broadly orientated to the south-east, eventually reaching the tripoint with Kuwait on the Wadi al-Batin.
History
Historically there was no clearly defined boundary in this part of the Arabian peninsula; at the start of the 20th century the Ottoman Empire controlled what is now Iraq, with areas further south consisting of loosely organised Arab groupings, occasionally forming emirates, most prominent of which was the Emirate of Nejd and Hasa ruled by the al-Saud family.
During the First World War an Arab Revolt, supported by Britain, succeeded in removing the Ottomans from most of the Middle East. As a result of the secret 1916 Anglo-French Sykes-Picot Agreement Britain gained control of the Ottoman Vilayets of Mosul, Baghdad and Basra, which it organised into the mandate of Iraq in 1920. In the meantime Ibn Saud had managed to expand his domains considerably, eventually proclaiming the Kingdom of Saudi Arabia in 1932.
In December 1922 Percy Cox, British High Commissioner in Iraq, met with ibn Saud and signed the Uqair Protocol, which finalised Saudi Arabia's borders with both Kuwait and Iraq. The border thus created differed slightly from the modern frontier, with a Saudi 'kink' in the middle-south section. It also created a Saudi–Iraqi neutral zone, immediately west of Kuwait. This border was confirmed by the Bahra Agreement in November 1925.
The Saudi–Iraqi neutral zone was split in 1975 and a final border treaty signed in 1981, which also appears to have 'ironed out' the Saudi kink. The details of this treaty were not revealed until 1991 when Saudi Arabia deposited the agreements at the United Nations following the Gulf War. The Gulf War seriously strained relations between the two countries; Iraq fired scu
|
https://en.wikipedia.org/wiki/Bird%E2%80%93Meertens%20formalism
|
The Bird–Meertens formalism (BMF) is a calculus for deriving programs from program specifications (in a functional programming setting) by a process of equational reasoning. It was devised by Richard Bird and Lambert Meertens as part of their work within IFIP Working Group 2.1.
It is sometimes referred to in publications as BMF, as a nod to Backus–Naur form. Facetiously it is also referred to as Squiggol, as a nod to ALGOL, which was also in the remit of WG 2.1, and because of the "squiggly" symbols it uses. A less-used variant name, but actually the first one suggested, is SQUIGOL.
Basic examples and notations
Map is a well-known second-order function that applies a given function to every element of a list; in BMF, it is written ∗:
f∗ [x₁, x₂, ..., xₙ] = [f x₁, f x₂, ..., f xₙ]
Likewise, reduce is a function that collapses a list into a single value by repeated application of a binary operator. It is written / in BMF:
⊕/ [x₁, x₂, ..., xₙ] = x₁ ⊕ x₂ ⊕ ... ⊕ xₙ
Taking ⊕ as a suitable binary operator with neutral element e, we have ⊕/ [] = e.
Using those two operators and the primitives + (as the usual addition) and ++ (for list concatenation), we can easily express the sum of all elements of a list and the flatten function as sum = +/ and flatten = ++/, in point-free style. We have:
sum [x₁, x₂, ..., xₙ] = x₁ + x₂ + ... + xₙ
flatten [l₁, l₂, ..., lₙ] = l₁ ++ l₂ ++ ... ++ lₙ
Similarly, writing ∘ for functional composition and ∧ for conjunction, it is easy to write a function testing that all elements of a list satisfy a predicate p, simply as all p = (∧/) ∘ (p∗):
all p [x₁, x₂, ..., xₙ] = p x₁ ∧ p x₂ ∧ ... ∧ p xₙ
Bird (1989) transforms inefficient easy-to-understand expressions ("specifications") into efficient involved expressions ("programs") by algebraic manipulation. For example, the specification max/ ∘ (+/)∗ ∘ segs (where segs yields all contiguous segments of a list) is an almost literal translation of the maximum segment sum problem, but running that functional program on a list of size n will take time O(n³) in general. From this, Bird computes an equivalent functional program that runs in time O(n), and is in fact a functional version of Kadane's algorithm.
The derivation is shown in the picture, with computational complexities given in blue, and law applications indicated in red.
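As an informal rendering of the end result (Python rather than BMF notation; the convention here is that the empty segment has sum 0):

```python
def max_segment_sum(xs):
    """Maximum sum over all (possibly empty) contiguous segments of xs."""
    best = cur = 0
    for x in xs:
        cur = max(cur + x, 0)   # best sum of a segment ending at this element
        best = max(best, cur)   # best sum of any segment seen so far
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```

Each element is inspected exactly once, which is where the O(n) running time of the derived program comes from.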
Example instances of the laws c
|
https://en.wikipedia.org/wiki/Narus%20Inc.
|
Narus Inc. was a software company and vendor of big data analytics for cybersecurity.
History
In 1997, Ori Cohen, Vice President of Business and Technology Development for VDONet, founded Narus with Stas Khirman in Israel. Presently, they are employed with Deutsche Telekom AG and are not members of Narus' executive team. In 2010, Narus became a subsidiary of Boeing, located in Sunnyvale, California. In 2015, Narus was sold to Symantec.
Here are some of the key events in Narus's history:
1997: Narus is founded.
2010: Narus becomes a subsidiary of Boeing.
2015: Narus is sold to Symantec; Boeing's Phantom Works Intelligence & Analytics business is formed.
Narus was a pioneer in the field of big data analytics for cybersecurity. The company's products helped to protect organizations from a variety of cyber threats. Narus's products are no longer available, but Boeing's Phantom Works Intelligence & Analytics business continues to offer cybersecurity solutions.
Management
In 2004, Narus employed former Deputy Director of the National Security Agency William Crowell as a director.
Narus software
Narus software primarily captures various computer network traffic in real-time and analyzes results.
Before 9/11 Narus built carrier-grade tools to analyze IP network traffic for billing purposes, to prevent what Narus called "revenue leakage". Post-9/11 Narus added more "semantic monitoring abilities" for surveillance.
Mobile
Narus provided Telecom Egypt with deep packet inspection equipment, a content-filtering technology that allows network managers to inspect, track and target content from users of the Internet and mobile phones, as it passes through routers. The national telecommunications authorities of both Pakistan and Saudi Arabia are global Narus customers.
Controversies
AT&T wiretapping room
Narus s
|
https://en.wikipedia.org/wiki/Low-energy%20electron%20diffraction
|
Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV) and observation of diffracted electrons as spots on a fluorescent screen.
LEED may be used in one of two ways:
Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I–V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface at hand.
Historical perspective
An electron-diffraction experiment similar to modern LEED was the first to observe the wavelike properties of electrons, but LEED was established as a ubiquitous tool in surface science only with the advances in vacuum generation and electron detection techniques.
Davisson and Germer's discovery of electron diffraction
The theoretical possibility of the occurrence of electron diffraction first emerged in 1924, when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In his Nobel Prize-winning work, de Broglie postulated that the wavelength of a particle with linear momentum p is given by λ = h/p, where h is Planck's constant.
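As a quick numerical check (an illustrative sketch, not part of the article), electrons in the LEED energy range have de Broglie wavelengths comparable to interatomic distances, which is what makes surface diffraction possible:

```python
import math

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron rest mass, kg
EV = 1.602e-19   # joules per electron-volt

def electron_wavelength(energy_ev: float) -> float:
    """Non-relativistic de Broglie wavelength (in metres) of an electron."""
    p = math.sqrt(2 * M_E * energy_ev * EV)  # momentum from kinetic energy
    return H / p

for e in (30, 100, 200):
    print(f"{e:>3} eV -> {electron_wavelength(e) * 1e10:.2f} Å")
# 30 eV -> 2.24 Å, 100 eV -> 1.23 Å, 200 eV -> 0.87 Å
```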
The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927, when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays devel
|
https://en.wikipedia.org/wiki/Virtually%20Haken%20conjecture
|
In topology, an area of mathematics, the virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is virtually Haken. That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold.
After the proof of the geometrization conjecture by Perelman, the conjecture was only open for hyperbolic 3-manifolds.
The conjecture is usually attributed to Friedhelm Waldhausen in a paper from 1968, although he did not formally state it. This problem is formally stated as Problem 3.2 in Kirby's problem list.
A proof of the conjecture was announced on March 12, 2012 by Ian Agol in a seminar lecture he gave at the Institut Henri Poincaré. The proof appeared shortly thereafter in a preprint which was eventually published in Documenta Mathematica. The proof was obtained via a strategy developed in previous work of Daniel Wise and collaborators, relying on actions of the fundamental group on certain auxiliary spaces (CAT(0) cube complexes).
It used as an essential ingredient the freshly-obtained solution to the surface subgroup conjecture by Jeremy Kahn and Vladimir Markovic.
Other results which are directly used in Agol's proof include the Malnormal Special Quotient Theorem of Wise and a criterion of Nicolas Bergeron and Wise for the cubulation of groups.
In 2018 related results were obtained by Piotr Przytycki and Daniel Wise, who proved that mixed 3-manifolds are also virtually special; that is, they can be cubulated into a cube complex with a finite cover where all the hyperplanes are embedded, which by the previously mentioned work can be made virtually Haken.
See also
Virtually fibered conjecture
Surface subgroup conjecture
Ehrenpreis conjecture
Notes
References
External links
3-manifolds
Theorems in topology
Conjectures that have been proved
|
https://en.wikipedia.org/wiki/Shear%20pin
|
A shear pin is a mechanical detail designed to allow a specific outcome to occur once a predetermined force is applied. It can either function as a safeguard designed to break to protect other parts, or as a conditional operator that will not allow a mechanical device to operate until the correct force is applied.
As safeguards
In the role of a mechanical safeguard, a shear pin is a safety device designed to shear in the case of a mechanical overload, preventing other, more expensive or less-easily replaced parts from being damaged. As a mechanical sacrificial part, it is analogous to an electric fuse.
They are most commonly used in drive trains, such as a snow blower's auger or the propellers attached to marine engines.
Another use is in pushback bars used for large aircraft. In this device, shear pins are frequently used to connect the "head" of the towbar – the portion that attaches to the aircraft – to the main shaft of the towbar. In this way, the failure of the shear pin will physically separate the aircraft and the tractor. The design may be such that the shear pin will have several different causes of failure – towbar rotation about its long axis, sudden braking or acceleration, excessive steering force, etc. – all of which could otherwise be extremely damaging to the aircraft.
As conditional operators
In the role of a conditional operator, a shear pin is used to prevent a mechanical device from operating before the criteria for operation are met. A shear pin gives a distinct threshold for the force required for operation. It is very cheap and easy to produce, delivering high reliability and a predictable tolerance. Shear pins are almost maintenance-free and can remain ready for operation for years with little to no decrease in reliability. They are, however, only useful for a single operating cycle; after each operation they have to be replaced. A very simple example is the plastic or wire loop affixed to the handles of common fire extinguishers. Its pre
|
https://en.wikipedia.org/wiki/Page%20orientation
|
Page orientation is the way in which a rectangular page is oriented for normal viewing. The two most common types of orientation are portrait and landscape. The term "portrait orientation" comes from visual art terminology and describes the dimensions used to capture a person's face and upper body in a picture; in such images, the height of the display area is greater than the width. The term "landscape orientation" also reflects visual art terminology, where pictures with more width than height are needed to fully capture the horizon within an artist's view.
Besides describing the way documents can be viewed and edited, the concepts of "portrait" and "landscape" orientation can also be used to describe video and photography display options (where the concept of "aspect ratio" replaces that of "page orientation"). Many types of visual media use landscape mode, especially the 4:3 aspect ratio used for classic TV formatting, which is 4 units or pixels wide and 3 units tall, and the 16:9 aspect ratio for newer, widescreen media viewing.
Most paper documents use portrait orientation. By default, most computer and television displays use landscape orientation, while most mobile phones use portrait orientation (with some flexibility on modern smartphones to switch screen orientations according to user preference). Portrait mode is preferred for editing page layout work, in order to view the entire page of a screen at once without showing wasted space outside the borders of a page, and for script-writing, legal work (in drafting contracts etc.), and other applications where it is useful to see a maximum number of lines of text. It is also preferred for smartphone use, as a phone in portrait orientation can be operated easily with one hand. Landscape viewing, on the other hand, visually caters to the natural horizontal alignment of human eyes, and since landscape scenes are much wider than they are tall, it is useful for portraying wider visuals with
|
https://en.wikipedia.org/wiki/Tournament%20of%20the%20Gods
|
Tournament of the Gods is an H-game by Alicesoft.
Plot
Sid enters a tournament. Shortly after he is crowned champion, the fallen Angel Aquross infects him with a hideous disease that requires him to steal the life energy of Angels or be in constant pain. The only relief lies in a drug that kills the pain, but causes sexual urges that cannot be denied. Now, Sid must defeat the evil Aquross.
OVA
It was condensed into a single 35-minute episode and released in the US as a subtitled white-cassette VHS by Pink Pineapple studio.
Episodes
Theme songs
"Yume no Image" by Konami Yoshida
Cast
Reception
Mike Toole, comparing the OVA to Gor, did not find it very interesting. Chris Beveridge commented that the video "is a strange piece" and has "some good fun moments".
References
External links
English
Japanese
alicesoft
Imageepoch
pinkpineapple
1997 anime OVAs
Eroge
Fantasy anime and manga
Hentai anime and manga
OVAs based on video games
Pink Pineapple
NEC PC-9801 games
FM Towns games
Windows games
MSX2 games
Nintendo 3DS games
X68000 games
|
https://en.wikipedia.org/wiki/Read%E2%80%93write%20memory
|
Read–write memory, or RWM, is a type of computer memory that can be easily written to as well as read from using electrical signalling normally associated with running software, and without any other physical processes. The related storage type, random-access memory (RAM), means something different: it refers to memory that can access any memory location in a constant amount of time.
The term might also refer to memory locations having both read and write permissions. In modern computer systems using memory segmentation, each segment has a length and set of permissions associated with it.
Types
Read–write memory is composed of either volatile or non-volatile types of storage. Volatile memory is usually in the form of a microchip or other hardware that requires an external power source for data to persist. Non-volatile memory is considered static, or storage-type, memory: data written to it persists even in the absence of a power source. Read–write speeds are typically limited by the bandwidth of the interface, or by mechanical limitations such as rotation speed and arm-movement delay, for storage types such as cloud storage, hard disk drives, CD-RWs, DVD-RWs, SD cards, solid-state drives, SRAM, and DRAM, or other integrated circuitry.
History
In San Francisco in 1956, IBM became the first company to develop and sell a commercial hard disk drive (HDD). The drive was the Model 350 disk storage unit, which had 3.75 megabytes of data storage capacity and fifty 24-inch-diameter disks stacked on a spindle; it was sold to Zellerbach Paper.
See also
Read-mostly memory (RMM)
Random-access memory (RAM)
References
Footnotes
Computer memory
|
https://en.wikipedia.org/wiki/Boolean%20model%20of%20information%20retrieval
|
The (standard) Boolean model of information retrieval (BIR) is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one. It is used by many IR systems to this day. The BIR is based on Boolean logic and classical set theory in that both the documents to be searched and the user's query are conceived as sets of terms (a bag-of-words model). Retrieval is based on whether or not the documents contain the query terms.
Definitions
An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Let T = {t₁, t₂, ..., tₙ} be the set of all such index terms.
A document is any subset of T. Let D be the set of all documents.
A query is a Boolean expression Q in conjunctive normal form:
Q = (W₁ ∨ W₂ ∨ ...) ∧ ... ∧ (Wₖ ∨ Wₖ₊₁ ∨ ...)
where each Wᵢ is a literal, and a literal tᵢ is true for a document d exactly when tᵢ ∈ d (and ¬tᵢ is true when tᵢ ∉ d). (Equivalently, Q could be expressed in disjunctive normal form.)
We seek to find the set of documents that satisfy Q. This operation is called retrieval and consists of the following two steps:
1. For each clause Wⱼ in Q, find the set Sⱼ of documents that satisfy Wⱼ.
2. Then the set of documents that satisfy Q is given by the intersection of those sets: S = S₁ ∩ S₂ ∩ ...
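A minimal sketch of this retrieval procedure in Python (the document set and query encoding are hypothetical):

```python
# Documents as sets of index terms; a CNF query as a list of clauses,
# each clause a list of (term, negated) literals.
docs = {
    "d1": {"bayes", "principle", "probability"},
    "d2": {"bayesian", "decision", "theory", "probability"},
    "d3": {"bayesian", "epistemology", "probability"},
}

def clause_holds(terms, clause):
    # A disjunctive clause holds if any of its literals matches the document.
    return any((term in terms) != negated for term, negated in clause)

def retrieve(docs, query):
    # A document satisfies the query if it satisfies every clause (step 1),
    # i.e. it lies in the intersection of the clause sets (step 2).
    return {d for d, terms in docs.items()
            if all(clause_holds(terms, c) for c in query)}

# Query: probability AND (bayes OR bayesian) AND NOT decision
query = [[("probability", False)],
         [("bayes", False), ("bayesian", False)],
         [("decision", True)]]
print(retrieve(docs, query))  # {'d1', 'd3'}
```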
Example
Let the set of original (real) documents be, for example,
D = {d₁, d₂, d₃}
where
= "Bayes' principle: The principle that, in estimating a parameter, one should initially assume that each possible value has equal probability (a uniform prior distribution)."
= "Bayesian decision theory: A mathematical theory of decision-making which presumes utility and probability functions, and according to which the act to be chosen is the Bayes act, i.e. the one with highest subjective expected utility. If one had unlimited time and calculating power with which to make every decision, this procedure would be the best way to make any decision."
= "Bayesian epistemology: A philosophical theory which holds that the epistemic status of a proposition (i.e. how well proven or well established it is) is best measured by a probability and that the proper way to revise this probability is given
|
https://en.wikipedia.org/wiki/Leaf%20valve
|
A leaf valve, also known as a reed valve, is a type of check valve that only allows fluid to flow in a single direction. These valves use thin pieces of metal, fiberglass, or carbon fiber, known as reeds, leaves, or petals, to form a barrier between two chambers. When air or fuel passes through the reeds, the flap opens and allows the fluid to enter the chamber. The reeds close when the flow stops, preventing backflow.
Applications
Motorcycles
Leaf valves are commonly mounted in the intake tract of most 2-stroke motorcycle engines. When the piston moves up in the cylinder, the pressure drop in the crankcase opens the valve, and the air-fuel mixture is drawn from the carburetor through the intake port into the crankcase. When the piston moves down, the valve closes, and the mixture compressed in the crankcase is forced through the transfer ports into the cylinder. This motion of the leaves occurs autonomously, driven by the pressure difference between the crankcase and the intake tract, and helps to atomize the air/fuel mixture for better combustion and an increase in engine power. The leaf valve opens and closes with every revolution of the engine.
Pumps
Leaf valves can sometimes be seen in some reciprocating compressors. The valve opens to allow the fluid to flow into the pressurized chamber when the compressor is performing a compressive stroke. The valve closes automatically when the compressor retracts in order to maintain the high pressure in the pressurized chamber.
Patents
Patents exist in the United States which specify the mechanical use of leaf valves to control various types of fluids.
United States Patent 4930535 is a folding leaf valve which is described as a valve that, "folds upon itself to provide positive sealing of a pressurized chamber under a variety of pressure conditions. In particular, the valve stem folds to effectively close the lumen and provide a positive seal even at low differential pressure."
United States Patent 4795340 is a single leaf valve for a gas-fired boiler. It has a seat for a valve leaf, and a sing
|
https://en.wikipedia.org/wiki/ITIL%20security%20management
|
ITIL security management describes the structured fitting of security into an organization. ITIL security management is based on the ISO 27001 standard. "ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for profit organizations). ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties."
A basic concept of security management is information security. The primary goal of information security is to control access to information. The value of the information is what must be protected. These values include confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability.
The goal of security management comes in two parts:
Security requirements defined in service level agreements (SLA) and other external requirements that are specified in underpinning contracts, legislation and possible internal or external imposed policies.
Basic security that guarantees management continuity. This is necessary to achieve simplified service-level management for information security.
SLAs define security requirements, along with legislation (if applicable) and other contracts. These requirements can act as key performance indicators (KPIs) that can be used for process management and for interpreting the results of the security management process.
The security management process relates to other ITIL-processes. However, in this particular section the most obvious relations are the
|
https://en.wikipedia.org/wiki/Soil%20seed%20bank
|
The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings using soil samples from the bottom of a lake. The first scientific paper on the subject was published in 1882 and reported on the occurrence of seeds at different soil depths. Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology.
Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a dissimilar species to the trees cut down, was that seeds either spontaneously generated in the soil, or sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals.
Background
Many taxa have been classified according to the longevity of their seeds in the soil seed bank. Seeds of transient species remain viable in the soil seed bank only to the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity—often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (Lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. A species forming no soil seed bank at all (except the dry season between ripening and the first autumnal rains) is Agrostemma githago (Corncockle), which was formerly a widespread cereal weed.
Seed longevity
Longevity of seeds is very var
|
https://en.wikipedia.org/wiki/Reference%20model
|
A reference model—in systems, enterprise, and software engineering—is an abstract framework or domain-specific ontology consisting of an interlinked set of clearly defined concepts produced by an expert or body of experts to encourage clear communication. A reference model can represent the component parts of any consistent idea, from business functions to system components, as long as it represents a complete set. This frame of reference can then be used to communicate ideas clearly among members of the same community.
Reference models are often illustrated as a set of concepts with some indication of the relationships between the concepts.
Overview
According to OASIS (Organization for the Advancement of Structured Information Standards) a reference model is "an abstract framework for understanding significant relationships among the entities of some environment, and for the development of consistent standards or specifications supporting that environment. A reference model is based on a small number of unifying concepts and may be used as a basis for education and explaining standards to a non-specialist. A reference model is not directly tied to any standards, technologies or other concrete implementation details, but it does seek to provide a common semantics that can be used unambiguously across and between different implementations."
There are a number of concepts rolled up into that of a 'reference model.' Each of these concepts is important:
Abstract: a reference model is abstract. It provides information about environments of a certain kind. A reference model describes the type or kind of entities that may occur in such an environment, not the particular entities that actually do occur in a specific environment. For example, when describing the architecture of a particular house (which is a specific environment of a certain kind), an actual exterior wall may have dimensions and materials, but the concept of a wall (type of entity) is part of the
|
https://en.wikipedia.org/wiki/Classmates.com
|
Classmates.com is a social networking service. It was founded on November 17, 1995 by Randy Conrads as Classmates Online, Inc.
It originally sought to help users find class members and colleagues from kindergarten, primary school, high school, college, workplaces, and the U.S. military. In 2010, CEO Mark Goldston described the transition of the website "to increasingly focus on nostalgic content" such as "high school yearbooks, movie trailers, music tracks, and photographic images". To this end, and to appeal more to older users, the website name was changed to Memory Lane, which included a website redesign. This change was short-lived, however. Classmates dropped the Memory Lane brand in 2011.
Corporate information
United Online, Inc. (Nasdaq: UNTD) acquired Classmates Online in 2004 and owned and operated the company as part of its Classmates Media Corporation subsidiary until 2015.
Classmates Media operated online social networking and loyalty marketing services under the Classmates.com and MyPoints brands, respectively.
Classmates Media also operated the following international sites designed to enable users to connect with old friends:
StayFriends.de (Germany)
StayFriends.se (Sweden)
StayFriends.at (Austria)
Trombi.com (France)
In May 2016, the StayFriends sites were sold to Ströer.
In August 2015, Classmates was acquired from United Online by PeopleConnect Holdings, Inc., a portfolio company of H.I.G. Capital, for $30 million. Classmates is now operated as a division of PeopleConnect, which also owns Intelius.
Classmates Media Corporation's business model is based on user-generated content and revenue from paid subscriptions and advertising sales.
Users and ranking among other social networking sites
The only time Classmates appeared on Hitwise's top 10 list of social networking websites was June 2009, when it appeared tenth with 0.45% market share.
In early 2008, Nielsen Online had ranked Classmates as number three in unique monthly visitors (U.S.
|
https://en.wikipedia.org/wiki/Plastic%20hinge
|
In the structural engineering beam theory, the term "plastic hinge" is used to describe the deformation of a section of a beam where plastic bending occurs. In earthquake engineering a plastic hinge is also a type of energy damping device allowing plastic rotation (deformation) of an otherwise rigid column connection.
Plastic behaviour
In plastic limit analysis of structural members subjected to bending, it is assumed that an abrupt transition from elastic to ideally plastic behaviour occurs at a certain value of moment, known as the plastic moment (Mp). Member behaviour between the yield moment (Myp) and the plastic moment Mp is considered to be elastic. When Mp is reached, a plastic hinge is formed in the member. In contrast to a frictionless hinge permitting free rotation, it is postulated that the plastic hinge allows large rotations to occur at constant plastic moment Mp.
Plastic hinges extend along short lengths of beams. Actual values of these lengths depend on cross-sections and load distributions. But detailed analyses have shown that it is sufficiently accurate to consider beams rigid-plastic, with plasticity confined to plastic hinges at points. While this assumption is sufficient for limit state analysis, finite element formulations are available to account for the spread of plasticity along plastic hinge lengths.
By inserting a plastic hinge at a plastic limit load into a statically determinate beam, a kinematic mechanism permitting an unbounded displacement of the system can be formed. It is known as the collapse mechanism. For each degree of static indeterminacy of the beam, an additional plastic hinge must be added to form a collapse mechanism.
The number of plastic hinges (N) required to turn a structure into a collapse mechanism (an unstable structure) is:
N = degree of static indeterminacy + 1
For example, a statically determinate beam collapses once a single hinge forms, while a propped cantilever (degree of static indeterminacy 1) requires two hinges.
References
Building engineering
Structural engineering
|
https://en.wikipedia.org/wiki/Plastic%20moment
|
In structural engineering, the plastic moment (Mp) is a property of a structural section. It is defined as the moment at which the entire cross section has reached its yield stress. This is theoretically the maximum bending moment that the section can resist – when this point is reached a plastic hinge is formed and any load beyond this point will result in theoretically infinite plastic deformation. In practice most materials are work-hardened resulting in increased stiffness and moment resistance until the material fails. This is of little significance in structural mechanics as the deflection prior to this occurring is considered to be an earlier failure point in the member.
In general, the method to calculate the plastic moment first requires calculation of the plastic section modulus Z_P and then substitution into the following formula:
M_p = f_y Z_P
For example, the plastic moment for a rectangular section can be calculated with the following formula:
M_p = f_y (b h²) / 4
where
b is the width,
h is the height,
f_y is the yield stress.
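As a numeric sketch (illustrative values, not from the article): for a 100 mm × 200 mm rectangular mild-steel section with f_y = 250 N/mm²,

```python
def plastic_moment_rect(b: float, h: float, fy: float) -> float:
    """Plastic moment of a rectangular section, Mp = fy * b * h**2 / 4."""
    z_p = b * h ** 2 / 4  # plastic section modulus of a rectangle, mm^3
    return fy * z_p       # N*mm, for b and h in mm and fy in N/mm^2

print(plastic_moment_rect(100, 200, 250) / 1e6)  # 250.0 kN*m
```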
The plastic moment for a given section will always be larger than the yield moment (the bending moment at which the first part of the sections reaches the yield stress).
See also
Structural engineering theory
Plasticity (physics)
References
Building engineering
|
https://en.wikipedia.org/wiki/Password%20manager
|
A password manager is a computer program that allows users to store and manage their passwords for local applications or online services such as web applications, online shops or social media.
Password managers can generate passwords and fill online forms. Password managers may exist as a mix of: computer applications, mobile applications, or as web browser extensions.
A password manager assists in generating and storing passwords, usually in an encrypted database. Aside from passwords, these applications may also store data such as credit card information, addresses, and frequent flyer information.
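For illustration (a sketch not tied to any particular product), the generation step can be written with Python's standard secrets module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password on every run
```

The secrets module draws from the operating system's cryptographically secure random source, which is what distinguishes it from naive use of the random module.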
The main purpose of password managers is to alleviate a cyber-security phenomenon known as password fatigue, where an end-user can become overwhelmed from remembering multiple passwords for multiple services and which password is used for what service.
Password managers typically require a user to create and remember one "master" password to unlock and access all information stored in the application. Password managers may choose to integrate multi-factor authentication through fingerprints or facial recognition software, though this is not required to use the application or browser extension.
Password managers may be installed on a computer or mobile device as an application or as a browser extension.
History
The first password manager software designed to securely store passwords was Password Safe created by Bruce Schneier, which was released as a free utility on September 5, 1997. Designed for Microsoft Windows 95, Password Safe used Schneier's Blowfish algorithm to encrypt passwords and other sensitive data. Although Password Safe was released as a free utility, due to U.S. cryptography export restrictions in place at the time, only U.S. and Canadian citizens and permanent residents were initially allowed to download it.
Criticisms
Vulnerabilities
Some applications store passwords as an unencrypted file, leaving the passwords easily ac
|
https://en.wikipedia.org/wiki/Intermediate%20mesoderm
|
Intermediate mesoderm or intermediate mesenchyme is a narrow section of the mesoderm (one of the three primary germ layers) located between the paraxial mesoderm and the lateral plate of the developing embryo. The intermediate mesoderm develops into vital parts of the urogenital system (kidneys, gonads and respective tracts).
Early formation
Factors regulating the formation of the intermediate mesoderm are not fully understood. It is believed that bone morphogenetic proteins (BMPs) specify regions of growth along the dorsal-ventral axis of the mesoderm and play a central role in formation of the intermediate mesoderm. Vg1/Nodal signalling is an identified regulator of intermediate mesoderm formation acting through BMP signalling. Excess Vg1/Nodal signalling during early gastrulation stages results in expansion of the intermediate mesoderm at the expense of the adjacent paraxial mesoderm, whereas inhibition of Vg1/Nodal signalling represses intermediate mesoderm formation. A link has been established between Vg1/Nodal signalling and BMP signalling, whereby Vg1/Nodal signalling regulates intermediate mesoderm formation by modulating the growth-inducing effects of BMP signalling.
Other necessary markers of intermediate mesoderm induction include the odd-skipped related gene (Osr1) and the paired-box-2 gene (Pax2), which require intermediate levels of BMP signalling to activate. Markers of early intermediate mesoderm formation are often not exclusive to the intermediate mesoderm. This can be seen in early stages of intermediate mesoderm differentiation, where higher levels of BMP stimulate growth of lateral plate tissue, whilst lower concentrations lead to paraxial mesoderm and somite formation. Osr1, which encodes a zinc-finger DNA-binding protein, and the LIM-type homeobox gene (Lhx1) have expression domains that overlap the intermediate mesoderm as well as the lateral plate. Osr1 has expression domains encompassing the entire length of the anterior-posterior (AP) axis from the first somite
|
https://en.wikipedia.org/wiki/Benzene%20in%20soft%20drinks
|
Benzene in soft drinks is of potential concern due to the carcinogenic nature of the molecule. This contamination is a public health concern and has caused significant outcry among environmental and health advocates. Benzene levels are regulated in drinking water nationally and internationally, and in bottled water in the United States, but only informally in soft drinks. The benzene forms from decarboxylation of the preservative benzoic acid in the presence of ascorbic acid (vitamin C) and metal ions (iron and copper) that act as catalysts, especially under heat and light. Hot peppers naturally contain vitamin C ("nearly as much as in one orange") so the observation about soft drinks applies to pepper sauces containing sodium benzoate, like Texas Pete.
Formation in soft drinks
The major cause of benzene in soft drinks is the decarboxylation of benzoic acid in the presence of ascorbic acid (vitamin C, E300) or erythorbic acid (a diastereomer of ascorbic acid, E315). Benzoic acid is often added to drinks as a preservative in the form of its salts sodium benzoate (E211), potassium benzoate (E 212), or calcium benzoate (E 213). Citric acid is not thought to induce significant benzene production in combination with benzoic acid, but some evidence suggests that in the presence of ascorbic or erythorbic acid and benzoic acid, citric acid may accelerate the production of benzene.
The proposed mechanism begins with hydrogen abstraction by the hydroxyl radical, which itself is produced by the Cu²⁺-catalysed reduction of dioxygen by ascorbic acid; attack of the hydroxyl radical on benzoate then leads to loss of the carboxyl group as carbon dioxide, leaving benzene.
Other factors that affect the formation of benzene are heat and light. Storing soft drinks in warm conditions speeds up the formation of benzene.
Calcium disodium EDTA and sugars have been shown to inhibit benzene production in soft drinks.
The International Council of Beverages Associations (ICBA) has produced advice to prevent or minimize benzene formation.
Limit standards in drinking water
Various authorities have set li
|
https://en.wikipedia.org/wiki/SSSE3
|
Supplemental Streaming SIMD Extensions 3 (SSSE3 or SSE3S) is a SIMD instruction set created by Intel and is the fourth iteration of the SSE technology.
History
SSSE3 was first introduced with Intel processors based on the Core microarchitecture on June 26, 2006 with the "Woodcrest" Xeons.
SSSE3 has been referred to by the codenames Tejas New Instructions (TNI) or Merom New Instructions (MNI) for the first processor designs intended to support it.
Functionality
SSSE3 contains 16 new discrete instructions. Each instruction can act on 64-bit MMX or 128-bit XMM registers. Therefore, Intel's materials refer to 32 new instructions. They include:
Twelve instructions that perform horizontal addition or subtraction operations.
Six instructions that evaluate absolute values.
Two instructions that perform multiply-and-add operations and speed up the evaluation of dot products.
Two instructions that accelerate packed integer multiply operations and produce integer values with scaling.
Two instructions that perform a byte-wise, in-place shuffle according to the second shuffle control operand.
Six instructions that negate packed integers in the destination operand if the corresponding element in the source operand is negative.
Two instructions that align data from the composite of two operands.
CPUs with SSSE3
AMD:
"Cat" low-power processors
Bobcat-based processors
Jaguar-based processors and newer
Puma-based processors and newer
"Heavy Equipment" processors
Bulldozer-based processors
Piledriver-based processors
Steamroller-based processors
Excavator-based processors and newer
Zen-based processors
Zen+-based processors
Zen2-based processors
Zen3-based processors
Zen4-based processors
Intel:
Xeon 5100 Series
Xeon 5300 Series
Xeon 5400 Series
Xeon 3000 Series
Core 2 Duo
Core 2 Extreme
Core 2 Quad
Core i7
Core i5
Core i3
Pentium Dual Core (if 64-bit capable; Allendale onwards)
Celeron 4xx Sequence Conroe-L
Celeron Dual Core E1200
Celeron M 500 s
|
https://en.wikipedia.org/wiki/184%20%28number%29
|
184 (one hundred [and] eighty-four) is the natural number following 183 and preceding 185.
In mathematics
There are 184 different Eulerian graphs on eight unlabeled vertices, and 184 paths by which a chess rook can travel from one corner of a 4 × 4 chessboard to the opposite corner without passing through the same square twice. 184 is also a refactorable number.
In other fields
Some physicists have proposed that 184 is a magic number for neutrons in atomic nuclei.
In poker, with one or more jokers as wild cards, there are 184 different straight flushes.
See also
The year AD 184 or 184 BC
List of highways numbered 184
References
Integers
|
https://en.wikipedia.org/wiki/Comparison%20of%20issue-tracking%20systems
|
Notable issue tracking systems, including bug tracking systems, help desk and service desk issue tracking systems, as well as asset management systems, include the following. The comparison includes client-server applications, distributed systems, and hosted systems.
General
Systems listed on a light purple background are no longer in active development.
Features
Input interfaces
Notification interfaces
Revision control system integration
Authentication methods
Containers
See also
Comparison of help desk issue tracking software
List of personal information managers
Comparison of project management software
Networked Help Desk
OSS through Java
Notes
References
External links
Issue tracking systems
|
https://en.wikipedia.org/wiki/Beamrider
|
Beamrider is a fixed shooter written for the Intellivision by David Rolfe and published by Activision in 1983. The game was ported to the Atari 2600 (with a slightly reduced feature set), Atari 5200, Atari 8-bit family, ColecoVision, Commodore 64, ZX Spectrum, and MSX.
Gameplay
Beamrider takes place above Earth's atmosphere, where a large alien shield called the Restrictor Shield surrounds the Earth. The player's objective is to clear the Shield's 99 sectors of alien craft while piloting the Beamrider ship. The Beamrider is equipped with a short-range laser lariat and a limited supply of torpedoes. The player is given three torpedoes at the start of each sector.
To clear a sector, fifteen enemy ships must be destroyed. A "Sentinel ship" will then appear, which can be destroyed using a torpedo (if any remain) for bonus points. Some enemy ships can only be destroyed with torpedoes, and some must simply be dodged. Occasionally during a sector, "Yellow Rejuvenators" (extra lives) appear. They can be picked up for an extra ship, but if they are shot they will transform into ship-damaging debris.
Activision offered a Beamrider patch to players who could get to Sector 14 with 40,000 points and sent in a screenshot of their accomplishment.
Reception
The Deseret News in 1984 gave the ColecoVision version of Beamrider three stars, describing it as "basically a slide-and-shoot space game."
A reviewer for Your Commodore described the Commodore 64 version of the game as "a really good, wholesome arcade zapping game."
See also
List of Activision games: 1980–1999
Radar Scope (1979)
Juno First (1983)
References
External links
Beamrider for the Atari 2600 at Atari Mania
Beamrider for the Atari 8-bit family at Atari Mania
1983 video games
Fixed shooters
Atari 2600 games
Atari 5200 games
Atari 8-bit family games
ColecoVision games
Commodore 64 games
Intellivision games
MSX games
ZX Spectrum games
Activision games
Video games developed in the United States
|
https://en.wikipedia.org/wiki/Drumhead%20%28sign%29
|
The term drumhead refers to a type of removable sign that was prevalent on North American railroads of the first half of the 20th century. The sign was mounted at the rear of named passenger trains, and consisted of a box with internal illumination that shone through a tinted panel bearing the logo of the railroad or specific train. Since the box and the sign were usually circular in shape and resembled small drums, they came to be known as drumheads.
Railroad drumheads were removable so that they could be mounted on different passenger cars (usually on the rear of observation cars), as needed for specific trains.
See also
Headboard
References
Passenger rail transport in Canada
Passenger rail transportation in the United States
Signage
|
https://en.wikipedia.org/wiki/Raster%20scan
|
A raster scan, or raster scanning, is the rectangular pattern of image capture and reconstruction in television. By analogy, the term is used for raster graphics, the pattern of image storage and transmission used in most computer bitmap image systems. The word raster comes from the Latin word rastrum (a rake), which is derived from radere (to scrape); see also rastrum, an instrument for drawing musical staff lines. The pattern left by the lines of a rake, when drawn straight, resembles the parallel lines of a raster: this line-by-line scanning is what creates a raster. It is a systematic process of covering the area progressively, one line at a time. Although often a great deal faster, it is similar in the most general sense to how one's gaze travels when one reads lines of text.
In most modern graphics cards the data to be drawn is stored internally in an area of semiconductor memory called the framebuffer. This memory area holds the values for each pixel on the screen. These values are retrieved from the refresh buffer and painted onto the screen one row at a time.
Description
Scan lines
In a raster scan, an image is subdivided into a sequence of (usually horizontal) strips known as "scan lines". Each scan line can be transmitted in the form of an analog signal as it is read from the video source, as in television systems, or can be further divided into discrete pixels for processing in a computer system. This ordering of pixels by rows is known as raster order, or raster scan order. Analog television has discrete scan lines (discrete vertical resolution), but does not have discrete pixels (horizontal resolution) – it instead varies the signal continuously over the scan line. Thus, while the number of scan lines (vertical resolution) is unambiguously defined, the horizontal resolution is more approximate, according to how quickly the signal can change over the course of the scan line.
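As an illustrative sketch (the names are hypothetical), raster order lays pixels out row by row, so a pixel at coordinates (x, y) lives at linear offset y × width + x in the framebuffer:

```python
WIDTH, HEIGHT = 640, 480
framebuffer = bytearray(WIDTH * HEIGHT)  # one byte per pixel (grayscale)

def set_pixel(x: int, y: int, value: int) -> None:
    framebuffer[y * WIDTH + x] = value   # row-major "raster order" offset

set_pixel(10, 20, 255)  # pixel in column 10 of scan line 20
```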
Scanning pattern
In raster scanning, the beam sweeps horizontally left-
|
https://en.wikipedia.org/wiki/Information%20dimension
|
In information theory, information dimension is an information measure for random vectors in Euclidean space, based on the normalized entropy of finely quantized versions of the random vectors. This concept was first introduced by Alfréd Rényi in 1959.
Simply speaking, it is a measure of the fractal dimension of a probability distribution. It characterizes the growth rate of the Shannon entropy given by successively finer discretizations of the space.
In 2010, Wu and Verdú gave an operational characterization of Rényi information dimension as the fundamental limit of almost lossless data compression for analog sources under various regularity constraints of the encoder/decoder.
Definition and Properties
The entropy of a discrete random variable Z is
H₀(Z) = Σ_{z ∈ supp(P_Z)} P_Z(z) log₂(1 / P_Z(z)),
where P_Z(z) is the probability measure of Z when Z = z, and supp(P_Z) denotes the set {z | P_Z(z) > 0}.
Let X be an arbitrary real-valued random variable. Given a positive integer m, we create a new discrete random variable
⟨X⟩_m = ⌊mX⌋ / m,
where ⌊·⌋ is the floor operator which converts a real number to the greatest integer less than or equal to it. Then
d(X) = liminf_{m→∞} H₀(⟨X⟩_m) / log₂ m
and
d̄(X) = limsup_{m→∞} H₀(⟨X⟩_m) / log₂ m
are called lower and upper information dimensions of X respectively. When d(X) = d̄(X), we call this value the information dimension of X:
d(X) = lim_{m→∞} H₀(⟨X⟩_m) / log₂ m.
Some important properties of information dimension :
If the mild condition H₀(⌊X⌋) < ∞ is fulfilled, we have 0 ≤ d(X) ≤ d̄(X) ≤ 1.
For an n-dimensional random vector X, the first property can be generalized to 0 ≤ d(X) ≤ d̄(X) ≤ n.
It is sufficient to calculate the upper and lower information dimensions when restricting to the exponential subsequence m = 2ˡ.
d(X) and d̄(X) are kept unchanged if rounding or ceiling functions are used in quantization.
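As a numerical illustration (a sketch with arbitrary sample sizes, not from the article), the ratio H₀(⟨X⟩_m)/log₂ m can be estimated empirically; for a distribution with a density on [0, 1], the ratio approaches 1:

```python
import math
import random
from collections import Counter

def entropy_bits(samples):
    """Empirical Shannon entropy (in bits) of a list of discrete values."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
xs = [random.random() for _ in range(200_000)]  # X uniform on [0, 1]
for m in (2 ** 4, 2 ** 8, 2 ** 12):
    quantized = [math.floor(m * x) for x in xs]  # <X>_m up to rescaling
    print(m, round(entropy_bits(quantized) / math.log2(m), 3))  # tends to 1
```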
d-Dimensional Entropy
If the information dimension d(X) exists, one can define the d-dimensional entropy of this distribution by
H_{d(X)}(X) = lim_{m→∞} (H₀(⟨X⟩_m) − d(X) log₂ m),
provided the limit exists. If d(X) = 0, the zero-dimensional entropy equals the standard Shannon entropy H₀(X). For integer dimension d(X) = n ≥ 1, the n-dimensional entropy is the n-fold integral defining the respective differential entropy.
An equivalent definition of Information Dimension
In 1994, Kawabata and Dem
|
https://en.wikipedia.org/wiki/Mac%20gaming
|
Mac gaming refers to the use of video games on Macintosh personal computers. In the 1990s, Apple computers did not attract the same level of video game development as Microsoft Windows computers due to the high popularity of Microsoft Windows and, for 3D gaming, Microsoft's DirectX technology. In recent years, the introduction of Mac OS X and support for Intel processors have eased porting of many games, including 3D games through use of OpenGL and more recently Apple's own Metal API. Virtualization technology and Boot Camp also permit the use of Windows and its games on Macintosh computers. Today, a growing number of popular games run natively on macOS, though as of early 2019, a majority still require the use of Microsoft Windows.
macOS Catalina (and later) eliminated support for 32-bit games, including those compatible with older versions of macOS.
Early game development on the Mac
Prior to the release of the Macintosh 128K, the first Macintosh computer, marketing executives at Apple feared that including a game in the finished operating system would aggravate the impression that the graphical user interface made the Mac toy-like. More critically, the limited amount of RAM in the original Macintosh meant that fitting a game into the operating system would be very difficult. Eventually, Andy Hertzfeld created a Desk Accessory called Puzzle that occupied only 600 bytes of memory. This was deemed small enough to be safely included in the operating system, and it shipped with the Mac when released in 1984. With Puzzle—the first computer game specifically for a mouse—the Macintosh became the first computer with a game in its ROM, and it would remain a part of the Mac OS for the next ten years, until being replaced in 1994 with Jigsaw, a jigsaw puzzle game included as part of System 7.5.
During the development of the Mac, a chess game similar to Archon based on Alice in Wonderland was shown to the development team. The game was written by Steve Capps for the Apple L
|
https://en.wikipedia.org/wiki/CEN%20Workshop%20Agreement
|
A CEN Workshop Agreement (commonly abbreviated CWA) is a reference document from the European Committee for Standardization (CEN). It is, by definition, not an official standard from the member organizations.
In the field of electronic signatures, several CWAs exist. In July 2003 the European Commission granted the following three CWAs status as generally recognized technical standards, presumed to be in accordance with the Electronic Signatures Directive (1999/93/EC):
CWA 14167-1 (June 2003): security requirements for trustworthy systems managing certificates for electronic signatures — Part 1: System Security Requirements
CWA 14167-2 (March 2004): security requirements for trustworthy systems managing certificates for electronic signatures — Part 2: cryptographic module for CSP signing operations — Protection Profile (MCSO-PP)
CWA 14169 (March 2004): secure signature creation devices.
Other CWAs deal with e-signatures; among them:
CWA 14170 Signature Creation Process and Environment.
CWA 14171 Signature Validation Process and Environment.
References
External links
CEN Workshop Agreements – CEN website
CEN Workshop Agreements on E-signature – CEN website
EN standards
European Committee for Standardization
|
https://en.wikipedia.org/wiki/Cyclic%20code
|
In coding theory, a cyclic code is a block code, where the circular shifts of each codeword give another word that belongs to the code. They are error-correcting codes that have algebraic properties that are convenient for efficient error detection and correction.
Definition
Let C be a linear code over a finite field (also called Galois field) GF(q) of block length n. C is called a cyclic code if, for every codeword c = (c₁, ..., cₙ) from C, the word (cₙ, c₁, ..., cₙ₋₁) in GF(q)ⁿ obtained by a cyclic right shift of components is again a codeword. Because one cyclic right shift is equal to n − 1 cyclic left shifts, a cyclic code may also be defined via cyclic left shifts. Therefore, the linear code C is cyclic precisely when it is invariant under all cyclic shifts.
Cyclic codes have some additional structural constraint on the codes. They are based on Galois fields and because of their structural properties they are very useful for error controls. Their structure is strongly related to Galois fields because of which the encoding and decoding algorithms for cyclic codes are computationally efficient.
Algebraic structure
Cyclic codes can be linked to ideals in certain rings. Let R = GF(q)[x] / (xⁿ − 1) be a polynomial ring over the finite field GF(q). Identify the elements of the cyclic code C with polynomials in R such that (c₁, ..., cₙ) maps to the polynomial c₁ + c₂x + ... + cₙxⁿ⁻¹: thus multiplication by x corresponds to a cyclic shift. Then C is an ideal in R, and hence principal, since R is a principal ideal ring. The ideal is generated by the unique monic element in C of minimum degree, the generator polynomial g.
This must be a divisor of xⁿ − 1. It follows that every cyclic code is a polynomial code.
If the generator polynomial g has degree d then the rank of the code C is n − d.
The idempotent of C is a codeword e such that e² = e (that is, e is an idempotent element of C) and e is an identity for the code, that is e c = c for every codeword c. If n and q are coprime such a word always exists and is unique; it is a generator of the code.
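As a small illustration over GF(2) (a sketch, not from the article), multiplication by x modulo xⁿ − 1 is exactly a cyclic shift of the coefficient vector:

```python
def times_x_mod(c):
    """Multiply c[0] + c[1]x + ... + c[n-1]x^(n-1) by x, modulo x^n - 1."""
    return [c[-1]] + c[:-1]  # x^n wraps to 1, so the top coefficient cycles around

codeword = [1, 0, 1, 1, 0, 0, 0]  # coefficients c0..c6, block length n = 7
print(times_x_mod(codeword))      # [0, 1, 0, 1, 1, 0, 0]
```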
An irreducible code is a cyclic code in which the code, as an ideal i
|
https://en.wikipedia.org/wiki/Even%20code
|
A binary code is called an even code if the Hamming weight of each of its codewords is even. An even cyclic code has a generator polynomial that includes the minimal polynomial (1 + x) as a factor. Furthermore, a binary code is called doubly even if the Hamming weight of all its codewords is divisible by 4. An even code which is not doubly even is said to be strictly even.
Examples of doubly even codes are the extended binary Hamming code of block length 8 and the extended binary Golay code of block length 24. These two codes are, in addition, self-dual.
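A quick check in Python (illustrative; this particular generator matrix is one standard presentation of the extended [8,4] binary Hamming code):

```python
from itertools import product

G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def encode(msg):
    """Encode a 4-bit message as a GF(2) linear combination of G's rows."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

codewords = [encode(msg) for msg in product([0, 1], repeat=4)]
print(sorted({sum(c) for c in codewords}))       # [0, 4, 8]
print(all(sum(c) % 4 == 0 for c in codewords))   # True -> doubly even
```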
Coding theory
Parity (mathematics)
|
https://en.wikipedia.org/wiki/Computer%20Memories%2C%20Inc.
|
Computer Memories, Inc. (CMI) was a Chatsworth, California manufacturer of hard disk drives during the early 1980s. CMI made basic stepper motor-based drives, with low cost in mind.
History
The company was founded in 1979 by Raymond Brooke, Abraham Brand, James Willets and James Quackenbush, all formerly of Pertec Computer Corporation, with initial seed money from Raymond Brooke and Abraham Brand and investors Irwin Rubin, Frederic Heim and Marshall Butler. It was incorporated on August 6, 1979. Initially the company offered three 5.25-inch disk drives with a capacity of 5, 10 or 15 megabytes (unformatted). Early investors in the company included Intel Corporation. The company made an initial public offering on August 23, 1983 of approximately 2,000,000 shares of common stock. In August 1984 it secured a major contract as sole producer of 20-megabyte hard drives for the base model of the IBM PC/AT. Unfortunately, the Singapore-manufactured CM6000 drives proved highly unreliable: dealers reported failure rates as high as 25 to 30 percent. Part of the problem was high demand for the PC/AT; IBM increased its order from 90,000 units in 1984 to 240,000 in 1985, and manufacturing quality suffered. In addition, the design of the disk drive subsystem itself was flawed.
At the same time, Quantum Corporation sued CMI for patent infringement relating to the servo mechanism in the entire CM6600 line of drives. Instead of putting the tracking grating on the head arm and driving the arm directly from a voice coil, like the Quantum designs, CMI made a composite motor that would bolt to the drive in place of the usual stepper motor, with the voice coil on the bottom and the tracking mechanism on top (similar to DC servo motors used in process controls and robotics). CMI connected the motor to the arm with a metal-band pulley, the same mechanism they used on their stepper-motor drives. Since the feedback system was behind the pulley, it had to compensate for slack in the arm, one of several t
|
https://en.wikipedia.org/wiki/TIM-001
|
TIM-001 was an application development microcomputer developed by Mihajlo Pupin Institute (Serbia) in 1983/84.
See also
Mihajlo Pupin Institute
Literature
Dragoljub Milićević, Dušan Hristović (Ed): Računari TIM, Naučna knjiga, Belgrade 1990. In Serbian.
Dušan Hristović: Razvoj računarstva u Srbiji (Computing in Serbia), Phlogiston journal, No 18/19, pp. 89-105, Museum MNT-SANU, Belgrade 2010/2011. In Serbian.
D.B.Vujaklija, N.Markovic (Ed): 50 Years of computing in Serbia, pp.37-44, DIS, IMP and PC-Press, Belgrade 2011.
Mihajlo Pupin Institute
Computing by computer model
IBM PC compatibles
|
https://en.wikipedia.org/wiki/Coprostanol
|
5β-Coprostanol (5β-cholestan-3β-ol) is a 27-carbon stanol formed from the biohydrogenation of cholesterol (cholest-5-en-3β-ol) in the gut of most higher animals and birds. This compound has frequently been used as a biomarker for the presence of human faecal matter in the environment.
Chemical properties
Solubility
5β-coprostanol has a low water solubility, and consequently a high octanol-water partition coefficient (log Kow = 8.82). This means that in most environmental systems, 5β-coprostanol will be associated with the solid phase.
Degradation
In anaerobic sediments and soils, 5β-coprostanol is stable for many hundreds of years enabling it to be used as an indicator of past faecal discharges. As such, records of 5β-coprostanol from paleo-environmental archives have been used to further constrain the timing of human settlements in a region, as well as reconstruct relative changes in human populations and agricultural activities over several thousand years.
Chemical analysis
Since the molecule has a hydroxyl (-OH) group, it is frequently bound to other lipids including fatty acids; most analytical methods, therefore, utilise a strong alkali (KOH or NaOH) to saponify the ester linkages. Typical extraction solvents include 6% KOH in methanol. The free sterols and stanols (saturated sterols) are then separated from the polar lipids by partitioning into a less polar solvent such as hexane. Prior to analysis, the hydroxyl group is frequently derivatised with BSTFA (bis-trimethyl silyl trifluoroacetamide) to replace the hydrogen with the less exchangeable trimethylsilyl (TMS) group. Instrumental analysis is frequently conducted on gas chromatograph (GC) with either a flame ionisation detector (FID) or mass spectrometer (MS). The mass spectrum for 5β-coprostanol - TMS ether can be seen in the figure.
Isomers
As well as the faecally derived stanol, two other isomers can be identified in the environment: 5α-cholestanol
Formation and occurrence
Faecal sources
5β-copro
|
https://en.wikipedia.org/wiki/Iskra%20Delta%20Partner
|
Iskra Delta Partner was a computer developed by Iskra Delta in 1983.
Specifications
Text mode: 26 lines with 80 or 132 characters each
Character set: YUSCII
I/O ports: three RS-232C, one used to connect printer (1200-4800 bit/s) and two general-purpose (300-9600 bit/s)
References
External links
Old-Computer.com article
Microcomputers
|
https://en.wikipedia.org/wiki/Gorenje%20Dialog
|
Dialog was a microcomputer system developed by Gorenje in the 1980s. It was based on the 8-bit 4 MHz Zilog Z-80A microprocessor. The primary operating system was FEDOS (CP/M 2.2 compatible), developed by the Computer Structures and Systems Laboratory (Faculty of Electrical Engineering, University of Ljubljana) and Gorenje.
There were three variants of the Dialog microcomputer system, distinguished only by minor changes: home, laboratory and personal (PC) (in Slovene: hišni, laboratorijski, osebni). Three types of external memory could be connected to the Dialog: cassette recorder, floppy drive (5.25" and 8") and hard drive. The home variant of Dialog used resident FEBASIC (a variant of BASIC).
References
Mikroračunalnik DIALOG, Tehnično navodilo-uporaba, Gorenje procesna oprema.
FEBASIC, priročnik za uporabnike sistema DIALOG, T. Žitko, Ljubljana, 1985.
Microcomputers
Gorenje
|
https://en.wikipedia.org/wiki/Iskradata%201680
|
Iskradata 1680 was a computer developed by Iskradata in 1979.
References
Microcomputers
|
https://en.wikipedia.org/wiki/CER-202
|
CER (Digital Electronic Computer) model 202 is an early digital computer developed by the Mihajlo Pupin Institute (Serbia) in the 1960s.
See also
CER Computers
Mihajlo Pupin Institute
References
One-of-a-kind computers
CER computers
|
https://en.wikipedia.org/wiki/Stack%20effect
|
The stack effect or chimney effect is the movement of air into and out of buildings through unsealed openings, chimneys, flue-gas stacks, or other containers, resulting from air buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The result is either a positive or negative buoyancy force. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation, air infiltration, and fires (e.g. the Kaprun tunnel fire, King's Cross underground station fire and the Grenfell Tower fire).
Stack effect in buildings
Since buildings are not totally sealed (at the very minimum, there is always a ground level entrance), the stack effect will cause air infiltration. During the heating season, the warmer indoor air rises up through the building and escapes at the top either through open windows, ventilation openings, or unintentional holes in ceilings, like ceiling fans and recessed lights. The rising warm air reduces the pressure in the base of the building, drawing cold air in through either open doors, windows, or other openings and leakage. During the cooling season, the stack effect is reversed, but is typically weaker due to lower temperature differences.
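The magnitude of the driving pressure can be estimated with the standard draft approximation that follows from the ideal-gas law (air density ρ = P/(R·T)); the building height and temperatures in the sketch below are illustrative assumptions, not values from the text:

```python
# Illustrative estimate of the stack-effect pressure difference, using the
# standard draft approximation dP = (g * P_atm * h / R) * (1/T_out - 1/T_in),
# which follows from the ideal-gas law (rho = P / (R * T)).
# The building height and temperatures below are made-up example values.

g = 9.81          # gravitational acceleration, m/s^2
R = 287.05        # specific gas constant of dry air, J/(kg*K)
P_atm = 101_325   # atmospheric pressure, Pa

h = 50.0          # height between lower and upper openings, m (assumed)
T_in = 293.0      # indoor temperature, K (20 C)
T_out = 273.0     # outdoor temperature, K (0 C)

dP = (g * P_atm / R) * h * (1.0 / T_out - 1.0 / T_in)
print(f"stack-effect pressure difference: {dP:.1f} Pa")  # ~43 Pa
```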
In a modern high-rise building with a well-sealed envelope, the stack effect can create significant pressure differences that must be given design consideration and may need to be addressed with mechanical ventilation. Stairwells, shafts, elevators, and the like, tend to contribute to the stack effect, while interior partitions, floors, and fire separations can mitigate it. Especially in case of fire, the stack effect needs to be controlled to prevent the spread of smoke and fire, and to maintain tenable conditions for occupants and firefighters. While natural ventilation methods may be effective, such as air outlets being installed c
|
https://en.wikipedia.org/wiki/Biosafety%20Clearing-House
|
The Biosafety Clearing-House is an international mechanism that exchanges information about the movement of genetically modified organisms, established under the Cartagena Protocol on Biosafety. It assists Parties (i.e. governments that have ratified the Protocol) to implement the protocol’s provisions and to facilitate sharing of information on, and experience with, living modified organisms (also known as genetically modified organisms, GMOs). It further assists Parties and other stakeholders to make informed decisions regarding the importation or release of GMOs.
The Biosafety Clearing-House Central Portal is accessible through the Web. The BCH is a distributed system, and information in it is owned and updated by the users themselves through an authenticated system to ensure timeliness and accuracy.
Mandate
Article 20, paragraph 1 of the Cartagena Protocol on Biosafety established the BCH as part of the clearing-house mechanism of the Convention on Biological Diversity, in order to:
(a) Facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and
(b) Assist Parties to implement the Protocol, taking into account the special needs of developing country Parties, in particular the least developed and small island developing States among them, and countries with economies in transition as well as countries that are centres of origin and centres of genetic diversity.
First use in international law
The BCH differs from other similar mechanisms established under other international legal agreements because it is in fact essential for the successful implementation of its parent body, the Protocol. It was the first Internet-based information-exchange mechanism created that must be used to fulfil certain international legal obligations - not only do Parties to the Protocol have a legal obligation to provide certain types of information to the BCH within defined time-frames, but certa
|
https://en.wikipedia.org/wiki/Gametangium
|
A gametangium (plural: gametangia) is an organ or cell in which gametes are produced that is found in many multicellular protists, algae, fungi, and the gametophytes of plants. In contrast to gametogenesis in animals, a gametangium is a haploid structure and formation of gametes does not involve meiosis.
Types of gametangia
Depending on the type of gamete produced in a gametangium, several types can be distinguished.
Female
Female gametangia are most commonly called archegonia. They produce egg cells and are the sites for fertilization. Archegonia are common in algae and primitive plants as well as gymnosperms. In flowering plants, they are replaced by the embryo sac inside the ovule.
Male
The male gametangia are most commonly called antheridia. They produce sperm cells that they release for fertilization. Antheridia producing non-motile sperm (spermatia) are called spermatangia. Some antheridia do not release their sperm. For example, the oomycete antheridium is a syncytium with many sperm nuclei and fertilization occurs via fertilization tubes growing from the antheridium and making contact with the egg cells. Antheridia are common in the gametophytes in "lower" plants such as bryophytes, ferns, cycads and ginkgo. In "higher" plants such as conifers and flowering plants, they are replaced by pollen grains.
Isogamous
In isogamy, the gametes look alike and cannot be classified into "male" or "female." For example, in zygomycetes, two gametangia (single multinucleate cells at the end of hyphae) form good contact with each other and fuse into a zygosporangium. Inside the zygosporangium, the nuclei from each of the original two gametangia pair up.
See also
Zoosporangium, a gametangium that produces motile isogamous gametes, called zoospores
Reproduction
Reproductive system
Germ cells
|
https://en.wikipedia.org/wiki/YUSCII
|
YUSCII is an informal name for several JUS standards for 7-bit character encoding. These include:
JUS I.B1.002 (ISO-IR-141, ISO 646-YU), which encodes Gaj's Latin alphabet, used for Serbo-Croatian and Slovenian language
JUS I.B1.003 (ISO-IR-146), which encodes Serbian Cyrillic alphabet, and
JUS I.B1.004 (ISO-IR-147), which encodes Macedonian Cyrillic alphabet.
The encodings are based on ISO 646, a 7-bit Latin character encoding standard, and were used in Yugoslavia before the widespread use of the later CP 852, ISO-8859-2/8859-5, Windows-1250/1251 and Unicode standards. It was named after ASCII, having the first word "American" replaced with "Yugoslav": "Yugoslav Standard Code for Information Interchange". Specific standards are also sometimes called by a local name: SLOSCII, CROSCII or SRPSCII for JUS I.B1.002, SRPSCII for JUS I.B1.003, MAKSCII for JUS I.B1.004.
JUS I.B1.002 is a national ISO 646 variant, i.e. equal to basic ASCII with less frequently used symbols replaced with specific letters of Gaj's alphabet. Cyrillic standards further replace Latin alphabet letters with corresponding Cyrillic letters. Љ (lj), Њ (nj), Џ (dž) and ѕ (dz) correspond to Latin digraphs, and are mapped over Latin letters which are not used in Serbian or Macedonian (q, w, x, y).
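A minimal Python sketch of decoding the Latin variant, assuming the code-point substitutions follow the published ISO 646-YU table:

```python
# Decoding JUS I.B1.002 (ISO 646-YU) text to Unicode: the national variant
# reuses a handful of ASCII code points for Gaj's-alphabet letters (per the
# published code table); all other code points coincide with ASCII.
YUSCII_LATIN = {
    0x40: "Ž", 0x5B: "Š", 0x5C: "Đ", 0x5D: "Ć", 0x5E: "Č",
    0x60: "ž", 0x7B: "š", 0x7C: "đ", 0x7D: "ć", 0x7E: "č",
}

def decode_yuscii(data: bytes) -> str:
    return "".join(YUSCII_LATIN.get(b, chr(b)) for b in data)

# b"\x5Eokolada" decodes to "Čokolada"; read as plain ASCII the same bytes
# show "^okolada", which is why YUSCII text stays fairly legible even
# without font support.
print(decode_yuscii(b"\x5Eokolada"))  # Čokolada
```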
YUSCII was originally developed for teleprinters, but it also spread to computer use. This was widely considered a bad idea among software developers who needed the original ASCII characters such as {, [, }, ], ^, ~, |, \ in their source code (an issue partly addressed by trigraphs in C). On the other hand, an advantage of YUSCII is that it remains comparatively readable even when support for it is not available, similarly to the Russian KOI-7. Numerous attempts to replace it with something better kept failing due to limited support. Eventually, Microsoft's introduction of code pages, the appearance of Unicode and the availability of fonts spelled the certain (but nevertheless slow) end of YUSCII.
Codepage layout
Code poin
|
https://en.wikipedia.org/wiki/Vancouver%20Community%20Network
|
Vancouver Community Network (VCN) is a community-owned provider of free internet access, technical support, and web hosting services to individuals and nonprofit organizations in Vancouver, British Columbia.
It developed StreetMessenger, a communication service for the homeless.
History
The organization was founded as Vancouver FreeNet by Brian Campbell and others.
Revenue Canada initially rejected VCN's application status as a charitable organization, which would have allowed it to receive tax-deductible contributions. VCN appealed this decision, and in 1997, the Federal Court of Appeal ruled that providing free internet service was a charitable tax purpose.
See also
Chebucto Community Network
Free-Net
Wireless community network
Community informatics
National Capital Freenet
References
External links
1993 establishments in British Columbia
Bulletin board systems
Community networks
Free web hosting services
Information technology charities
Internet service providers of Canada
Non-profit organizations based in Vancouver
Web hosting
|
https://en.wikipedia.org/wiki/Hemocyte%20%28invertebrate%20immune%20system%20cell%29
|
A hemocyte is a cell that plays a role in the immune system of invertebrates. It is found within the hemolymph.
Hemocytes are phagocytes of invertebrates.
Hemocytes in Drosophila melanogaster can be divided into two categories: embryonic and larval. Embryonic hemocytes are derived from head mesoderm and enter the hemolymph as circulating cells. Larval hemocytes, on the other hand, are responsible for tissue remodeling during development. Specifically, they are released during the pupa stage in order to prepare the fly for the transition into an adult and the massive associated tissue reorganization that must occur.
There are four basic types of hemocytes found in fruit flies: secretory, plasmatocytes, crystal cells, and lamellocytes. Secretory cells are never released into the hemolymph and instead send out signalling molecules responsible for cell differentiation. Plasmatocytes are the hemocytes responsible for cell ingestion (phagocytosis) and represent about 95% of circulating hemocytes. Crystal cells are only found in the larval stage of Drosophila, and they are involved in melanization, a process by which microbes/pathogens are engulfed in a hardened gel and destroyed via anti-microbial peptides and other proteins involved in the humoral response. They constitute about 5% of circulating hemocytes. Lamellocytes are flat cells that are never found in adults and instead are only present in larvae, where they encapsulate invading pathogens. They specifically act on parasitic wasp eggs that bind to the surfaces of cells and cannot be phagocytosed by host cells.
In mosquitoes, hemocytes are functionally divided into three populations: granulocytes, oenocytoids and prohemocytes. Granulocytes are the most abundant cell type. They rapidly attach to foreign surfaces and readily engage in phagocytosis. Oenocytoids do not readily spread on foreign surfaces and are the major producers of phenoloxidase, which is the major enzyme
|
https://en.wikipedia.org/wiki/Debug%20menu
|
A debug menu or debug mode is a user interface implemented in a computer program that allows the user to view and/or manipulate the program's internal state for the purpose of debugging. Some games format their debug menu as an in-game location, referred to as a debug room (distinct from the developer's room type of Easter egg). Debug menus and rooms are used during software development for ease of testing and are usually made inaccessible or otherwise hidden from the end user.
Compared to the normal user interfaces, debug menus usually are unpolished and not user-friendly, intended only to be used by the software's developers. They are often cryptic and may allow for destructive actions such as erasing data without warning.
In video games
Debug menus are often of interest to video game players as they can be used to cheat, access unused content, or change the game configuration beyond what is normally allowed. Some game developers will reveal methods to access these menus as bonus features, while others may lock them out of the final version entirely such that they can only be accessed by modifying the program.
The Cutting Room Floor (TCRF) is a website dedicated to researching and documenting hidden content in video games, including debugging material. In December 2013, Edge described the website as "the biggest and most organised" of its kind, and by that time it had 3712 articles.
In other software
Debugging functions can be found in many other programs and consumer electronics as well. For example, many TVs and DVD players contain hidden menus that can be used to change settings that aren't accessible through the normal menus. Many cell phones also contain debug menus, usually used to test out functions of the phone to make sure they are working. For example, the hidden menu of the Samsung Galaxy S III has test functions for the vibrator, proximity sensor, sound, and other basic aspects of the phone.
References
Debugging
Video game development
|
https://en.wikipedia.org/wiki/Stressed%20skin
|
In mechanical engineering, stressed skin is a type of rigid construction, intermediate between monocoque and a rigid frame with a non-loaded covering. A stressed skin structure has its compression-taking elements localized and its tension-taking elements distributed. Typically, the main frame has a rectangular structure and is triangulated by the covering.
Description
A framework box can be distorted from being square, so it is not rigid by itself; adding diagonals that take either tension or compression fixes this, because the box cannot deviate from right angles without altering the diagonals.
Sometimes flexible members like wires are used to provide tension, or rigid compression frames are used, as with a Warren or Pratt truss; however, both of these are full-frame structures.
When the skin or outer covering is in tension so that it provides a significant portion of the rigidity, the structure is said to have a stressed skin design. This may also be referred to as semi-monocoque, and overlaps with monocoque, which has less framing, sometimes only including longitudinal or lateral members, and also overlaps with rigid frame structures where a minor portion of the overall stiffness may be derived from the skin. This method of construction is lighter than a full frame structure and not as complex to design as a full monocoque.
Examples
Examples include nearly all modern all-metal airplanes, as well as some railway vehicles, buses and motorhomes. The London Transport AEC Routemaster incorporated internal panels riveted to the frames which took most of the structure's shear load. Automobile unibodies are a form of stressed skin as well, as are some framed buildings which lack diagonal bracing.
Dornier-Zeppelin D.I : first all-metal stressed skin fighter and first with stressed skin wings (1918)
Short Silver Streak : first all-metal British stressed skin aircraft (1920)
Zeppelin-Lindau (Dornier) Rs.IV : first aircraft with an all-metal stressed skin fuselage
|
https://en.wikipedia.org/wiki/Isothermal%20titration%20calorimetry
|
In chemical thermodynamics, isothermal titration calorimetry (ITC) is a physical technique used to determine the thermodynamic parameters of interactions in solution. It is most often used to study the binding of small molecules (such as medicinal compounds) to larger macromolecules (proteins, DNA etc.) in a label-free environment. It consists of two cells which are enclosed in an adiabatic jacket. The compounds to be studied are placed in the sample cell, while the other cell, the reference cell, is used as a control and contains the buffer in which the sample is dissolved.
The technique was developed by H. D. Johnston in 1968 as part of his Ph.D. dissertation at Brigham Young University, and was considered niche until introduced commercially by MicroCal Inc. in 1988. Compared to other calorimeters, ITC has the advantage of not requiring any corrections, since there is no heat exchange between the system and the environment.
Thermodynamic measurements
ITC is a quantitative technique that can determine the binding affinity (K_a), enthalpy changes (ΔH), and binding stoichiometry (n) of the interaction between two or more molecules in solution. This is achieved by integrating the area of the injection peaks and plotting the resulting heats of binding (ΔH, kcal/mol) against the molar ratio of the binding partners. From these initial measurements, Gibbs free energy changes (ΔG) and entropy changes (ΔS) can be determined using the relationship:

ΔG = −RT ln K_a = ΔH − TΔS

(where R is the gas constant and T is the absolute temperature).
For accurate measurements of binding affinity, the curve of the thermogram must be sigmoidal. The profile of the curve is determined by the c-value, which is calculated using the equation:

c = n K_a [M]

where n is the stoichiometry of the binding, K_a is the association constant and [M] is the concentration of the molecule in the cell. The c-value must fall between 1 and 1000, ideally between 10 and 100. In terms of binding affinity, this c-window limits the range of association constants that can be measured accurately at a given cell concentration.
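A short Python sketch of these relationships, using made-up example values of the kind an ITC experiment produces:

```python
import math

R = 1.987204e-3  # gas constant in kcal/(mol*K), matching the kcal/mol units above

# Example (made-up) values of the kind an ITC experiment yields:
Ka = 1.0e6       # association constant, 1/M
dH = -10.0       # binding enthalpy, kcal/mol
n = 1.0          # stoichiometry
M_cell = 5.0e-5  # concentration of the molecule in the cell, M
T = 298.15       # absolute temperature, K

dG = -R * T * math.log(Ka)   # Gibbs free energy change, kcal/mol
dS = (dH - dG) / T           # entropy change, kcal/(mol*K)
c = n * Ka * M_cell          # dimensionless c-value

print(f"dG = {dG:.2f} kcal/mol")      # about -8.2 kcal/mol
print(f"dS = {dS:.4f} kcal/(mol*K)")
print(f"c  = {c:.0f}")                # 50: inside the ideal 10-100 window
```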
In
|
https://en.wikipedia.org/wiki/Competition%20%28biology%29
|
Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other.
In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time).
There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition.
According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis.
Interference competition
During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites.
|
https://en.wikipedia.org/wiki/Model%20transformation
|
A model transformation, in model-driven engineering, is an automated way of modifying and creating platform-specific models from platform-independent ones. An example use of model transformation is ensuring that a family of models is consistent, in a precise sense which the software engineer can define. The aim of using a model transformation is to save effort and reduce errors by automating the building and modification of models where possible.
Overview
Model transformations can be thought of as programs that take models as input. There is a wide variety of kinds of model transformation and uses of them, which differ in their inputs and outputs and also in the way they are expressed.
A model transformation usually specifies which models are acceptable as input, and if appropriate what models it may produce as output, by specifying the metamodel to which a model must conform.
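As a toy illustration (all metamodel, model, and type names below are invented for the example), a transformation can validate its input against a metamodel before producing an output model:

```python
# A toy model transformation: the input must conform to a minimal
# platform-independent metamodel (entities with typed attributes), and the
# output is a platform-specific model (SQL-like table definitions).

PIM_TYPES = {"string", "int"}  # the toy metamodel's attribute types

def conforms(pim: dict) -> bool:
    """Check the input model against the metamodel before transforming."""
    return all(t in PIM_TYPES for entity in pim.values() for t in entity.values())

def to_psm(pim: dict) -> dict:
    """Transform the platform-independent model into a platform-specific one."""
    sql_type = {"string": "VARCHAR(255)", "int": "INTEGER"}
    if not conforms(pim):
        raise ValueError("input model does not conform to the metamodel")
    return {name.lower(): {attr: sql_type[t] for attr, t in entity.items()}
            for name, entity in pim.items()}

pim = {"Customer": {"name": "string", "age": "int"}}
print(to_psm(pim))  # {'customer': {'name': 'VARCHAR(255)', 'age': 'INTEGER'}}
```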
Classification of model transformations
Model transformations and languages for them have been classified in many ways.
Some of the more common distinctions drawn are:
Number and type of inputs and outputs
In principle a model transformation may have many inputs and outputs of various types; the only absolute limitation is that a model transformation will take at least one model as input. However, a model transformation that did not produce any model as output would more commonly be called a model analysis or model query.
Endogenous versus exogenous
Endogenous transformations are transformations between models expressed in the same language. Exogenous transformations are transformations between models expressed using different languages. For example, in a process conforming to the OMG Model Driven Architecture, a platform-independent model might be transformed into a platform-specific model by an exogenous model transformation.
Unidirectional versus bidirectional
A unidirectional model transformation has only one mode of execution: that is, it always takes the same type of input
|
https://en.wikipedia.org/wiki/ShotCode
|
ShotCode is a circular barcode created by High Energy Magic of Cambridge University. It uses a dartboard-like circle, with a bullseye in the centre and data circles surrounding it. The technology reads data bits from these data circles by measuring the angle and distance of each from the bullseye.
ShotCodes are designed to be read with a regular camera (including those found on mobile phones and webcams) without the need to purchase other specialised hardware. ShotCodes differ from matrix barcodes in that they do not store regular data; rather, they store a 40-bit lookup number. This number links to a server that holds a mapped URL, which the reading device can contact in order to download the associated data.
History
ShotCode was created in 1999 at the University of Cambridge during research into a low-cost vision-based method to track locations, which produced TRIPCode as a result. TRIPCode was used to track printed paper badges in real time with webcams. It later found another research use at Cambridge: reading barcodes with mobile phone cameras, for which it was embedded in a round barcode named SpotCode. High Energy Magic was founded in 2003 to commercialise research from the University of Cambridge Computer Laboratory and Laboratory for Communications Engineering. Bango.net, a mobile company, used SpotCode in its ads in 2004. In 2005 High Energy Magic Ltd. sold the entire SpotCode IPR to OP3, after which the name was changed from SpotCode to ShotCode. Heineken was the first company to officially use the ShotCode technology.
ShotCode's software
The software used to read a ShotCode captured by a mobile camera is called 'ShotReader'. It is lightweight, at only around 17 kB. It 'reads' the camera's picture of a ShotCode in real time and prompts the browser to navigate to a particular site.
The last website update was from 2007, suggesting that updates for phones based on Android and iPhone will not be availa
|
https://en.wikipedia.org/wiki/Requirement%20prioritization
|
Requirement prioritization is used in the Software product management for determining which candidate requirements of a software product should be included in a certain release. Requirements are also prioritized to minimize risk during development so that the most important or high risk requirements are implemented first. Several methods for assessing a prioritization of software requirements exist.
Introduction
In Software product management there exist several sub processes. First of all there is portfolio management where a product development strategy is defined based on information from the market and partner companies. In product roadmapping (or technology roadmapping), themes and core assets of products in the portfolio are identified and roadmap constructions are created. In requirements management candidate software requirements for a product are gathered and organized. Finally, in the release planning activity, these requirements are prioritized and selected for a release, after which the launch of the software product can be prepared. Thus, one of the key steps in release planning is requirements prioritization.
Cost-value approach
A good and relatively easy-to-use method for prioritizing software product requirements is the cost-value approach. This approach was created by Joachim Karlsson and Kevin Ryan, and was further developed and commercialized by the company Focal Point (acquired by Telelogic in 2005). Their basic idea was to determine, for each individual candidate requirement, the cost of implementing the requirement and how much value the requirement has.
The assessment of values and costs for the requirements was performed using the Analytic Hierarchy Process (AHP). This method was created by Thomas Saaty. Its basic idea is that for all pairs of (candidate) requirements a person assesses a value or a cost comparing the one requirement of a pair with the other. For example, a value of 3 for (Req1, Req2) ind
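A minimal sketch of the cost-value calculation, approximating the AHP priorities by the normalized geometric mean of each comparison-matrix row (a common stand-in for the principal eigenvector); the comparison values and costs are hypothetical:

```python
import math

# Pairwise comparison matrix for three hypothetical requirements' *value*:
# entry [i][j] says how much more valuable requirement i is than j (AHP scale).
value_cmp = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Approximate AHP priorities by the geometric mean of each row,
    normalized to sum to 1 (a common stand-in for the principal eigenvector)."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

values = ahp_weights(value_cmp)
costs = [0.5, 0.3, 0.2]  # assumed relative implementation costs, normalized

# The cost-value approach ranks requirements by their value-to-cost ratio.
for i, (v, c) in enumerate(zip(values, costs), start=1):
    print(f"Req{i}: value={v:.2f} cost={c:.2f} ratio={v / c:.2f}")
```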
|
https://en.wikipedia.org/wiki/Digital%20Item
|
Digital Item is the basic unit of transaction in the MPEG-21 framework. It is a structured digital object, including a standard representation, identification and metadata.
A Digital Item may be a combination of resources like videos, audio tracks or images; metadata, such as descriptors and identifiers; and structure for describing the relationships between the resources.
It is becoming difficult for users of content to identify and interpret the different intellectual property rights that are associated with the elements of multimedia content. For this reason, new solutions are required for the access, delivery, management and protection of this content.
Digital Item Declaration
MPEG-21 proposes to facilitate a wide range of actions involving Digital Items so there is a need for a very precise description for defining exactly what constitutes such an item.
A Digital Item Declaration (DID) is a document that specifies the makeup, structure and organisation of a Digital Item. The purpose of the Digital Item Declaration is to describe a set of abstract terms and concepts, to form a useful model for defining what a Digital Item is. Following this model, a Digital Item is the digital representation of an object, which is managed, described or exchanged within the model.
Digital Item Identification
Digital Item Identification (DII) specification includes not only how to identify Digital Items uniquely but also to distinguish different types of them. These Identifiers are placed in a specific part of the Digital Item Declaration, which is the statement element, and they are associated with Digital Items.
Digital Items and their parts are identified by encapsulating uniform resource identifiers, which are a compact string of characters for identifying an abstract or physical resource.
The elements of a DID can have zero, one or more descriptors; each descriptor may contain a statement which can contain an identifier relating to the parent element of the stateme
|
https://en.wikipedia.org/wiki/History%20of%20the%20Dylan%20programming%20language
|
The history of the Dylan programming language is presented here first as continuous text. The second section gives a timeline overview of the history and presents several milestones and watersheds. The third section presents quotations related to the history of the Dylan programming language.
Introduction to the history
Dylan was originally developed by Apple Cambridge, then a part of the Apple Advanced Technology Group (ATG). Its initial goal was to produce a new systems-programming and application-development language for the Apple Newton PDA, but it soon became clear that this would take too much time. Walter Smith developed NewtonScript for scripting and application development, and systems programming was done in C. Development of Dylan continued for the Macintosh. The group produced an early Technology Release of its Apple Dylan product, but the group was dismantled due to internal restructuring before it could finish any real usable products.
According to Apple Confidential by Owen W. Linzmayer, the original code name for the Dylan project was Ralph, for Ralph Ellison, author of the novel Invisible Man, to reflect its status as a secret research project.
The intended killer application for Dylan was the Apple Newton PDA, but the implementation came too late for it, and the performance and size objectives were missed. So Dylan was retargeted toward a general computer programming audience. To compete in this market, it was decided to switch to infix notation.
Andrew Shalit (along with David A. Moon and Orca Starbuck) wrote the Dylan Reference Manual, which served as a basis for work at Harlequin (software company) and Carnegie Mellon University. When Apple Cambridge was closed, several members went to Harlequin, which produced a working compiler and development environment for Microsoft Windows. When Harlequin was bought and split up, some of the developers founded Functional Objects. In 2003, the firm contributed its repository to
|
https://en.wikipedia.org/wiki/David%20Emmanuel%20%28mathematician%29
|
David Emmanuel (31 January 1854 – 4 February 1941) was a Romanian Jewish mathematician and member of the Romanian Academy, considered to be the founder of the modern mathematics school in Romania.
Born in Bucharest, Emmanuel studied at Gheorghe Lazăr and Gheorghe Șincai high schools. In 1873 he went to Paris, where he received his Ph.D. in mathematics from the University of Paris (Sorbonne) in 1879 with a thesis on Study of abelian integrals of the third species, becoming the second Romanian to have a Ph.D. in mathematics from the Sorbonne (the first one was Spiru Haret). The thesis defense committee consisted of Victor Puiseux (advisor), Charles Briot, and Jean-Claude Bouquet.
In 1882, Emmanuel became a professor of superior algebra and function theory at the Faculty of Sciences of the University of Bucharest. Here, in 1888, he held the first courses on group theory and on Galois theory, and introduced set theory in Romanian education. Among his students were Anton Davidoglu, Alexandru Froda, Traian Lalescu, Grigore Moisil, , Miron Nicolescu, Octav Onicescu, Dimitrie Pompeiu, Simion Stoilow, and Gheorghe Țițeica. Emmanuel had an important role in the introduction of modern mathematics and of the rigorous approach to mathematics in Romania.
Emmanuel was the president of the first Congress of Romanian Mathematicians, held in 1929 in Cluj. He died in Bucharest in 1941.
A street in the Dorobanți neighborhood of Bucharest is named after him.
Publications
References
1854 births
1941 deaths
Scientists from Bucharest
Romanian Sephardi Jews
Gheorghe Lazăr National College (Bucharest) alumni
University of Paris alumni
Romanian mathematicians
Academic staff of the University of Bucharest
Honorary members of the Romanian Academy
Mathematical analysts
Romanian expatriates in France
People from the United Principalities of Moldavia and Wallachia
|
https://en.wikipedia.org/wiki/Java%20%28software%20platform%29
|
Java is a set of computer software and specifications that provides a software platform for developing application software and deploying it in a cross-platform computing environment. Java is used in a wide variety of computing platforms from embedded devices and mobile phones to enterprise servers and supercomputers. Java applets, which are less common than standalone Java applications, were commonly run in secure, sandboxed environments to provide many features of native applications through being embedded in HTML pages.
Writing in the Java programming language is the primary way to produce code that will be deployed as byte code in a Java virtual machine (JVM); byte code compilers are also available for other languages, including Ada, JavaScript, Python, and Ruby. In addition, several languages have been designed to run natively on the JVM, including Clojure, Groovy, and Scala. Java syntax borrows heavily from C and C++, but object-oriented features are modeled after Smalltalk and Objective-C. Java eschews certain low-level constructs such as pointers and has a very simple memory model where objects are allocated on the heap (while some implementations e.g. all currently supported by Oracle, may use escape analysis optimization to allocate on the stack instead) and all variables of object types are references. Memory management is handled through integrated automatic garbage collection performed by the JVM.
Latest version
The latest version is Java 21, a long-term support (LTS) version released in September 2023, while Java 17, released in September 2021, is also supported, one of a few older LTS releases down to Java 8 LTS. As an open source platform, Java has many distributors, including Amazon, IBM, Azul Systems, and AdoptOpenJDK. Distributions include Amazon Corretto, Zulu, AdoptOpenJDK, and Liberica. Oracle itself distributes Java 8 and also makes available e.g. Java 11, both currently supported LTS versions. Oracle (and others) "highly recomme
|
https://en.wikipedia.org/wiki/Reverse%20echo
|
Reverse echo and reverse reverb are sound effects created as the result of recording an echo or reverb effect of an audio recording played backwards. The original recording is then played forwards accompanied by the recording of the echoed or reverberated signal which now precedes the original signal. The process produces a swelling effect preceding and during playback.
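The process is straightforward to sketch digitally: reverse the signal, reverberate it, reverse the result, and mix. The NumPy example below uses a synthetic decaying-noise impulse response as a stand-in for a real reverb:

```python
import numpy as np

def reverse_reverb(dry: np.ndarray, ir: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """Reverse the signal, reverberate it (convolution with an impulse
    response), reverse again, and mix: the reverb tail now *precedes* the
    dry signal, producing the characteristic swell."""
    wet = np.convolve(dry[::-1], ir)[::-1]   # reversed tail ends up in front
    out = mix * wet
    out[-len(dry):] += (1.0 - mix) * dry     # align the dry signal at the end
    return out

# Synthetic demo: a click through a made-up exponentially decaying noise IR.
sr = 44_100
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 8, sr // 2))
dry = np.zeros(sr // 4)
dry[0] = 1.0
swelled = reverse_reverb(dry, ir)
```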
Development
Guitarist and producer Jimmy Page claims to have invented the effect, stating that he originally developed the method when recording the single "Ten Little Indians" with The Yardbirds in 1967. He later used it on a number of Led Zeppelin tracks, including "You Shook Me", "Whole Lotta Love", and their cover of "When the Levee Breaks". In an interview he gave to Guitar World magazine in 1993, Page explained:
Despite Page's claims, an earlier example of the effect is possibly heard towards the end of the 1966 Lee Mallory single "That's the Way It's Gonna Be", produced by Curt Boettcher.
Usage in music
Jimmy Page of Led Zeppelin used this effect in the bridge of "Whole Lotta Love" (1969). Another early example is found in "Alucard" from the eponymous Gentle Giant album (1970), although usage was somewhat common throughout the 1970s, for example in "Crying to the Sky" by Be-Bop Deluxe.
Reverse reverb is commonly used in shoegaze, particularly by such bands as My Bloody Valentine and Spacemen 3.
It is also often used as a lead-in to vocal passages in hardstyle music, and various forms of EDM and pop music. The reverse reverb is applied to the first word or syllable of the vocal for a build-up effect or other-worldly sound.
Metallica used the effect on James Hetfield's vocals in the song "Fade to Black" on their 1984 album Ride the Lightning. The effect was also employed by Genesis (on Phil Collins' snare drum) at the end of the song "Deep in the Motherlode" on the 1978 album ...And Then There Were Three....
Use in other media
Reverse reverb has been used in filmmaking and television produc
|
https://en.wikipedia.org/wiki/Monotone%20class%20theorem
|
In measure theory and probability, the monotone class theorem connects monotone classes and σ-algebras. The theorem says that the smallest monotone class containing an algebra of sets G is precisely the smallest σ-algebra containing G. It is used as a type of transfinite induction to prove many other theorems, such as Fubini's theorem.
Definition of a monotone class
A monotone class is a family (i.e. class) M of sets that is closed under countable monotone unions and also under countable monotone intersections. Explicitly, this means M has the following properties:
if A_1, A_2, … ∈ M and A_1 ⊆ A_2 ⊆ …, then ⋃_i A_i ∈ M, and
if B_1, B_2, … ∈ M and B_1 ⊇ B_2 ⊇ …, then ⋂_i B_i ∈ M.
Monotone class theorem for sets
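A standard formulation of the statement for sets reads:

```latex
% Monotone class theorem (for sets), as stated from the definitions above.
\textbf{Theorem.} Let $G$ be an algebra of subsets of a set $\Omega$ and let
$M(G)$ denote the smallest monotone class containing $G$. Then
\[
  M(G) = \sigma(G),
\]
where $\sigma(G)$ is the smallest $\sigma$-algebra containing $G$.
```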
Monotone class theorem for functions
Proof
The following argument originates in Rick Durrett's Probability: Theory and Examples.
Results and applications
As a corollary, if R is a ring of sets, then the smallest monotone class containing it coincides with the σ-ring generated by R.
By invoking this theorem, one can use monotone classes to help verify that a certain collection of subsets is a -algebra.
The monotone class theorem for functions can be a powerful tool that allows statements about particularly simple classes of functions to be generalized to arbitrary bounded and measurable functions.
See also
Citations
References
Families of sets
Theorems in measure theory
|
https://en.wikipedia.org/wiki/Importer%20%28computing%29
|
An importer is a software application that reads in a data file or metadata information in one format and converts it to another format via special algorithms (such as filters). An importer often is not an entire program by itself, but an extension to another program, implemented as a plug-in. When implemented in this way, the importer reads the data from the file and converts it into the hosting application's native format.
For example, the data file for a 3D model may be written from a modeler, such as 3D Studio Max. A game developer may then want to use that model in their game's editor. An importer, part of the editor, may read in the 3D Studio Max model and convert it to the game's native format so it can be used in game levels.
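A minimal sketch of the plug-in pattern in Python; the registry, the native mesh dictionary, and the simplified OBJ reader below are illustrative inventions, not any particular editor's API:

```python
import os

# A minimal plug-in-style importer registry. The host application defines a
# "native" in-memory format (here just a dict of vertex lists); each importer
# plug-in registers the file extensions it can read.
IMPORTERS = {}

def importer(*extensions):
    def register(func):
        for ext in extensions:
            IMPORTERS[ext] = func
        return func
    return register

@importer(".obj")
def import_wavefront_obj(path):
    """Convert a (simplified) Wavefront OBJ file into the host's native mesh dict."""
    vertices = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):  # "v x y z" vertex lines
                vertices.append(tuple(float(x) for x in line.split()[1:4]))
    return {"vertices": vertices}

def load(path):
    """Dispatch to the importer registered for the file's extension."""
    ext = os.path.splitext(path)[1]
    try:
        return IMPORTERS[ext](path)
    except KeyError:
        raise ValueError(f"no importer registered for {ext!r}")
```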
Importers are important tools in the video game industry. A plug-in or application that does the converse of an importer is called an exporter.
See also
Data scraping
Web scraping
Report mining
Mashup (web application hybrid)
Metadata
Comparison of feed aggregators
Video game development
|
https://en.wikipedia.org/wiki/Alexandru%20Ghika
|
Alexandru Ghika (June 22, 1902 – April 11, 1964) was a Romanian mathematician, founder of the Romanian school of functional analysis.
Life
He was born in Bucharest, into the Ghica family, the son of Ioan Ghika (1873–1949) and Elena Metaxa (1870–1951), and great-great-grandson of Grigore IV Ghica, Prince of Wallachia. He started his secondary studies at the Gheorghe Lazăr High School in Bucharest. In 1917, he left with his family for Paris, completing his secondary studies at the Lycée Louis-le-Grand in 1920. He then entered the University of Paris (the Sorbonne) with a major in mathematics, graduating in 1922. In 1929, he obtained a Ph.D. in mathematics from the Faculté des Sciences of the University of Paris.
After completing his doctorate, Ghika returned to Romania. In November 1932 he became assistant professor in the Mathematics Department of the University of Bucharest, working in the Function Theory section chaired by Dimitrie Pompeiu. On February 7, 1935, he was promoted to associate professor, and in 1945 he was named Full Professor and chair of the newly founded Functional Analysis section.
In 1935, Ghika was elected corresponding member of the Romania Academy of Sciences, being promoted to full member in 1938. In 1955 he became corresponding member of the Romanian Academy, and was promoted to full membership on March 20, 1963. In 1949, at the founding of the Institute of Mathematics of the Romanian Academy, he became the chair of the Functional Analysis section of that Institute, a position he held till his death.
Ghika married Elisabeta Angelescu (daughter of one-time Prime Minister Constantin Angelescu) on June 7, 1934. They had a son, Grigore (born November 7, 1936), who became a researcher at the Institute of Atomic Physics in Măgurele.
Alexandru Ghika died in Bucharest of lung cancer. He was buried at the Ghika-Tei church, founded in 1833 by Prince Grigore Alexandru Ghica.
In March, 2007, the heirs of the Ghika and Angelescu families won b
|
https://en.wikipedia.org/wiki/Exporter%20%28computing%29
|
An exporter is a software application that writes out a data file in a format different from its native format. It does this via special algorithms (such as filters). An exporter often is not an entire program by itself, but an extension to another program, implemented as a plug-in. When implemented in this way, the exporter converts the hosting application's native format into the desired format and writes it to the file.
For example, a 3D model may be written with a modeler, such as 3D Studio Max. A game developer may want to use that model in its game, but uses a custom format that is different from 3D Studio Max's native format. Using the exporter, the model can be saved in the developer's native format and then read into the game (or a tool) without any extra conversion. Using exporters, game tools can also export from their native format into formats for other applications (such as the modeler or a paint program, such as Photoshop).
Exporters are important tools in the video game industry. A plug-in or application that does the converse of an exporter is called an importer. Importers and exporters are often used in conjunction with one another in many software development environments.
Video game development
|
https://en.wikipedia.org/wiki/Cycle%20index
|
In combinatorial mathematics a cycle index is a polynomial in several variables which is structured in such a way that information about how a group of permutations acts on a set can be simply read off from the coefficients and exponents. This compact way of storing information in an algebraic form is frequently used in combinatorial enumeration.
Each permutation π of a finite set of objects partitions that set into cycles; the cycle index monomial of π is a monomial in variables a1, a2, … that describes the cycle type of this partition: the exponent of ai is the number of cycles of π of size i. The cycle index polynomial of a permutation group is the average of the cycle index monomials of its elements. The phrase cycle indicator is also sometimes used in place of cycle index.
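A short Python sketch that computes the cycle index of the cyclic group C4 directly from this definition (the choice of group and the output formatting are illustrative):

```python
from collections import Counter
from fractions import Fraction

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a tuple: perm[i] is the image of i."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

# The cyclic group C4 acting on {0, 1, 2, 3} by rotation.
group = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]

# Average the cycle index monomials over the group.
counts = Counter(cycle_type(p) for p in group)
terms = []
for lengths, count in sorted(counts.items()):
    monomial = "*".join(f"a{l}^{n}" if n > 1 else f"a{l}"
                        for l, n in sorted(Counter(lengths).items()))
    terms.append(f"{Fraction(count, len(group))} {monomial}")
print(" + ".join(terms))  # 1/4 a1^4 + 1/4 a2^2 + 1/2 a4
```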
Knowing the cycle index polynomial of a permutation group, one can enumerate equivalence classes due to the group's action. This is the main ingredient in the Pólya enumeration theorem. Performing formal algebraic and differential operations on these polynomials and then interpreting the results combinatorially lies at the core of species theory.
Permutation groups and group actions
A bijective map from a set X onto itself is called a permutation of X, and the set of all permutations of X forms a group under the composition of mappings, called the symmetric group of X, and denoted Sym(X). Every subgroup of Sym(X) is called a permutation group of degree |X|. Let G be an abstract group with a group homomorphism φ from G into Sym(X). The image, φ(G), is a permutation group. The group homomorphism can be thought of as a means for permitting the group G to "act" on the set X (using the permutations associated with the elements of G). Such a group homomorphism is formally called a group action and the image of the homomorphism is a permutation representation of G. A given group can have many different permutation representations, corresponding to different actions.
Suppose that group G acts on
|
https://en.wikipedia.org/wiki/Circumscriptional%20name
|
In biological classification, circumscriptional names are taxon names that are not governed by the ICZN and are defined by the particular set of members included. Circumscriptional names are used mainly for taxa above the family-group level (e.g. orders or classes), but can also be used for taxa of any rank, as well as for rank-less taxa.
Non-typified names other than those of the genus- or species-group constitute the majority of generally accepted names of taxa higher than superfamily. The ICZN regulates names of taxa up to family-group rank (i.e. superfamily). There are no generally accepted rules for naming higher taxa (orders, classes, phyla, etc.). Under the circumscription-based (circumscriptional) approach to nomenclature, a circumscriptional name is associated with a certain circumscription of a taxon, without regard to its rank or position.
Some authors advocate introducing a mandatory standardized typified nomenclature of higher taxa. They suggest all names of higher taxa to be derived in the same manner as family-group names, i.e. by modifying names of type genera with endings to reflect the rank. There is no consensus on what such higher rank endings should be. A number of established practices exist as to the use of typified names of higher taxa, depending on animal group.
See also
Descriptive botanical name, optional forms still used in botany for ranks above family and for a few family names
References
Kluge, N. 2000. "Sovremennaya Sistematika Nasekomyh ..." [Modern Systematics of Insects. Part I. Principles of Systematics of Living Organisms and General System of Insects, with Classification of Primary Wingless and Paleopterous Insects] - S.-Petersburg, Lan', 2000, 333 pp.; (c) N.Ju. Kluge, 2000; (c) "Lan'", 2000.
Kluge N.J. 2010. Circumscriptional names of higher taxa in Hexapoda. // Bionomina, 1: 15–55. http://www.mapress.com/bionomina/content/2010/f/bn00001p055.pdf
External links
Kluge's PRINCIPLES OF NOMENCLATURE of ZOOLOGICAL TAXA
NOMINA CIR
|
https://en.wikipedia.org/wiki/Variable%20air%20volume
|
Variable air volume (VAV) is a type of heating, ventilating, and/or air-conditioning (HVAC) system. Unlike constant air volume (CAV) systems, which supply a constant airflow at a variable temperature, VAV systems vary the airflow at a constant or varying temperature. The advantages of VAV systems over constant-volume systems include more precise temperature control, reduced compressor wear, lower energy consumption by system fans, less fan noise, and additional passive dehumidification.
Box technology
The simplest form of a VAV box is the single-duct terminal configuration, which is connected to a single supply air duct that delivers treated air from an air-handling unit (AHU) to the space the box is serving. This configuration can deliver air at variable temperatures or air volumes to meet the heating and cooling loads as well as the ventilation rates required by the space.
Most commonly, VAV boxes are pressure independent, meaning the VAV box uses controls to deliver a constant flow rate regardless of variations in system pressure experienced at the VAV inlet. This is accomplished by an airflow sensor placed at the VAV inlet, which opens or closes the damper within the VAV box to adjust the airflow. The difference between a CAV and a VAV box is that a VAV box can be programmed to modulate between different flow-rate setpoints depending on the conditions of the space. The VAV box is programmed to operate between a minimum and maximum airflow setpoint and can modulate the flow of air depending on occupancy, temperature, or other control parameters; a CAV box can only operate at a constant maximum value or in an "off" state. This difference means the VAV box can provide tighter space temperature control while using much less energy. Another reason why VAV boxes save more energy is that they are coupled with variable-speed drives on fans, so the fans can ramp down when the VAV boxes are experiencing part-load conditions.
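A minimal sketch of this cascade control scheme; the gains, limits, and example numbers are illustrative assumptions, not values from any controls standard:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def airflow_setpoint(zone_temp, setpoint, vdot_min, vdot_max, gain=0.25):
    """Outer (temperature) loop of a pressure-independent cooling VAV box:
    proportional reset of the airflow setpoint between the minimum and
    maximum flow limits. Gains and limits here are illustrative only."""
    error = zone_temp - setpoint              # K; positive = too warm
    fraction = clamp(error * gain, 0.0, 1.0)
    return vdot_min + fraction * (vdot_max - vdot_min)

def damper_command(flow_measured, flow_setpoint, damper_pos, gain=0.005):
    """Inner (flow) loop: nudge the damper so the flow sensed at the VAV
    inlet tracks the setpoint regardless of duct pressure fluctuations."""
    return clamp(damper_pos + gain * (flow_setpoint - flow_measured), 0.0, 1.0)

# One control step: a zone 1.5 K too warm served by a 100-500 L/s box.
sp = airflow_setpoint(zone_temp=24.5, setpoint=23.0, vdot_min=100, vdot_max=500)
pos = damper_command(flow_measured=180, flow_setpoint=sp, damper_pos=0.3)
print(f"airflow setpoint {sp:.0f} L/s, damper position {pos:.2f}")  # 250 L/s, 0.65
```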
It is common for VAV boxes to i
|
https://en.wikipedia.org/wiki/CCU%20delivery
|
Customer Configuration Updating (CCU) is a software development method for structuring the processes of providing customers with new versions of products and updates. The method was developed by researchers at Utrecht University.
The delivery phase of the CCU method concerns the process which starts at the moment a product is finished until the actual shipping of the product to the customer.
Introduction to the delivery process
As described in the general entry of CCU, the delivery phase is the second phase of the CCU method. In figure one the CCU method is depicted. The phases of CCU that are not covered in this article are concealed by a transparent grey rectangle.
As can be seen in figure one, the delivery phase is in between the release phase and the deployment phase. A software vendor develops and releases a software product and afterwards it has to be transported to the customer. This phase is the delivery process. This process is highly complex because the vendor often has to deal with a product which has multiple versions, variable features, dependency on external products, and different kinds of distribution options. The CCU method helps the software vendor in structuring this process.
In figure 2, the process-data diagram of the delivery phase within CCU is depicted. This way of modeling was invented by Saeki (2003). On the left side you can see the meta-process model and on the right side the meta-data model. The two models are linked to each other by the relationships visualized as dotted lines. The meta-data model (right side) shows the concepts involved in the process and how the concepts are related to each other. For instance, it shows that a package consists of multiple parts: the software package, system description, manual, and license and management information. The numbers between the relations indicate in what quantity the concepts are related. For example, the "1..1" between package and software package means that
|
https://en.wikipedia.org/wiki/Poincar%C3%A9%20inequality
|
In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality.
Statement of the inequality
The classical Poincaré inequality
Let p be such that 1 ≤ p < ∞ and let Ω be a subset of ℝⁿ that is bounded in at least one direction. Then there exists a constant C, depending only on Ω and p, so that, for every function u of the Sobolev space W₀^{1,p}(Ω) of zero-trace (a.k.a. zero on the boundary) functions,

‖u‖_{L^p(Ω)} ≤ C ‖∇u‖_{L^p(Ω)}.
Poincaré–Wirtinger inequality
Assume that 1 ≤ p ≤ ∞ and that Ω is a bounded connected open subset of the n-dimensional Euclidean space ℝⁿ with a Lipschitz boundary (i.e., Ω is a Lipschitz domain). Then there exists a constant C, depending only on Ω and p, such that for every function u in the Sobolev space W^{1,p}(Ω),

‖u − u_Ω‖_{L^p(Ω)} ≤ C ‖∇u‖_{L^p(Ω)},

where

u_Ω = (1/|Ω|) ∫_Ω u(y) dy

is the average value of u over Ω, with |Ω| standing for the Lebesgue measure of the domain Ω. When Ω is a ball, the above inequality is called a (p, p)-Poincaré inequality; for more general domains Ω, the above is more familiarly known as a Sobolev inequality.
The necessity to subtract the average value can be seen by considering constant functions for which the derivative is zero while, without subtracting the average, we can have the integral of the function as large as we wish. There are other conditions instead of subtracting the average that we can require in order to deal with this issue with constant functions, for example, requiring trace zero, or subtracting the average over some proper subset of the domain. The constant C in the Poincare inequality may be different from condition to condition. Also note that the issue is not just the constant functions, because it is the same as saying that adding a constant value to a f
|
https://en.wikipedia.org/wiki/Wavelet%20modulation
|
Wavelet modulation, also known as fractal modulation, is a modulation technique that makes use of wavelet transformations to represent the data being transmitted. One of the objectives of this type of modulation is to send data at multiple rates over a channel that is unknown. If the channel is not clear for one specific bit rate, meaning that the signal will not be received, the signal can be sent at a different bit rate where the signal-to-noise ratio is higher.
References
Quantized radio modulation modes
Wavelets
|
https://en.wikipedia.org/wiki/Enumerated%20type
|
In computer programming, an enumerated type (also called enumeration, enum, or factor in the R programming language, and a categorical variable in statistics) is a data type consisting of a set of named values called elements, members, enumerals, or enumerators of the type. The enumerator names are usually identifiers that behave as constants in the language. An enumerated type can be seen as a degenerate tagged union of unit type. A variable that has been declared as having an enumerated type can be assigned any of the enumerators as a value. In other words, an enumerated type has values that are different from each other, and that can be compared and assigned, but are not specified by the programmer as having any particular concrete representation in the computer's memory; compilers and interpreters can represent them arbitrarily.
For example, the four suits in a deck of playing cards may be four enumerators named Club, Diamond, Heart, and Spade, belonging to an enumerated type named suit. If a variable V is declared having suit as its data type, one can assign any of those four values to it.
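In Python's enum module, for example, this type could be declared as follows (a minimal sketch; the integer values assigned to the enumerators are arbitrary):

```python
from enum import Enum

class Suit(Enum):
    """The four playing-card suits as an enumerated type."""
    CLUB = 1
    DIAMOND = 2
    HEART = 3
    SPADE = 4

v = Suit.HEART                 # a variable of the enumerated type
print(v is Suit.HEART)         # True: enumerators compare by identity
print(v == Suit.SPADE)         # False: the values are distinct from each other
print([s.name for s in Suit])  # ['CLUB', 'DIAMOND', 'HEART', 'SPADE']
```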
Although the enumerators are usually distinct, some languages may allow the same enumerator to be listed twice in the type's declaration. The names of enumerators need not be semantically complete or compatible in any sense. For example, an enumerated type called color may be defined to consist of the enumerators Red, Green, Zebra, Missing, and Bacon. In some languages, the declaration of an enumerated type also intentionally defines an ordering of its members (High, Medium and Low priorities); in others, the enumerators are unordered (English, French, German and Spanish supported languages); in others still, an implicit ordering arises from the compiler concretely representing enumerators as integers.
Some enumerator types may be built into the language. The Boolean type, for example, is often a pre-defined enumeration of the values False and True. A unit type consisting
|