source | text
---|---
https://en.wikipedia.org/wiki/Code%20Access%20Security
|
Code Access Security (CAS), in the Microsoft .NET framework, is Microsoft's solution to prevent untrusted code from performing privileged actions. When the CLR loads an assembly it will obtain evidence for the assembly and use this to identify the code group that the assembly belongs to. A code group contains a permission set (one or more permissions). Code that performs a privileged action will perform a code access demand which will cause the CLR to walk up the call stack and examine the permission set granted to the assembly of each method in the call stack.
The code groups and permission sets are determined by the administrator of the machine who defines the security policy.
Microsoft considers CAS obsolete and discourages its use. It is also not available in .NET Core and .NET.
Evidence
Evidence can be any information associated with an assembly. The default types of evidence used by .NET code access security are:
Application directory: the directory in which an assembly resides.
Publisher: the assembly's publisher's digital signature (requires the assembly to be signed via Authenticode).
URL: the complete URL from which the assembly was launched.
Site: the hostname of the URL/Remote Domain/VPN.
Zone: the security zone in which the assembly resides.
Hash: a cryptographic hash of the assembly, which identifies a specific version.
Strong Name: a combination of the assembly name, version and public key of the signing key used to sign the assembly. The signing key is not an X.509 certificate, but a custom key pair generated by the strong naming tool, SN.EXE or by Visual Studio.
A developer can use custom evidence (so-called assembly evidence) but this requires writing a security assembly and in version 1.1 of .NET this facility does not work.
Evidence based on a hash of the assembly is easily obtained in code. For example, in C#, evidence may be obtained by the following expression:
this.GetType().Assembly.Evidence
Policy
A policy is a set of expressions t
|
https://en.wikipedia.org/wiki/Fujiwhara%20effect
|
The Fujiwhara effect, sometimes referred to as the Fujiwara effect, Fujiw(h)ara interaction or binary interaction, is a phenomenon that occurs when two nearby cyclonic vortices move around each other and close the distance between the circulations of their corresponding low-pressure areas. The effect is named after Sakuhei Fujiwhara, the Japanese meteorologist who initially described the effect. Binary interaction of smaller circulations can cause the development of a larger cyclone, or cause two cyclones to merge into one. Extratropical cyclones typically engage in binary interaction when within of one another, while tropical cyclones typically interact within of each other.
Description
When cyclones are in proximity of one another, their centers will circle each other cyclonically (counter-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere) about a point between the two systems due to their cyclonic wind circulations. The two vortices will be attracted to each other, and eventually spiral into the center point and merge. It has not been agreed upon whether this is due to the divergent portion of the wind or vorticity advection. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will circle around it. The effect is named after Sakuhei Fujiwhara, the Japanese meteorologist who initially described it in a 1921 paper about the motion of vortices in water.
Tropical cyclones
Tropical cyclones can form when smaller circulations within the Intertropical Convergence Zone merge. The effect is often mentioned in relation to the motion of tropical cyclones, although the final merging of the two storms is uncommon. The effect becomes noticeable when they approach within of each other. Rotation rates within binary pairs accelerate when tropical cyclones close within of each other. Merger of the two systems (or shearing out of one of the pair) becomes realized when they ar
|
https://en.wikipedia.org/wiki/Nichols%20plot
|
The Nichols plot is a plot used in signal processing and control design, named after American engineer Nathaniel B. Nichols.
Use in control design
Given a transfer function $G(s)$,
with the closed-loop transfer function defined as
$T(s) = \frac{G(s)}{1 + G(s)},$
the Nichols plot displays $20 \log_{10}|G(j\omega)|$ versus $\arg G(j\omega)$ (in degrees). Loci of constant $|T(j\omega)|$ and $\arg T(j\omega)$ are overlaid to allow the designer to obtain the closed-loop transfer function directly from the open-loop transfer function. Thus, the frequency $\omega$ is the parameter along the curve. This plot may be compared to the Bode plot, in which the two inter-related graphs, $20 \log_{10}|G(j\omega)|$ versus $\omega$ and $\arg G(j\omega)$ versus $\omega$, are plotted.
In feedback control design, the plot is useful for assessing the stability and robustness of a linear system. This application of the Nichols plot is central to the quantitative feedback theory (QFT) of Horowitz and Sidi, which is a well known method for robust control system design.
In most cases, $\arg G(j\omega)$ refers to the phase of the system's response. Although similar to a Nyquist plot, a Nichols plot is plotted in a Cartesian coordinate system, while a Nyquist plot is plotted in a polar coordinate system.
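As an illustration, the following minimal Python sketch traces the open-loop gain-versus-phase curve that a Nichols chart is built on; the example transfer function $G(s) = 4/\bigl(s(s+1)(s+2)\bigr)$ and the plotting choices are assumptions of this sketch, not taken from the article.

import numpy as np
import matplotlib.pyplot as plt

# Example open-loop transfer function G(s) = 4 / (s (s + 1) (s + 2)), evaluated at s = jw.
w = np.logspace(-2, 2, 1000)                    # frequency is the parameter along the curve
s = 1j * w
G = 4.0 / (s * (s + 1) * (s + 2))

gain_db = 20 * np.log10(np.abs(G))              # open-loop gain in dB
phase_deg = np.degrees(np.unwrap(np.angle(G)))  # open-loop phase in degrees

plt.plot(phase_deg, gain_db)                    # Nichols-style axes: phase on x, gain (dB) on y
plt.xlabel("open-loop phase (degrees)")
plt.ylabel("open-loop gain (dB)")
plt.grid(True)
plt.show()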
See also
Hall circles
Bode plot
Nyquist plot
Transfer function
References
External links
Mathematica function for creating the Nichols plot
Plots (graphics)
Signal processing
Classical control theory
|
https://en.wikipedia.org/wiki/Least%20mean%20squares%20filter
|
Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is only adapted based on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.
Problem formulation
The picture shows the various parts of the filter. $x(n)$ is the input signal, which is then transformed by an unknown filter $h(n)$ that we wish to match using $\hat h(n)$. The output from the unknown filter is $y(n)$, which is then interfered with a noise signal $v(n)$, producing $d(n) = y(n) + v(n)$. Then the error signal $e(n) = d(n) - \hat y(n)$ is computed, and it is fed back to the adaptive filter, to adjust its parameters in order to minimize the mean square of the error signal $e(n)$.
Relationship to the Wiener filter
The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution for input matrix $\mathbf{X}$ and output vector $\boldsymbol y$
is
$\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\boldsymbol y .$
The FIR least mean squares filter is related to the Wiener filter, but minimizing the error criterion of the former does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution.
Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system $h(n)$ is to be identified, and the adaptive filter attempts to adapt the filter $\hat h(n)$ to make it as close as possible to $h(n)$, while using only observable signals $x(n)$, $d(n)$ and $e(n)$; but $y(n)$, $v(n)$ and $h(n)$ are not directly observable. Its solution is closely related to the Wiener filter.
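The following minimal sketch illustrates the LMS update $\hat{\mathbf h} \leftarrow \hat{\mathbf h} + \mu\, e(n)\, \mathbf x(n)$ for such a system-identification setup; the step size, filter length, and the randomly generated "unknown" system are assumptions of the sketch, not values from the article.

import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # number of filter taps
mu = 0.05                               # step size (must be small enough for stability)

h_true = rng.standard_normal(N)         # "unknown" system to identify (for the demo only)
x = rng.standard_normal(5000)           # input signal
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))  # desired = system output + noise

h_hat = np.zeros(N)
for n in range(N, len(x)):
    x_n = x[n - N + 1:n + 1][::-1]      # most recent N input samples, newest first
    e = d[n] - h_hat @ x_n              # error between desired and filter output
    h_hat = h_hat + mu * e * x_n        # LMS update: steepest descent on the instantaneous squared error

print(np.round(h_true, 3))              # after adaptation, h_hat approaches h_true
print(np.round(h_hat, 3))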
Definition of symbols
$n$ is the number of the current input sample
$p$ is the number of filter taps
$\{\cdot\}^{H}$ (Hermitian transpose or conjugate transpose)
$\hat{\mathbf h}(n)$ is the estimated filter; interpret it as the estimation of the filter coefficients after s
|
https://en.wikipedia.org/wiki/Recursive%20least%20squares%20filter
|
Recursive least squares (RLS) is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least squares cost function relating to the input signals. This approach is in contrast to other algorithms such as the least mean squares (LMS) that aim to reduce the mean square error. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. However, this benefit comes at the cost of high computational complexity.
Motivation
RLS was discovered by Gauss but lay unused or ignored until 1950 when Plackett rediscovered the original work of Gauss from 1821. In general, the RLS can be used to solve any problem that can be solved by adaptive filters. For example, suppose that a signal is transmitted over an echoey, noisy channel that causes it to be received as
where $v(n)$ represents additive noise. The intent of the RLS filter is to recover the desired signal $d(n)$ by use of a $p+1$-tap FIR filter, $\mathbf w$:
where $\mathbf x(n)$ is the column vector containing the $p+1$ most recent samples of $x(n)$. The estimate of the recovered desired signal is
The goal is to estimate the parameters of the filter $\mathbf w$, and at each time $n$ we refer to the current estimate as $\mathbf w_n$ and the adapted least-squares estimate by $\mathbf w_{n+1}$. $\mathbf w_n$ is also a column vector, as shown below, and the transpose, $\mathbf w_n^{\mathrm T}$, is a row vector. The matrix product $\mathbf w_n^{\mathrm T}\mathbf x(n)$ (which is the dot product of $\mathbf w_n$ and $\mathbf x(n)$) is $\hat d(n)$, a scalar. The estimate is "good" if $\hat d(n) - d(n)$ is small in magnitude in some least squares sense.
As time evolves, it is desired to avoid completely redoing the least squares algorithm to find the new estimate for $\mathbf w_{n+1}$, in terms of $\mathbf w_n$.
The benefit of the RLS algorithm is that there is no need to invert matrices, thereby saving computational cost. Another advantage is that it provides intuition behind such results as the Kalman filter.
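A minimal sketch of one RLS update, in the usual matrix-inversion-lemma form that avoids an explicit matrix inversion, is shown below; the symbol names and the forgetting factor lam follow common convention and are assumptions of the sketch.

import numpy as np

def rls_update(w, P, x_n, d_n, lam=0.99):
    # w: current coefficient estimate, P: inverse (weighted) correlation matrix,
    # x_n: regressor vector of the most recent input samples, d_n: desired sample,
    # lam: forgetting factor (0 < lam <= 1).
    Px = P @ x_n
    k = Px / (lam + x_n @ Px)        # gain vector
    e = d_n - w @ x_n                # a priori estimation error
    w = w + k * e                    # coefficient update
    P = (P - np.outer(k, Px)) / lam  # update the inverse correlation matrix
    return w, P, e

# Typical use: w = np.zeros(p); P = delta * np.eye(p) with a large delta,
# then call rls_update once per incoming sample pair (x_n, d_n).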
Discussion
The idea behind RLS filters is to
|
https://en.wikipedia.org/wiki/Common%20Indexing%20Protocol
|
The Common Indexing Protocol (CIP) was an attempt in the IETF working group FIND during the mid-1990s to define a protocol for exchanging index information between directory services.
In the X.500 Directory model, searches scoped near the root of the tree (e.g. at a particular country) were problematic to implement, as potentially hundreds or thousands of directory servers would need to be contacted
in order to handle that query.
The indexes contained summaries or subsets of information about individuals and organizations represented in a white pages schema. By merging subsets of information from multiple sources, it was hoped that an index server holding that subset could be able to process a query more efficiently by chaining it only to some of the sources: those sources which did not hold information would not be contacted. For example, if a server holding the base entry for a particular country were provided with a list of names of all the people in all the entries in that country subtree, then that server would be able to process a query searching for a person with
a particular name by only chaining it to those servers which held data about such a person.
The protocol evolved from earlier work developing WHOIS++, and was intended to be capable of interconnecting
services from both the evolving WHOIS and LDAP activities.
This protocol has not seen much recent deployment, as WHOIS and LDAP environments have followed separate evolution paths. WHOIS deployments are typically in domain name registrars, and its data management issues have been addressed through specifications for domain name registry interconnection such as CRISP. In contrast, enterprises that manage employee, customer or student identity data in an LDAP directory have looked to federation protocols for interconnection between organizations.
RFCs
The Architecture of the Common Indexing Protocol (CIP)
MIME Object Definitions for the Common Indexing Protocol (CIP)
CIP Transport Pro
|
https://en.wikipedia.org/wiki/Set%20packing
|
Set packing is a classical NP-complete problem in computational complexity theory and combinatorics, and was one of Karp's 21 NP-complete problems. Suppose one has a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint (in other words, no two of them share an element).
More formally, given a universe $\mathcal U$ and a family $\mathcal S$ of subsets of $\mathcal U$, a packing is a subfamily $\mathcal C \subseteq \mathcal S$ of sets such that all sets in $\mathcal C$ are pairwise disjoint. The size of the packing is $|\mathcal C|$. In the set packing decision problem, the input is a pair $(\mathcal U, \mathcal S)$ and an integer $k$; the question is whether
there is a set packing of size $k$ or more. In the set packing optimization problem, the input is a pair $(\mathcal U, \mathcal S)$, and the task is to find a set packing that uses the most sets.
The problem is clearly in NP since, given $k$ subsets, we can easily verify that they are pairwise disjoint in polynomial time.
The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list. It is a maximization problem that can be formulated naturally as an integer linear program, belonging to the class of packing problems.
Integer linear program formulation
The maximum set packing problem can be formulated as the following integer linear program.
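Using the notation above, with one binary variable $x_S$ for each set $S \in \mathcal S$ (indicating whether $S$ is chosen for the packing), a standard way to write this program is:

\begin{aligned}
\text{maximize}   \quad & \sum_{S \in \mathcal S} x_S \\
\text{subject to} \quad & \sum_{S \in \mathcal S :\, e \in S} x_S \le 1 && \text{for every element } e \in \mathcal U, \\
                        & x_S \in \{0, 1\} && \text{for every set } S \in \mathcal S .
\end{aligned}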
Complexity
The set packing problem is not only NP-complete, but its optimization version (the general maximum set packing problem) has been proven to be as difficult to approximate as the maximum clique problem; in particular, it cannot be approximated within any constant factor. The best known algorithm approximates it within a factor of . The weighted variant can be approximated equally well.
Packing sets with a bounded size
The problem does have a variant which is more tractable. Given any positive integer k≥3, the k-set packing problem is a variant of set packing in which each set contains at most k elements.
When k=1, the problem is trivial. When k=2, the problem is equivalent to finding
|
https://en.wikipedia.org/wiki/Poly1305
|
Poly1305 is a universal hash family designed by Daniel J. Bernstein for use in cryptography.
As with any universal hash family, Poly1305 can be used as a one-time message authentication code to authenticate a single message using a secret key shared between sender and recipient,
similar to the way that a one-time pad can be used to conceal the content of a single message using a secret key shared between sender and recipient.
Originally Poly1305 was proposed as part of Poly1305-AES,
a Carter–Wegman authenticator
that combines the Poly1305 hash with AES-128 to authenticate many messages using a single short key and distinct message numbers.
Poly1305 was later applied with a single-use key generated for each message using XSalsa20 in the NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher,
and then using ChaCha in the ChaCha20-Poly1305 authenticated cipher
deployed in TLS on the internet.
Description
Definition of Poly1305
Poly1305 takes a 16-byte secret key $r$ and an $L$-byte message $m$ and returns a 16-byte hash $\operatorname{Poly1305}_r(m)$.
To do this, Poly1305:
Interprets $r$ as a little-endian 16-byte integer.
Breaks the message $m$ into consecutive 16-byte chunks.
Interprets the 16-byte chunks as 17-byte little-endian integers by appending a 1 byte to every 16-byte chunk, to be used as coefficients of a polynomial.
Evaluates the polynomial at the point $r$ modulo the prime $2^{130} - 5$.
Reduces the result modulo $2^{128}$, encoded in little-endian, to return a 16-byte hash.
The coefficients $c_i$ of the polynomial, where $1 \le i \le q$ and $q = \lceil L/16 \rceil$ is the number of chunks, are:
$c_i = m[16i - 16] + 2^{8} m[16i - 15] + 2^{16} m[16i - 14] + \cdots + 2^{120} m[16i - 1] + 2^{128},$
with the exception that, if the final chunk is shorter than 16 bytes, the terminating $2^{128}$ term (the appended 1 byte) immediately follows the last available message byte instead.
The secret key $r$ is restricted to have the bytes $r[3], r[7], r[11], r[15] \in \{0, 1, \dots, 15\}$, i.e., to have their top four bits clear; and to have the bytes $r[4], r[8], r[12] \in \{0, 4, 8, \dots, 252\}$, i.e., to have their bottom two bits clear.
Thus there are $2^{106}$ distinct possible values of $r$.
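The steps above can be condensed into a few lines of Python; this is a minimal, non-constant-time sketch for illustration only (the function name and the bit mask used to express the restriction on $r$ are conveniences of the sketch, not part of a specification).

def poly1305_hash(r_key16: bytes, msg: bytes) -> bytes:
    # Interpret r as a little-endian integer; the mask clears the bits that the
    # key restriction above requires to be zero.
    r = int.from_bytes(r_key16, "little") & 0x0ffffffc0ffffffc0ffffffc0fffffff
    p = (1 << 130) - 5                     # the prime 2^130 - 5
    h = 0
    for i in range(0, len(msg), 16):
        # Each 16-byte (or shorter final) chunk gets a 1 byte appended and is read
        # as a little-endian integer, giving the next polynomial coefficient.
        c = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
        h = (h + c) * r % p                # Horner evaluation of the polynomial at r
    return (h % (1 << 128)).to_bytes(16, "little")   # reduce mod 2^128, little-endian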
Use as a one-time authenticator
If $s$ is a secret 16-byte string interpreted as a little-endian integer, then
$a = \bigl(\operatorname{Poly1305}_r(m) + s\bigr) \bmod 2^{128}$ is called the authenticator for the message $m$.
If a sender and recipient share the 32-byte secret key in advance, chosen uniformly at random, th
|
https://en.wikipedia.org/wiki/Conway%27s%20Soldiers
|
Conway's Soldiers or the checker-jumping problem is a one-person mathematical game or puzzle devised and analyzed by mathematician John Horton Conway in 1961. A variant of peg solitaire, it takes place on an infinite checkerboard. The board is divided by a horizontal line that extends indefinitely. Above the line are empty cells and below the line are an arbitrary number of game pieces, or "soldiers". As in peg solitaire, a move consists of one soldier jumping over an adjacent soldier into an empty cell, vertically or horizontally (but not diagonally), and removing the soldier which was jumped over. The goal of the puzzle is to place a soldier as far above the horizontal line as possible.
Conway proved that, regardless of the strategy used, there is no finite sequence of moves that will allow a soldier to advance more than four rows above the horizontal line. His argument uses a carefully chosen weighting of cells (involving the golden ratio), and he proved that the total weight can only decrease or remain constant. This argument has been reproduced in a number of popular math books.
Simon Tatham and Gareth Taylor have shown that the fifth row can be reached via an infinite series of moves. If diagonal jumps are allowed, the 8th row can be reached, but not the 9th row. In the n-dimensional version of the game, the highest row that can be reached is ; Conway's weighting argument demonstrates that row cannot be reached.
Conway's proof that the fifth row is inaccessible
Notation and definitions
Define $\varphi = \tfrac{\sqrt{5} - 1}{2}$. (In other words, $\varphi$ here denotes the reciprocal of the golden ratio.) Observe that $\varphi^2 = 1 - \varphi$.
Let the target square be labeled with the value $1$, and all other squares be labeled with the value $\varphi^n$, where $n$ is the Manhattan distance to the target square. Then we can compute the "score" of a configuration of soldiers by summing the values of the soldiers' squares. For example, a configuration of only two soldiers placed so as to reach the target square on the next jump would have s
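As a small illustration of this scoring (a sketch; the coordinates and the example configuration are chosen here, not taken from the text):

# Score of a configuration: sum of phi**d over all soldiers, where d is the
# Manhattan distance from the soldier's square to the target square.
phi = (5 ** 0.5 - 1) / 2                  # note that phi**2 == 1 - phi (up to rounding)

def score(soldiers, target=(0, 0)):
    tx, ty = target
    return sum(phi ** (abs(x - tx) + abs(y - ty)) for (x, y) in soldiers)

# Two soldiers lined up to jump into the target on the next move:
print(score([(0, -1), (0, -2)]))          # phi + phi**2 = 1 (up to rounding), the target square's own value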
|
https://en.wikipedia.org/wiki/Spacetime%20symmetries
|
Spacetime symmetries are features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems. Spacetime symmetries are used in the study of exact solutions of Einstein's field equations of general relativity. Spacetime symmetries are distinguished from internal symmetries.
Physical motivation
Physical problems are often investigated and solved by noticing features which have some form of symmetry. For example, in the Schwarzschild solution, the role of spherical symmetry is important in deriving the Schwarzschild solution and deducing the physical consequences of this symmetry (such as the nonexistence of gravitational radiation in a spherically pulsating star). In cosmological problems, symmetry plays a role in the cosmological principle, which restricts the type of universes that are consistent with large-scale observations (e.g. the Friedmann–Lemaître–Robertson–Walker (FLRW) metric). Symmetries usually require some form of preserving property, the most important of which in general relativity include the following:
preserving geodesics of the spacetime
preserving the metric tensor
preserving the curvature tensor
These and other symmetries will be discussed below in more detail. This preservation property which symmetries usually possess (alluded to above) can be used to motivate a useful definition of these symmetries themselves.
Mathematical definition
A rigorous definition of symmetries in general relativity has been given by Hall (2004). In this approach, the idea is to use (smooth) vector fields whose local flow diffeomorphisms preserve some property of the spacetime. (Note that one should emphasize in one's thinking this is a diffeomorphism—a transformation on a differential element. The implication is that the behavior of objects with extent may not be as manifestly symmetric.) This preserving property of the diffeomorphisms is made precise as follows. A
|
https://en.wikipedia.org/wiki/Adaptive%20control
|
Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain (Cao, Ma and Xu, "Adaptive Control Theory and Applications", Journal of Control Science and Engineering, vol. 2012, 2012). For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with the control law changing itself.
Parameter estimation
The foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in concurrent-learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
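For a linearly parameterized model $y = \theta^{\mathsf T}\varphi$, a gradient-descent update law looks roughly like the following sketch; the symbols, the gain gamma, and the normalization term are illustrative assumptions, not taken from the article.

import numpy as np

def gradient_estimator_step(theta_hat, phi, y, gamma=0.1, normalize=True):
    # theta_hat: current parameter estimate, phi: regressor vector,
    # y: measured output, gamma: adaptation gain.
    e = y - theta_hat @ phi                 # prediction error with the current estimate
    g = gamma * e * phi                     # gradient-descent direction on the squared error
    if normalize:
        g /= 1.0 + phi @ phi                # normalization improves robustness to large regressors
    return theta_hat + g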
Classification of adaptive control techniques
In general, one should distinguish between:
Feedforward adaptive control
Feedback adaptive control
as well as between
Direct methods
Indirect methods
Hybrid methods
Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller par
|
https://en.wikipedia.org/wiki/Biological%20Innovation%20for%20Open%20Society
|
BiOS (Biological Open Source/Biological Innovation for Open Society) is an international initiative to foster innovation and freedom to operate in the biological sciences. BiOS was officially launched on 10 February 2005 by Cambia, an independent, international non-profit organization dedicated to democratizing innovation. Its intention is to initiate new norms and practices for creating tools for biological innovation, using binding covenants to protect and preserve their usefulness, while allowing diverse business models for the application of these tools.
As described by Richard Anthony Jefferson, CEO of Cambia, the organization's Deputy CEO, Dr Marie Connett, worked extensively with small companies, university offices of technology transfer, attorneys, and multinational corporations to create a platform to share productive and sustainable technology. The parties developed the BiOS Material Transfer Agreement (MTA) and the BiOS license as legal instruments to facilitate these goals.
Biological Open Source
Traditionally, the term 'open source' describes a paradigm for software development associated with a set of collaborative innovation practices, which ensure access to the end product's source materials - typically, source code. The BiOS Initiative has sought to extend this concept to the biological sciences, and agricultural biotechnology in particular. BiOS is founded on the concept of sharing scientific tools and platforms so that innovation can occur at the 'application layer.' Jefferson observes that, 'Freeing up the tools that make new discoveries possible will spur a new wave of innovation that has real value.' He notes further that, 'Open source is an enormously powerful tool for driving efficiency.'
Through BiOS instruments, licensees cannot appropriate the fundamental kernel of a technology and improvements exclusively for themselves. The base technology remains the property of whichever entity developed it, but improvements can be shared with others that
|
https://en.wikipedia.org/wiki/Microstrip
|
Microstrip is a type of electrical transmission line which can be fabricated with any technology where a conductor is separated from a ground plane by a dielectric layer known as "substrate". Microstrip lines are used to convey microwave-frequency signals.
Typical realisation technologies are printed circuit board (PCB), alumina coated with a dielectric layer or sometimes silicon or some other similar technologies. Microwave components such as antennas, couplers, filters, power dividers etc. can be formed from microstrip, with the entire device existing as the pattern of metallization on the substrate. Microstrip is thus much less expensive than traditional waveguide technology, as well as being far lighter and more compact. Microstrip was developed by ITT laboratories as a competitor to stripline (first published by Grieg and Engelmann in the December 1952 IRE proceedings).
The disadvantages of microstrip compared with waveguide are the generally lower power handling capacity, and higher losses. Also, unlike waveguide, microstrip is typically not enclosed, and is therefore susceptible to cross-talk and unintentional radiation.
For lowest cost, microstrip devices may be built on an ordinary FR-4 (standard PCB) substrate. However, it is often found that the dielectric losses in FR-4 are too high at microwave frequencies, and that the dielectric constant is not sufficiently tightly controlled. For these reasons, an alumina substrate is commonly used. From a monolithic-integration perspective, microstrips built with integrated circuit/monolithic microwave integrated circuit technologies might be feasible; however, their performance might be limited by the dielectric layer(s) and conductor thickness available.
Microstrip lines are also used in high-speed digital PCB designs, where signals need to be routed from one part of the assembly to another with minimal distortion, and avoiding high cross-talk and radiation.
Microstrip is one of many forms of planar transmission line,
|
https://en.wikipedia.org/wiki/DYSEAC
|
DYSEAC was the second Standards Electronic Automatic Computer. (See SEAC.)
DYSEAC was a first-generation computer built by the National Bureau of Standards for the U.S. Army Signal Corps. It was housed in a truck, making it one of the first movable computers (perhaps the first). It went into operation in April 1954.
DYSEAC used 900 vacuum tubes and 24,500 crystal diodes. It had a memory of 512 words of 45 bits each (plus 1 parity bit), using mercury delay-line memory. Memory access time was 48–384 microseconds. The addition time was 48 microseconds, and the multiplication/division time was 2112 microseconds. These times exclude memory access, which could add up to approximately 1500 microseconds.
DYSEAC weighed about .
See also
SEAC
List of vacuum-tube computers
References
External links
BRL report on computers, inc. DYSEAC
Astin, A. V. (1955), Computer Development (SEAC and DYSEAC) at the National Bureau of Standards, Washington D.C., National Bureau of Standards Circular 551, Issued January 25, 1955, U.S. Government Printing Office. Fully viewable online. Includes several papers describing the technical details and operation of both DYSEAC and its predecessor SEAC, from which DYSEAC was derived. In particular, see "DYSEAC", by A. L. Leiner, S. N. Alexander, and R. P. Witt, on pp. 39–71, for an overview of DYSEAC and its differences from SEAC.
One-of-a-kind computers
Vacuum tube computers
Portable computers
Serial computers
|
https://en.wikipedia.org/wiki/FR-2
|
FR-2 (Flame Resistant 2) is a NEMA designation for synthetic resin bonded paper, a composite material made of paper impregnated with a plasticized phenol formaldehyde resin, used in the manufacture of printed circuit boards. Its main properties are similar to NEMA grade XXXP (MIL-P-3115) material, and can be substituted for the latter in many applications.
Applications
FR-2 sheet with copper foil lamination on one or both sides is widely used to build low-end consumer electronic equipment. While its electrical and mechanical properties are inferior to those of epoxy-bonded fiberglass, FR-4, it is significantly cheaper. It is not suitable for devices installed in vehicles, as continuous vibration can make cracks propagate, causing hairline fractures in copper circuit traces. Without copper foil lamination, FR-2 is sometimes used for simple structural shapes and electrical insulation.
Properties
Fabrication
FR-2 can be machined by drilling, sawing, milling and hot punching. Cold punching and shearing are not recommended, as they leave a ragged edge and tend to cause cracking. Tools made of high-speed steel can be used, although tungsten carbide tooling is preferred for high volume production.
Adequate ventilation or respiratory protection is mandatory during high-speed machining, as the material gives off toxic vapors.
Trade names and synonyms
Carta
Haefelyt
Lamitex
Paxolin, Paxoline
Pertinax, taken over by Lamitec and Dr. Dietrich Müller GmbH in 2014
Getinax (in the Ex-USSR)
Phenolic paper
Preßzell
Repelit
Synthetic resin bonded paper (SRBP)
Turbonit
Veroboard
Wahnerit
See also
Formica (plastic)
Micarta
References
Further reading
Composite materials
Printed circuit board manufacturing
Synthetic paper
|
https://en.wikipedia.org/wiki/MANIAC%20II
|
The MANIAC II (Mathematical Analyzer Numerical Integrator and Automatic Computer Model II) was a first-generation electronic computer, built in 1957 for use at Los Alamos Scientific Laboratory.
MANIAC II was built by the University of California and the Los Alamos Scientific Laboratory, completed in 1957 as a successor to MANIAC I. It used 2,850 vacuum tubes and 1,040 semiconductor diodes in the arithmetic unit. Overall it used 5,190 vacuum tubes, 3,050 semiconductor diodes, and 1,160 transistors.
It had 4,096 words of memory in Magnetic-core memory (with 2.4 microsecond access time), supplemented by 12,288 words of memory using Williams tubes (with 15 microsecond access time). The word size was 48 bits. Its average multiplication time was 180 microseconds and the average division time was 300 microseconds.
By the time of its decommissioning, the computer was all solid-state, using a combination of
RTL, DTL and TTL. It had an array multiplier, 15 index registers, 16K of 6-microsecond cycle time core memory, and 64K of 2-microsecond cycle time core memory. A NOP instruction took about 2.5 microseconds. A multiplication took 8 microseconds and a division 25 microseconds. It had a paging unit using 1K word pages with an associative 16-deep lookup memory. A 1-megaword CDC drum was hooked up as a paging device. It also had several ADDS Special-Order Direct-View Storage-Tube terminals. These terminals used an extended character set which covered about all the mathematical symbols, and allowed for half-line spacing for math formulas.
For I/O, it had two IBM 360 series nine-track and two seven-track 1/2" tape drives. It had an eight-bit paper-tape reader and punch, and a 500 line-per-minute printer (1500 line-per-minute using the hexadecimal character set). Storage was three IBM 7000 series 1301 disk drives, each having two modules of 21.6 million characters apiece.
One of the data products of MANIAC II was the table of numbers appearing in the book The 3-j and 6-j S
|
https://en.wikipedia.org/wiki/Multi-channel%20memory%20architecture
|
In the fields of digital electronics and computer hardware, multi-channel memory architecture is a technology that increases the data transfer rate between the DRAM memory and the memory controller by adding more channels of communication between them. Theoretically, this multiplies the data rate by exactly the number of channels present. Dual-channel memory employs two channels. The technique goes back as far as the 1960s, having been used in the IBM System/360 Model 91 and in the CDC 6600.
Modern high-end desktop and workstation processors such as the AMD Ryzen Threadripper series and the Intel Core i9 Extreme Edition lineup support quad-channel memory. Server processors from the AMD Epyc series and the Intel Xeon platforms give support to memory bandwidth starting from quad-channel module layout to up to octa-channel layout. In March 2010, AMD released Socket G34 and Magny-Cours Opteron 6100 series processors with support for quad-channel memory. In 2006, Intel released chipsets that support quad-channel memory for its LGA771 platform and later in 2011 for its LGA2011 platform. Microcomputer chipsets with even more channels were designed; for example, the chipset in the AlphaStation 600 (1995) supports eight-channel memory, but the backplane of the machine limited operation to four channels.
Dual-channel architecture
Dual-channel-enabled memory controllers in a PC system architecture use two 64-bit data channels. Dual-channel should not be confused with double data rate (DDR), in which data exchange happens twice per DRAM clock. The two technologies are independent of each other, and many motherboards use both by using DDR memory in a dual-channel configuration.
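To make the scaling concrete, here is a small back-of-the-envelope calculation; the DDR4-3200 transfer rate and the 64-bit channel width are illustrative assumptions, not figures from the article.

# Theoretical peak bandwidth = transfer rate x bytes per transfer x number of channels.
transfers_per_second = 3200e6        # e.g. DDR4-3200: 3200 MT/s
bytes_per_transfer = 64 // 8         # one 64-bit channel moves 8 bytes per transfer
for channels in (1, 2, 4):
    gb_per_s = transfers_per_second * bytes_per_transfer * channels / 1e9
    print(f"{channels} channel(s): {gb_per_s:.1f} GB/s")   # 25.6, 51.2, 102.4 GB/s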
Operation
Dual-channel architecture requires a dual-channel-capable motherboard and two or more DDR memory modules. The memory modules are installed into matching banks, each of which belongs to a different channel. The motherboard's manual will provide an explanation of how to install memory for that partic
|
https://en.wikipedia.org/wiki/Teltron%20tube
|
A teltron tube (named for Teltron Inc., which is now owned by 3B Scientific Ltd.) is a type of cathode ray tube used to demonstrate the properties of electrons. There were several different types made by Teltron, including a diode, a triode, a Maltese Cross tube, a simple deflection tube with a fluorescent screen, and one which could be used to measure the charge-to-mass ratio of an electron. The latter two contained an electron gun with deflecting plates. The beams can be bent by applying voltages to various electrodes in the tube or by holding a magnet close by. The electron beams are visible as fine bluish lines. This is accomplished by filling the tube with low-pressure helium (He) or hydrogen (H2) gas. A few of the electrons in the beam collide with the gas atoms, causing them to fluoresce and emit light.
They are usually used to teach electromagnetic effects because they show how an electron beam is affected by electric fields and by magnetic fields like the Lorentz force.
Motions in fields
Charged particles in a uniform electric field follow a parabolic trajectory, since the electric field term (of the Lorentz force which acts on the particle) is the product of the particle's charge and the magnitude of the electric field, (oriented in the direction of the electric field). In a uniform magnetic field however, charged particles follow a circular trajectory due to the cross product in the magnetic field term of the Lorentz force. (That is, the force from the magnetic field acts on the particle in a direction perpendicular to the particle's direction of motion. See: Lorentz force for more details.)
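In symbols (standard notation, with $U$ denoting the accelerating voltage; these relations are a sketch of the standard analysis rather than something stated above):

$\mathbf F = q\left(\mathbf E + \mathbf v \times \mathbf B\right), \qquad r = \frac{m v}{q B}, \qquad qU = \tfrac{1}{2} m v^{2} \;\Rightarrow\; \frac{q}{m} = \frac{2U}{B^{2} r^{2}} .$

The last relation is what allows a deflection tube of this kind to be used to measure the charge-to-mass ratio of the electron from the measured radius of the circular beam path.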
Apparatus
The 'teltron' apparatus consists of a Teltron-type electron deflection tube, a Teltron stand, and a variable EHT power supply.
Experimental setup
An evacuated glass bulb is filled with a small amount of hydrogen gas (H2), giving the tube a low-pressure hydrogen atmosphere. The pressure is such that the electrons are decelerated by col
|
https://en.wikipedia.org/wiki/Nanodomain
|
A nanodomain is a nanometer-sized cluster of proteins found in a cell membrane. They are associated with the signal which occurs when a single calcium ion channel opens on a cell membrane, allowing an influx of calcium ions (Ca2+) which extend in a plume a few tens of nanometres from the channel pore. In a nanodomain, the coupling distance, that is, the distance between the calcium-binding proteins which sense the calcium and the calcium channel, is very small, which allows rapid signalling. The formation of a nanodomain signal is virtually instantaneous following the opening of the calcium channel, as calcium ions move rapidly into the cell along a steep concentration gradient. The nanodomain signal collapses just as quickly when the calcium channel closes, as the ions rapidly diffuse away from the pore. Formation of a nanodomain signal requires the influx of only approximately 1000 calcium ions.
Coupling distances greater than this, mediated by a larger number of channels, are referred to as microdomains.
Properties
Nanodomain signals are thought to improve the temporal precision of fast exocytosis of vesicles due to two specific properties:
The peak concentration of calcium ions will be reached incredibly quickly (within a microsecond) and maintained as long as the channel is open.
Closure of the channel leads to a rapid collapse of the domain due to lateral diffusion away from the pore (the site of entry). The lateral diffusion of microdomains additionally depends on the action of fast endogenous buffers (which remove the calcium and transport it away from the active zone).
Single channels are able to cause vesicular release; however, the cooperativity of different calcium channels is synapse-specific. While release driven by a single calcium channel minimizes the total calcium ion influx, overlapping domains can provide greater reliability and temporal fidelity.
References
Molecular biology
|
https://en.wikipedia.org/wiki/Entropic%20force
|
In physics, an entropic force acting in a system is an emergent phenomenon resulting from the entire system's statistical tendency to increase its entropy, rather than from a particular underlying force on the atomic scale.
Mathematical formulation
In the canonical ensemble, the entropic force $\mathbf F$ associated to a macrostate partition $\{\mathbf X\}$ is given by
$\mathbf F(\mathbf X_0) = T\, \nabla_{\mathbf X} S(\mathbf X)\big|_{\mathbf X_0},$
where $T$ is the temperature, $S(\mathbf X)$ is the entropy associated to the macrostate $\mathbf X$, and $\mathbf X_0$ is the present macrostate.
Examples
Pressure of an ideal gas
The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box as gas pressure does. This implies that the pressure of an ideal gas has an entropic origin.
What is the origin of such an entropic force? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy.
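As a quick check of this statement, a minimal sketch using only the volume dependence of the ideal-gas entropy, $S(V) = N k_{\mathrm B} \ln V + \text{const}$ (a standard result assumed here, not taken from the text above), gives

$P = T \left(\frac{\partial S}{\partial V}\right)_{U,N} = \frac{N k_{\mathrm B} T}{V},$

which reproduces the ideal-gas pressure purely from the tendency toward maximum entropy.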
Brownian motion
The entropic approach to Brownian movement was initially proposed by R. M. Neumann. Neumann derived the entropic force for a particle undergoing three-dimensional Brownian motion using the Boltzmann equation, denoting this force as a diffusional driving force or radial force. In the paper, three example systems are shown to exhibit such a force:
electrostatic system of molten salt,
surface tension and,
elasticity of rubber.
Polymers
A standard example of an entropic force is the elasticity of a freely jointed polymer molecule. For an ideal chain, maximizing its entropy means reducing the distance between its two free ends. Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. This entropic force is proporti
|
https://en.wikipedia.org/wiki/Myology
|
Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), and smooth and cardiac muscle, which propel, expel or control the flow of fluids and other contained substances.
See also
Myotomy
Oral myology
References
External links
British Myology Society
Physiology
|
https://en.wikipedia.org/wiki/Shell%20theorem
|
In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy.
Isaac Newton proved the shell theorem and stated that:
A spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its center.
If the body is a spherically symmetric shell (i.e., a hollow ball), no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell.
A corollary is that inside a solid sphere of constant density, the gravitational force within the object varies linearly with distance from the center, becoming zero by symmetry at the center of mass. This can be seen as follows: take a point within such a sphere, at a distance $r$ from the center of the sphere. Then you can ignore all of the shells of greater radius, according to the shell theorem (2). But the point can be considered to be external to the remaining sphere of radius $r$, and according to (1) all of the mass of this sphere can be considered to be concentrated at its centre. The remaining mass $m$ is proportional to $r^3$ (because it is based on volume). The gravitational force exerted on a body at radius $r$ will be proportional to $m/r^2$ (the inverse square law), so the overall gravitational effect is proportional to $r^3/r^2 = r$, so it is linear in $r$.
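Written out for a uniform sphere of total mass $M$ and radius $R$ (the symbols $M$ and $R$ are chosen here for illustration), the field inside is

$g(r) = \frac{G\,M(r)}{r^{2}} = \frac{G}{r^{2}}\, M\, \frac{r^{3}}{R^{3}} = \frac{G M}{R^{3}}\, r, \qquad r \le R,$

which is the linear dependence described above.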
These results were important to Newton's analysis of planetary motion; they are not immediately obvious, but they can be proven with calculus. (Gauss's law for gravity offers an alternative way to state the theorem.)
In addition to gravity, the shell theorem can also be used to describe the electric field generated by a static spherically symmetric charge density, or similarly for any other phenomenon that follows an inverse square law. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic forc
|
https://en.wikipedia.org/wiki/Complete%20mixing
|
In evolutionary game theory, complete mixing refers to an assumption about the type of interactions that occur between individual organisms. Interaction between individuals in a population attains complete mixing if and only if the probability that individual x interacts with individual y is equal for all y.
This assumption is implicit in the replicator equation, a system of differential equations that represents one model in evolutionary game theory. This assumption usually does not hold for most organismic populations, since interactions usually occur in some spatial setting where individuals are more likely to interact with those around them. Although the assumption is empirically violated, it represents a certain sort of scientific idealization which may or may not be harmful to the conclusions reached by that model. This question has led researchers to investigate a series of other models where there is not complete mixing (e.g. cellular automata models).
Game theory
Population genetics
|
https://en.wikipedia.org/wiki/Construction%20management
|
Construction management (CM) is the use of project management techniques and software to oversee the planning, design, construction and closeout of a construction project.
About
Construction management aims to control the quality of a project's scope, time, and cost (sometimes referred to as a project management triangle or "triple constraints") to maximize the project owner's satisfaction.
Practitioners of construction management are called construction managers. Professional construction managers may be hired for large-scaled, high budget undertakings (commercial real estate, transportation infrastructure, industrial facilities, and military infrastructure), called capital projects. Construction managers use their knowledge of project delivery methods to deliver the project optimally.
The role of a contractor
Contractors are assigned to a construction project during the design phase or once the design has been completed by a licensed architect or a licensed civil engineer. This is done by going through a bidding process with different contractors. As dictated by the project delivery method, the contractor is selected by using one of three common selection methods: low-bid selection, best-value selection, or qualifications-based selection.
A construction manager is hired to oversee the following deliverables: means and methods, communications with the authority having jurisdiction, time management, document control, cost controls and management, quality controls, decision making, mathematics, shop drawings, record drawings, and human resources.
In the US, the Construction Management Association of America (CMAA) states the most common responsibilities of a Construction Manager fall into the following 7 categories: Project Management Planning, Cost Management, Time Management, Quality Management, Contract Administration, Safety Management, and CM Professional Practice. CM professional practice includes specific activities such as defining the responsibilities and management
|
https://en.wikipedia.org/wiki/Jupiter%20project
|
The Jupiter project was to be a new high-end model of Digital Equipment Corporation (DEC)'s PDP-10 mainframe computers. The project was cancelled in 1983, as the PDP-10 was increasingly eclipsed by the VAX supermini machines (descendants of the PDP-11). DEC recognized that the PDP-10 and VAX product lines were competing with each other and decided to concentrate its software development effort on the more profitable VAX; with no viable new model forthcoming from the Jupiter project, the PDP-10 was finally dropped from DEC's line in 1983.
References
External links
Jupiter development documents at Bitsavers
DEC computers
Information technology projects
|
https://en.wikipedia.org/wiki/Computer-supported%20collaboration
|
Computer-supported collaboration research focuses on technology that affects groups, organizations, communities and societies, e.g., voice mail and text chat. It grew from the study of cooperative work, i.e. of supporting people's work activities and working relationships. As net technology increasingly supported a wide range of recreational and social activities, consumer markets expanded the user base, enabling more and more people to connect online and creating what researchers have called computer-supported cooperative work, which includes "all contexts in which technology is used to mediate human activities such as communication, coordination, cooperation, competition, entertainment, games, art, and music" (from CSCW 2023).
Scope of the field
Focused on output
The subfield computer-mediated communication deals specifically with how humans use "computers" (or digital media) to form, support and maintain relationships with others (social uses), regulate information flow (instructional uses), and make decisions (including major financial and political ones). It does not focus on common work products or other "collaboration" but rather on "meeting" itself, and on trust. By contrast, CSC is focused on the output from, rather than the character or emotional consequences of, meetings or relationships, reflecting the difference between "communication" and "collaboration".
Focused on contracts and rendezvous
Unlike communication research, which focuses on trust, or computer science, which focuses on truth and logic, CSC focuses on cooperation and collaboration and decision making theory, which are more concerned with rendezvous and contract. For instance, auctions and market systems, which rely on bid and ask relationships, are studied as part of CSC but not usually as part of communication.
The term CSC emerged in the 1990s to replace the following terms:
workgroup computing, which emphasizes technology over the work being supported and seems to restrict inquiry to small organi
|
https://en.wikipedia.org/wiki/Issue%20tracking%20system
|
An issue tracking system (also ITS, trouble ticket system, support ticket, request management or incident ticket system) is a computer software package that manages and maintains lists of issues. Issue tracking systems are generally used in collaborative settings, especially in large or distributed collaborations, but can also be employed by individuals as part of a time management or personal productivity regimen. These systems often encompass resource allocation, time accounting, priority management, and oversight workflow in addition to implementing a centralized issue registry.
Background
In the institutional setting, issue tracking systems are commonly used in an organization's customer support call center to create, update, and resolve reported customer issues, or even issues reported by that organization's other employees. A support ticket should include vital information for the account involved and the issue encountered. An issue tracking system often also contains a knowledge base containing information on each customer, resolutions to common problems, and other such data.
An issue tracking system is similar to a "bugtracker", and a software company will often sell both; some bugtrackers are capable of being used as an issue tracking system, and vice versa. Consistent use of an issue or bug tracking system is considered one of the "hallmarks of a good software team".
A ticket element, within an issue tracking system, is a running report on a particular problem, its status, and other relevant data. They are commonly created in a help desk or call center environment and almost always have a unique reference number, also known as a case, issue or call log number which is used to allow the user or help staff to quickly locate, add to or communicate the status of the user's issue or request.
These tickets are so called because of their origin as small cards within a traditional wall-mounted work planning system, used when this kind of support started. Oper
|
https://en.wikipedia.org/wiki/Common%20Locale%20Data%20Repository
|
The Common Locale Data Repository (CLDR) is a project of the Unicode Consortium to provide locale data in XML format for use in computer applications. CLDR contains locale-specific information that an operating system will typically provide to applications.
CLDR is written in the Locale Data Markup Language (LDML).
Details
Among the types of data that CLDR includes are the following:
Translations for language names
Translations for territory and country names
Translations for currency names, including singular/plural modifications
Translations for weekday, month, era, period of day, in full and abbreviated forms
Translations for time zones and example cities (or similar) for time zones
Translations for calendar fields
Patterns for formatting/parsing dates or times of day
Exemplar sets of characters used for writing the language
Patterns for formatting/parsing numbers
Rules for language-adapted collation
Rules for spelling out numbers as words
Rules for formatting numbers in traditional numeral systems (such as Roman and Armenian numerals)
Rules for transliteration between scripts, much of it based on BGN/PCGN romanization
The information is currently used in International Components for Unicode, Apple's macOS, LibreOffice, MediaWiki, and IBM's AIX, among other applications and operating systems.
CLDR overlaps somewhat with ISO/IEC 15897 (POSIX locales). POSIX locale information can be derived from CLDR by using some of CLDR's conversion tools.
CLDR is maintained by a technical committee which includes employees from IBM, Apple, Google, Microsoft, and some government-based organizations. The committee is chaired by John Emmons, of IBM; Mark Davis, of Google, is vice-chair.
The CLDR covers 400+ languages.
References
External links
Common Locale Data Repository, the informational webpage of the CLDR project
Locale Data Markup Language
Unicode
Date and time representation
Internationalization and localization
|
https://en.wikipedia.org/wiki/Variational%20perturbation%20theory
|
In mathematics, variational perturbation theory (VPT) is a mathematical method to convert divergent power series in a small expansion parameter, say $g$, into a convergent series whose exponents involve a critical exponent $\omega$ (the so-called index of "approach to scaling" introduced by Franz Wegner). This is possible with the help of variational parameters, which are determined by optimization order by order in $g$. The partial sums are converted to convergent partial sums by a method developed in 1992.
Most perturbation expansions in quantum mechanics are divergent for any small coupling strength $g$. They can be made convergent by VPT (for details see the first textbook cited below). The convergence is exponentially fast.
After its success in quantum mechanics, VPT has been developed further to become an important mathematical tool in quantum field theory with its anomalous dimensions. Applications focus on the theory of critical phenomena. It has led to the most accurate predictions of critical exponents.
More details can be read here.
References
External links
Kleinert H., Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd edition, World Scientific (Singapore, 2004) (readable online here) (see Chapter 5)
Kleinert H. and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback (readable online here) (see Chapter 19)
Asymptotic analysis
Perturbation theory
|
https://en.wikipedia.org/wiki/Pagophagia
|
Pagophagia (from Greek: pagos, frost/ice, + phagō, to eat) is the compulsive consumption of ice or iced drinks.
It is a form of the disorder known as pica, which in Latin refers to the magpie, a bird said to eat everything indiscriminately. Its medical definition refers to the persistent consumption of nonnutritive substances over a period of at least one month. However, different studies have used alternative definitions, including "daily consumption of 2-11 full glasses of ice (480-2640 g)" or "the purposeful ingestion of at least one ordinary tray of ice daily for a period in excess of two months." Pagophagia has been shown to be associated with iron-deficiency anemia and responsive to iron supplementation,
leading some investigators to postulate that some forms of pica may be the result of nutritional deficiency.
Similarly, folk wisdom also maintained that pica reflected an appetite to compensate for nutritional deficiencies, such as low iron or zinc. In iron deficient pregnant women who experience symptoms of pagophagia, decreased cravings for ice have been observed after iron supplementation. Later research demonstrated that the substances ingested by those who have pica generally do not provide the mineral or nutrient in which people are deficient. In the long run, as people start consuming more nonfoods compulsively, pica can also cause additional nutritional deficiencies.
A hypothesis of the neurological basis of pagophagia was proposed in a 2014 study in which those with iron deficiency anemia were shown to have improved response times while performing on a neuropsychological test when given ice to chew on. As a result, the researchers hypothesized that chewing on ice causes vascular changes that allow for increased perfusion of the brain, as well as activation of the sympathetic nervous system, which also increases blood flow to the brain, allowing for increased processing speed and alertness.
Although some investigators also hypothesize that chewing ice may
|
https://en.wikipedia.org/wiki/Critical%20exponent
|
Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on some of its general features. For instance, for ferromagnetic systems, the critical exponents depend only on:
the dimension of the system
the range of the interaction
the spin dimension
These properties of critical exponents are supported by experimental data. Analytical results can be theoretically achieved in mean field theory in high dimensions or when exact solutions are known such as the two-dimensional Ising model. The theoretical treatment in generic dimensions requires the renormalization group approach or the conformal bootstrap techniques.
Phase transitions and critical exponents appear in many physical systems such as water at the critical point, in magnetic systems, in superconductivity, in percolation and in turbulent fluids.
The critical dimension above which mean field exponents are valid varies with the systems and can even be infinite.
Definition
The control parameter that drives phase transitions is often temperature but can also be other macroscopic variables like pressure or an external magnetic field. For simplicity, the following discussion works in terms of temperature; the translation to another control parameter is straightforward. The temperature at which the transition occurs is called the critical temperature Tc. We want to describe the behavior of a physical quantity f in terms of a power law around the critical temperature; we introduce the reduced temperature
τ := (T − Tc) / Tc,
which is zero at the phase transition, and define the critical exponent k:
k := lim_{τ → 0} log|f(τ)| / log|τ|.
This results in the power law we were looking for:
f(τ) ∝ τ^k, for τ → 0.
It is important to remember that this represents the asymptotic behavior of the function f(τ) as τ → 0.
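As a hedged illustration of how such an exponent can be estimated from data, the following Python sketch fits the slope of log f against log τ for synthetic power-law data; the amplitude and the exponent value 0.5 are invented for the example and are not tied to any particular system.

import numpy as np

# Synthetic data f(tau) = A * tau**k with illustrative values A = 2.0, k = 0.5.
tau = np.logspace(-4, -1, 50)
f = 2.0 * tau ** 0.5

# The critical exponent is the slope of the log-log plot as tau -> 0.
k_est = np.polyfit(np.log(tau), np.log(f), 1)[0]
print(round(k_est, 3))   # ~0.5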
More generally one might expect
The most important critical exponents
Let us assume that the system has two different phases c
|
https://en.wikipedia.org/wiki/Minimum%20information%20about%20a%20microarray%20experiment
|
Minimum information about a microarray experiment (MIAME) is a standard created by the FGED Society for reporting microarray experiments.
MIAME is intended to specify all the information necessary to interpret the results of the experiment unambiguously and to potentially reproduce the experiment. While the standard defines the content required for compliant reports, it does not specify the format in which this data should be presented. MIAME describes the minimum information required to ensure that microarray data can be easily interpreted and that results derived from its analysis can be independently verified. There are a number of file formats used to represent this data, as well as both public and subscription-based repositories for such experiments. Additionally, software exists to aid the preparation of MIAME-compliant reports.
MIAME revolves around six key components: raw data, normalized data, sample annotations, experimental design, array annotations, and data protocols.
References
Biochemistry detection methods
Genetics techniques
Microarrays
|
https://en.wikipedia.org/wiki/Yamaha%20V9958
|
The Yamaha V9958 is a Video Display Processor used in the MSX2+ and MSX turbo R series of home computers, as the successor to the Yamaha V9938 used in the MSX2. The main new features are three graphical YJK modes with up to 19268 colors and horizontal scrolling registers. The V9958 was not as widely adopted as the V9938.
Specifications
Video RAM: 128 KB + 64 KB of expanded VRAM
Text modes: 80 x 24 and 32 x 24
Resolution: 512 x 212 (4 or 16 colors out of 512) and 256 x 212 (16, 256, 12499 or 19268 colors)
Sprites: 32, 16 colors, max 8 per horizontal line
Hardware acceleration for copy, line, fill, etc.
Interlacing to double vertical resolution
Horizontal and vertical scroll registers
Feature changes from the V9938
The following features were added to or removed from the Yamaha V9938 specifications:
Added horizontal scrolling registers
Added YJK graphics modes (similar to YUV):
G7 + YJK + YAE: 256 x 212, 12499 colors + 16 color palette
G7 + YJK: 256 x 212, 19268 colors
Added the ability to execute hardware accelerated commands in non-bitmap screen modes
Removed lightpen and mouse functions
Removed composite video output function
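As a rough illustration of how the YJK modes listed above reach their large colour counts, the following Python sketch applies the YJK-to-RGB conversion commonly quoted for the V9958 (R = Y + J, G = Y + K, B = (5Y − 2J − 4K)/4, clamped to 5-bit components); the exact rounding behaviour of the real chip is treated here as an assumption.

def yjk_to_rgb(y, j, k):
    # y: 5-bit luminance per pixel; j, k: signed 6-bit chroma values shared by 4 pixels.
    clamp = lambda v: max(0, min(31, v))
    r = clamp(y + j)
    g = clamp(y + k)
    b = clamp((5 * y - 2 * j - 4 * k) // 4)
    return r, g, b

print(yjk_to_rgb(20, -3, 5))   # one sample conversion: (17, 25, 21)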
MSX-specific terminology
On MSX, the screen modes are often referred to by their assigned number in MSX BASIC. This mapping is as follows:
References
Graphics chips
MSX hardware
|
https://en.wikipedia.org/wiki/Yamaha%20V9938
|
The Yamaha V9938 is a video display processor (VDP) used on the MSX2 home computer, as well as on the Geneve 9640 enhanced TI-99/4A clone and the Tatung Einstein 256. It was also used in a few MSX1 computers, in a configuration with 16kB VRAM.
The Yamaha V9938, also known as MSX-Video or VDP (Video Display Processor), is the successor of the Texas Instruments TMS9918 used in the MSX1 and other systems. The V9938 was in turn succeeded by the Yamaha V9958.
Specifications
Video RAM: 16–192 KB
Text modes: 80 × 24, 40 × 24 and 32 × 24
Resolution: 512 × 212 (16 colors from 512), 256 × 212 (16 colors from 512) and 256 × 212 (256 colors)
Sprites: 32, 16 colors, max 8 per horizontal line
Hardware acceleration for copy, line, fill and logical operations available
Interlacing to double vertical resolution
Vertical scroll register
Detailed specifications
Video RAM: 4 possible configurations
16 KB (modes G4 up to G7 will not be available)
64 KB (modes G6 and G7 will not be available)
128 KB: most common configuration
192 KB, where 64 KB is extended-VRAM (only available as back-buffer for G4 and G5 modes)
Clock: 21 MHz
Video output frequency: 15 kHz
Sprites: 32, 16 colors (1 per line. 3, 7 or 15 colors/line by using the CC attribute), max 8 per horizontal line
Hardware acceleration, with copy, line, fill etc. With or without logical operations.
Vertical scroll register
Capable of superimposition and digitization
Support for connecting a lightpen and a mouse
Resolution:
Horizontal: 256 or 512
Vertical: 192p, 212p, 384i or 424i
Color modes:
Paletted RGB: 16 colors out of 512
Fixed RGB: 256 colors
Screen modes
Text modes:
T1: 40 × 24 with 2 colors (out of 512)
T2: 80 × 24 with 4 colors (out of 512)
All text modes can have 26.5 rows as well.
Pattern modes
G1: 256 × 192 with 16 paletted colors and 1 table of 8×8 patterns
G2: 256 × 192 with 16 paletted colors and 3 tables of 8×8 patterns
G3: 256 × 192 with 16 paletted colors and 3 tables of 8×8 patt
|
https://en.wikipedia.org/wiki/Hardware%20acceleration
|
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function logic implemented on field-programmable gate arrays (FPGAs), and fixed-function logic implemented on application-specific integrated circuits (ASICs).
Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as ne
|
https://en.wikipedia.org/wiki/Schanuel%27s%20conjecture
|
In mathematics, specifically transcendental number theory, Schanuel's conjecture is a conjecture made by Stephen Schanuel in the 1960s concerning the transcendence degree of certain field extensions of the rational numbers.
Statement
The conjecture is as follows:
Given any n complex numbers z1, ..., zn that are linearly independent over the rational numbers Q, the field extension Q(z1, ..., zn, e^z1, ..., e^zn) has transcendence degree at least n over Q.
The conjecture can be found in Lang (1966).
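For reference, the statement can be written compactly in LaTeX as follows (a direct transcription of the prose above, not an additional claim):

z_1,\dots,z_n \in \mathbb{C} \ \text{linearly independent over}\ \mathbb{Q}
\;\Longrightarrow\;
\operatorname{trdeg}_{\mathbb{Q}}\, \mathbb{Q}\bigl(z_1,\dots,z_n,\, e^{z_1},\dots,e^{z_n}\bigr) \ge n .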
Consequences
The conjecture, if proven, would generalize most known results in transcendental number theory. The special case where the numbers z1,...,zn are all algebraic is the Lindemann–Weierstrass theorem. If, on the other hand, the numbers are chosen so as to make exp(z1),...,exp(zn) all algebraic then one would prove that linearly independent logarithms of algebraic numbers are algebraically independent, a strengthening of Baker's theorem.
The Gelfond–Schneider theorem follows from this strengthened version of Baker's theorem, as does the currently unproven four exponentials conjecture.
Schanuel's conjecture, if proved, would also settle whether numbers such as e + π and e^e are algebraic or transcendental, and prove that e and π are algebraically independent simply by setting z1 = 1 and z2 = πi, and using Euler's identity.
Euler's identity states that e^(iπ) + 1 = 0. If Schanuel's conjecture is true then this is, in some precise sense involving exponential rings, the only relation between e, π, and i over the complex numbers.
Although ostensibly a problem in number theory, the conjecture has implications in model theory as well. Angus Macintyre and Alex Wilkie, for example, proved that the theory of the real field with exponentiation, exp, is decidable provided Schanuel's conjecture is true. In fact they only needed the real version of the conjecture, defined below, to prove this result, which would be a positive solution to Tarski's exponential function problem.
Related conje
|
https://en.wikipedia.org/wiki/Footprinting
|
Footprinting (also known as reconnaissance) is the technique used for gathering information about computer systems and the entities they belong to. To get this information, a hacker might use various tools and technologies. This information is very useful to a hacker who is trying to crack a whole system.
When used in the computer security lexicon, "Footprinting" generally refers to one of the pre-attack phases; tasks performed before doing the actual attack. Some of the tools used for Footprinting are Sam Spade, nslookup, traceroute, Nmap and neotrace.
Techniques used
DNS queries
Network enumeration
Network queries
Operating system identification
Software used
Wireshark
Uses
It allows a hacker to gain information about the target system or network. This information can be used to carry out attacks on the system, which is why footprinting may be referred to as a pre-attack phase: the gathered information is reviewed in order to plan a complete and successful attack. Footprinting is also used by ethical hackers and penetration testers to find security flaws and vulnerabilities within their own company's network before a malicious hacker does.
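As a minimal illustration of the kind of query involved, the following Python sketch performs a forward and reverse DNS lookup using only the standard library; the hostname is a placeholder rather than a real target, and such lookups should only be run against systems one is authorized to test.

import socket

target = "example.com"    # placeholder host, not taken from the article

address = socket.gethostbyname(target)          # forward DNS lookup
print(target, "->", address)

try:
    print(socket.gethostbyaddr(address))        # reverse lookup (may have no PTR record)
except socket.herror:
    print("no reverse DNS entry for", address)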
Types
There are two types of Footprinting that can be used: active Footprinting and passive Footprinting. Active Footprinting is the process of using tools and techniques, such as performing a ping sweep or using the traceroute command, to gather information on a target. Active Footprinting can trigger a target's Intrusion Detection System (IDS) and may be logged, and thus requires a degree of stealth to perform successfully. Passive Footprinting is the process of gathering information on a target by innocuous, or passive, means. Browsing the target's website, visiting social media profiles of employees, searching WHOIS records for the website, and performing a Google search of the target are all ways of passive Footprinting. Passive Footprinting is the stealthier method since it will not trigger a target's IDS or otherwise
|
https://en.wikipedia.org/wiki/Affinity%20maturation
|
In immunology, affinity maturation is the process by which TFH cell-activated B cells produce antibodies with increased affinity for antigen during the course of an immune response. With repeated exposures to the same antigen, a host will produce antibodies of successively greater affinities. A secondary response can elicit antibodies with several fold greater affinity than in a primary response. Affinity maturation primarily occurs on membrane immunoglobulin of germinal center B cells and as a direct result of somatic hypermutation (SHM) and selection by TFH cells.
In vivo
The process is thought to involve two interrelated processes, occurring in the germinal centers of the secondary lymphoid organs:
Somatic hypermutation: Mutations in the variable, antigen-binding coding sequences (known as complementarity-determining regions (CDR)) of the immunoglobulin genes. The mutation rate is up to 1,000,000 times higher than in cell lines outside the lymphoid system. Although the exact mechanism of the SHM is still not known, a major role for the activation-induced (cytidine) deaminase has been discussed. The increased mutation rate results in 1-2 mutations per CDR and, hence, per cell generation. The mutations alter the binding specificity and binding affinities of the resultant antibodies.
Clonal selection: B cells that have undergone SHM must compete for limiting growth resources, including the availability of antigen and paracrine signals from TFH cells. The follicular dendritic cells (FDCs) of the germinal centers present antigen to the B cells, and the B cell progeny with the highest affinities for antigen, having gained a competitive advantage, are favored for positive selection leading to their survival. Positive selection is based on steady cross-talk between TFH cells and their cognate antigen presenting GC B cell. Because a limited number of TFH cells reside in the germinal center, only highly competitive B cells stably conjugate with TFH cells and thus r
|
https://en.wikipedia.org/wiki/Multibus
|
Multibus is a computer bus standard used in industrial systems. It was developed by Intel Corporation and was adopted as the IEEE 796 bus.
The Multibus specification was important because it was a robust industry standard with a relatively large form factor, allowing complex devices to be designed on it. Because it was well-defined and well-documented, it allowed a Multibus-compatible industry to grow around it, with many companies making card cages and enclosures for it. Many others made CPU, memory, and other peripheral boards. In 1982 there were over 100 Multibus board and systems manufacturers. This allowed complex systems to be built from commercial off-the-shelf hardware, and also allowed companies to innovate by designing a proprietary Multibus board, then integrate it with another vendor's hardware to create a complete system. A good example of this was Sun Microsystems with their Sun-1 and Sun-2 workstations. Sun built custom-designed CPU, memory, SCSI, and video display boards, and then added 3Com Ethernet networking boards, Xylogics SMD disk controllers, Ciprico Tapemaster 1/2 inch tape controllers, Sky Floating Point Processor, and Systech 16-port Terminal Interfaces in order to configure the system as a workstation or a file server. Other workstation vendors who used Multibus-based designs included HP/Apollo and Silicon Graphics.
The Intel Multibus I & II product line was purchased from Intel by RadiSys Corporation, which in 2002 was then purchased by U.S. Technologies, Inc.
Multibus architecture
Multibus was an asynchronous bus that accommodated devices with various transfer rates while maintaining a maximum throughput. It had 20 address lines, so it could address up to 1 MB of Multibus memory and 1 MB of I/O locations. Most Multibus I/O devices only decoded the first 64 KB of address space.
Multibus supported multi-master functionality that allowed it to share the Multibus with multiple processors and other DMA devices.
The standard Multibus for
|
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20code
|
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science.
Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs.
Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m, the Reed–Muller code with parameters r and m is denoted as RM(r, m). When asked to encode a message consisting of k bits, where k = C(m, 0) + C(m, 1) + ⋯ + C(m, r) (a sum of binomial coefficients) holds, the RM(r, m) code produces a codeword consisting of 2^m bits.
Reed–Muller codes are named after David E. Muller, who discovered the codes in 1954, and Irving S. Reed, who proposed the first efficient decoding algorithm.
Description using low-degree polynomials
Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes.
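Before turning to the encoder below, here is a hedged Python sketch of that polynomial-evaluation view: a generator matrix for RM(r, m) whose rows are the evaluations of all monomials of degree at most r at the 2^m points of F_2^m. The row ordering is arbitrary and chosen only for illustration.

from itertools import combinations, product

def rm_generator_matrix(r, m):
    # Rows: evaluations of all monomials of degree <= r at every point of F_2^m.
    points = list(product([0, 1], repeat=m))              # block length n = 2^m
    rows = []
    for degree in range(r + 1):
        for subset in combinations(range(m), degree):     # one monomial per variable subset
            row = []
            for p in points:
                value = 1
                for i in subset:
                    value &= p[i]                          # evaluate the monomial at p
                row.append(value)
            rows.append(row)
    return rows                                            # k = sum_{i <= r} C(m, i) rows

G = rm_generator_matrix(1, 3)    # RM(1, 3): k = 4, n = 8
print(len(G), len(G[0]))         # 4 8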
Encoder
A block code can have one or more encoding functions that map messages to codewords. The Reed–Muller code RM(r, m) has message length k = C(m, 0) + C(m, 1) + ⋯ + C(m, r) and block length n = 2^m. One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree at most r. Every such multilinear polynomial over the finite field with two elements can be written as follows:
p(x_1, …, x_m) = Σ_{S ⊆ {1, …, m}, |S| ≤ r} c_S · Π_{i ∈ S} x_i.
The x_i are the variables of the polynomial, and the values c_S are the coefficients of the poly
|
https://en.wikipedia.org/wiki/KASW
|
KASW (channel 61) is a television station in Phoenix, Arizona, United States, affiliated with The CW. It is owned by the E. W. Scripps Company alongside ABC affiliate KNXV-TV (channel 15). Both stations share studios on North 44th Street on the city's east side, while KASW's primary transmitter is located on South Mountain.
KASW went on the air in 1995 as the Phoenix affiliate of The WB. Its first owner contracted with KTVK (channel 3) for programming and support services, and KTVK bought the station in 1999. In addition to being an affiliate of The WB and later The CW, the station also broadcast several secondary local sports teams at various times. KASW was split from KTVK in 2014 as the result of KTVK's sale. Scripps acquired it in 2019 and has added local newscasts from KNXV. KASW is the high-power ATSC 3.0 (NextGen TV) station for the Phoenix area and provides the ATSC 3.0 broadcasts of six major Phoenix commercial stations.
History
Prior history of UHF channel 61 in Phoenix
Prior to KASW's sign-on, the UHF channel 61 frequency in the Phoenix market was originally occupied by low-power station K61CA; that station carried a locally programmed music video format known as "Music Channel" and operated from March 15, 1983, until November 12, 1984, closing due to mounting debts and lack of cash to continue operating.
The construction permit for K61CA remained active for several more years; by 1988, it was owned by Channel 61 Development Corporation and was planned as a satellite-fed relay of KSTS, a Telemundo affiliate in San Jose, California.
In November 1987, the FCC allocated channel 61 for full-power use in Phoenix. KUSK-TV applied alongside four other groups; the field was narrowed to three, and Brooks Broadcasting, owned by Chandler farmer Gregory R. Brooks, was granted the permit in February 1991 by the FCC review board.
WB affiliation
Little activity occurred on the permit, with the call sign KAIK; Brooks considered running home shopping on the station
|
https://en.wikipedia.org/wiki/Painlev%C3%A9%20transcendents
|
In mathematics, Painlevé transcendents are solutions to certain nonlinear second-order ordinary differential equations in the complex plane with the Painlevé property (the only movable singularities are poles), but which are not generally solvable in terms of elementary functions. They were discovered by Émile Picard, Paul Painlevé, Richard Fuchs, and Bertrand Gambier.
History
Painlevé transcendents have their origin in the study of special functions, which often arise as solutions of differential equations, as well as in the study of isomonodromic deformations of linear differential equations. One of the most useful classes of special functions are the elliptic functions. They are defined by second order ordinary differential equations whose singularities have the Painlevé property: the only movable singularities are poles. This property is rare in nonlinear equations. Poincaré and L. Fuchs showed that any first order equation with the Painlevé property can be transformed into the Weierstrass elliptic equation or the Riccati equation, which can all be solved explicitly in terms of integration and previously known special functions. Émile Picard pointed out that for orders greater than 1, movable essential singularities can occur, and found a special case of what was later called the Painlevé VI equation (see below).
(For orders greater than 2 the solutions can have moving natural boundaries.) Around 1900, Paul Painlevé studied second order differential equations with no movable singularities. He found that up to certain transformations, every such equation
of the form
y″ = R(y′, y, t)
(with R a rational function) can be put into one of fifty canonical forms (listed in ).
Painlevé found that forty-four of the fifty equations are reducible in the sense that they can be solved in terms of previously known functions, leaving just six equations requiring the introduction of new special functions to solve them. There were some computational errors,
and as a result he missed three of the equations, including the general form of Painlevé VI.
|
https://en.wikipedia.org/wiki/Quantitative%20feedback%20theory
|
In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz (Horowitz, 1963; Horowitz and Sidi, 1972), is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds (or constraints) on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.
Plant templates
Once a model of a system has been obtained, the system can usually be represented by its transfer function (in the Laplace domain for continuous time).
As a result of experimental measurement, values of coefficients in the Transfer Function have a range of uncertainty. Therefore, in QFT every parameter of this function is included into an interval of possible values, and the system may be represented by a family of plants rather than by a standalone expression.
A frequency analysis is performed for a finite number of representative frequencies and a set of templates are obtained in the NC diagram which encloses the behaviour of the open loop system at each frequency.
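As an illustration of how a plant template is assembled, the following Python sketch evaluates a hypothetical uncertain plant P(s) = k / (s (s + a)) over a grid of parameter values at a single frequency and returns the (phase, magnitude) points that would be plotted on the Nichols chart; the plant, parameter ranges and frequency are invented for illustration.

import numpy as np

def nichols_template(w, k_values, a_values):
    # Evaluate the uncertain plant at s = j*w for every parameter combination.
    s = 1j * w
    points = []
    for k in k_values:
        for a in a_values:
            p = k / (s * (s + a))
            points.append((np.degrees(np.angle(p)), 20 * np.log10(np.abs(p))))
    return points                     # (phase in degrees, magnitude in dB) pairs

template = nichols_template(2.0, np.linspace(1, 10, 5), np.linspace(1, 5, 5))
print(len(template))                  # 25 template points at this frequency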
Frequency bounds
Usually system performance is described in terms of robustness to instability (phase and gain margins), rejection of input and output noise disturbances, and reference tracking. In the QFT design methodology these requirements on the system are represented as frequency constraints: conditions that the compensated system loop (controller and plant) must not violate.
With these considerations and the selection of the same set of frequencies used for the templates, the frequency constraints for the behaviour of the system loop are computed and represented on the Nichols Chart (NC) as curves.
To achieve the problem requirements, a set of rules on the Open Loop Transfer Function, for the nominal plant may be found. That means the n
|
https://en.wikipedia.org/wiki/Back-and-forth%20method
|
In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that
any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers.
any two countably infinite atomless Boolean algebras are isomorphic to each other.
any two equivalent countable atomic models of a theory are isomorphic.
the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, almost surely produces a unique graph, the Rado graph.
any two many-one complete recursively enumerable sets are recursively isomorphic.
Application to densely ordered sets
As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic.
Suppose that
(A, ≤A) and (B, ≤B) are linearly ordered sets;
They are both unbounded, in other words neither A nor B has either a maximum or a minimum;
They are densely ordered, i.e. between any two members there is another;
They are countably infinite.
Fix enumerations (without repetition) of the underlying sets:
A = { a1, a2, a3, ... },
B = { b1, b2, b3, ... }.
Now we construct a one-to-one correspondence between A and B that is strictly increasing. Initially no member of A is paired with any member of B.
(1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired wi
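A minimal Python sketch of the construction, under the set-up above: the enumerations here are finite prefixes of two dense orders (chosen only for illustration), and each step pairs the next unmatched element with the first candidate on the other side that preserves the ordering of the pairs built so far.

from fractions import Fraction

def extend(pairs, x, targets):
    # Return the first unused element of `targets` lying in the same position,
    # relative to the elements already paired, as x does on its own side.
    used = {b for _, b in pairs}
    lower = max((b for a, b in pairs if a < x), default=None)
    upper = min((b for a, b in pairs if a > x), default=None)
    for b in targets:
        if b in used:
            continue
        if (lower is None or b > lower) and (upper is None or b < upper):
            return b
    raise ValueError("enumeration prefix too short")

# Finite prefixes of two enumerations of dense orders (illustrative data only).
A = [Fraction(n, 7) for n in range(-20, 21)]
B = [Fraction(n, 5) for n in range(-30, 31)]

pairs = []
for step in range(10):
    if step % 2 == 0:                                    # "forth": next unmatched element of A
        a = next(x for x in A if x not in {p[0] for p in pairs})
        pairs.append((a, extend(pairs, a, B)))
    else:                                                # "back": next unmatched element of B
        b = next(y for y in B if y not in {p[1] for p in pairs})
        pairs.append((extend([(q, p) for p, q in pairs], b, A), b))
print(sorted(pairs))                                     # an order-preserving partial matching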
|
https://en.wikipedia.org/wiki/Intel740
|
The Intel740, or i740 (codenamed Auburn), is a 350 nm graphics processing unit using an AGP interface released by Intel on February 12, 1998. Intel was hoping to use the i740 to popularize the Accelerated Graphics Port, while most graphics vendors were still using PCI. Released to enormous fanfare, the i740 proved to have disappointing real-world performance, and sank from view after only a few months on the market. Some of its technology lived on in the form of Intel Extreme Graphics, and the concept of an Intel produced graphics processor lives on in the form of Intel HD Graphics and Intel Iris Pro.
History
The i740 has a long and storied history that starts at GE Aerospace as part of their flight simulation systems, notable for their construction of the Project Apollo "Visual Docking Simulator" that was used to train Apollo Astronauts to dock the Command Module and Lunar Module. GE sold their aerospace interests to Martin Marietta in 1992, as a part of Jack Welch's aggressive downsizing of GE. In 1995, Martin Marietta merged with Lockheed to form Lockheed Martin.
In January 1995, Lockheed Martin re-organized their divisions and formed Real3D in order to bring their 3D experience to the civilian market. Real3D had an early brush with success, providing chipsets and overall design to Sega, who used it in a number of arcade game boards, the Model 2 and Model 3. They also formed a joint project with Intel and Chips and Technologies (later purchased by Intel) to produce 3D accelerators for the PC market, under the code name "Auburn".
Auburn was designed specifically to take advantage of (and promote) the use of AGP interface, during the time when many competing 3D accelerators (notably, 3dfx Voodoo Graphics) still used the PCI connection. A unique characteristic, which set the AGP version of the card apart from other similar devices on the market, was the use of on-board memory exclusively for the display frame buffer, with all textures being kept in the computer s
|
https://en.wikipedia.org/wiki/Covering%20set
|
In mathematics, a covering set for a sequence of integers refers to a set of prime numbers such that every term in the sequence is divisible by at least one member of the set. The term "covering set" is used only in conjunction with sequences possessing exponential growth.
Sierpinski and Riesel numbers
The use of the term "covering set" is related to Sierpinski and Riesel numbers. These are odd natural numbers k for which the formula k·2^n + 1 (Sierpinski number) or k·2^n − 1 (Riesel number) produces no prime numbers. Since 1960 it has been known that there exists an infinite number of both Sierpinski and Riesel numbers (as solutions to families of congruences based upon the set {3, 5, 17, 257, 641, 65537, 6700417}), but, because there are an infinitude of numbers of the form k·2^n + 1 or k·2^n − 1 for any k, one can only prove k to be a Sierpinski or Riesel number by showing that every term in the sequence k·2^n + 1 or k·2^n − 1 is divisible by one of the prime numbers of a covering set.
These covering sets are formed from prime numbers that in base 2 have short periods. To achieve a complete covering set, Wacław Sierpiński showed that a sequence can repeat no more frequently than every 24 numbers. A repeat every 24 numbers gives the covering set {3, 5, 7, 13, 17, 241}, while a repeat every 36 terms can give several covering sets: {3, 5, 7, 13, 19, 37, 73}; {3, 5, 7, 13, 19, 37, 109}; {3, 5, 7, 13, 19, 73, 109} and {3, 5, 7, 13, 37, 73, 109}.
Riesel numbers have the same covering sets as Sierpinski numbers.
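For a finite spot-check of how a covering set certifies a Sierpinski number, the following Python sketch verifies that 78557·2^n + 1 is divisible by a member of its covering set {3, 5, 7, 13, 19, 37, 73} for every n up to a bound (78557 is the smallest known Sierpinski number); a full proof would instead argue by the residue of n modulo the covering period rather than by enumeration.

def covered(k, cover, sign=1, n_max=1000):
    # Finite spot-check: is k*2^n + sign divisible by some prime in `cover` for n = 1..n_max?
    for n in range(1, n_max + 1):
        term = k * 2 ** n + sign
        if not any(term % p == 0 for p in cover):
            return False, n
    return True, None

print(covered(78557, [3, 5, 7, 13, 19, 37, 73]))   # (True, None)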
Other covering sets
Covering sets (and thus Sierpinski numbers and Riesel numbers) also exist for bases other than 2.
Covering sets are also used to prove the existence of composite generalized Fibonacci sequences with first two terms coprime (primefree sequence), such as the sequence starting with 20615674205555510 and 3794765361567513.
The concept of a covering set can easily be generalised to other sequences which turn out to be much simpler.
In the following examples + is used as it is in regular expressions to mean 1 or more. For example, 91+3 means the set {913, 9113, 91113, 911113, …}.
An example are the following eight sequences:
(29·10^n − 191) / 9 or 32+01
(37·10^n + 359) / 9 or 41+51
|
https://en.wikipedia.org/wiki/154%20%28number%29
|
154 (one hundred [and] fifty-four) is the natural number following 153 and preceding 155.
In mathematics
154 is a nonagonal number. Its factorization makes 154 a sphenic number
There is no integer with exactly 154 coprimes below it, making 154 a nontotient; nor is there any integer that, when added to the sum of its own base-10 digits, yields 154, making 154 a self number
154 is the sum of the first six factorials, if one starts with 0! and assumes that 0! = 1.
With just 17 cuts, a pancake can be cut up into 154 pieces (Lazy caterer's sequence).
The distinct prime factors of 154 add up to 20, and so do the ones of 153, hence the two form a Ruth-Aaron pair. 154! + 1 is a factorial prime.
In music
154 is an album by Wire, named for the number of live gigs Wire had performed at that time
In the military
was a United States Navy Trefoil-class concrete barge during World War II
was a United States Navy Admirable-class minesweeper during World War II
was a United States Navy Wickes-class destroyer during World War II
was a United States Navy General G. O. Squier-class transport during World War II
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy Buckley-class destroyer escort ship during World War II
Strike Fighter Squadron 154 (VFA-154) is a United States Navy strike fighter squadron stationed at Naval Air Station Lemoore
Convoy ON-154 was a convoy of ships in December 1942 during World War II
In sports
Major League Baseball teams played 154 games a season prior to expansion in 1961
Golfer Jack Nicklaus played in a record 154 consecutive major championships from the 1957 U.S. Open to the 1998 U.S. Open
In transportation
Seattle Bus Route 154
The Maserati Tipo 154 racecar, also known as 151/4, was produced in 1965
In other fields
154 is also:
The year AD 154 or 154 BC
154 AH is a year in the Islamic calendar that corresponds to 770 – 771 AD
154 Bertha is a dark outer Main belt asteroid
|
https://en.wikipedia.org/wiki/Clark%E2%80%93Wilson%20model
|
The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system.
The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model uses security labels to grant access to objects via transformation procedures and a restricted interface model.
Origin
The model was described in a 1987 paper (A Comparison of Commercial and Military Computer Security Policies) by David D. Clark and David R. Wilson. The paper develops the model as a way to formalize the notion of information integrity, especially as compared to the requirements for multilevel security (MLS) systems described in the Orange Book. Clark and Wilson argue that the existing integrity models such as Biba (read-up/write-down) were better suited to enforcing data integrity rather than information confidentiality. The Biba models are more clearly useful in, for example, banking classification systems to prevent the untrusted modification of information and the tainting of information at higher classification levels. In contrast, Clark–Wilson is more clearly applicable to business and industry processes in which the integrity of the information content is paramount at any level of classification (although the authors stress that all three models are obviously of use to both government and industry organizations).
Basic principles
According to Stewart and Chapple's CISSP Study Guide Sixth Edition, the Clark–Wilson model uses a multi-faceted approach in order to enforce data integrity. Instead of defining a formal state machine, the model defines each data item and allows modifications through only a sma
|
https://en.wikipedia.org/wiki/Battery%20pack
|
A battery pack is a set of any number of (preferably) identical batteries or individual battery cells. They may be configured in series, in parallel or in a mixture of both to deliver the desired voltage, capacity, or power density. The term battery pack is often used in reference to cordless tools, radio-controlled hobby toys, and battery electric vehicles.
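As a simple illustration of the series/parallel arithmetic, the following Python sketch computes nominal figures for a hypothetical pack of identical cells; it idealizes away interconnect losses, imbalance and derating.

def pack_specs(cells_in_series, cells_in_parallel, cell_voltage_v, cell_capacity_ah):
    # Nominal figures for an S x P pack of identical cells (ignores losses and imbalance).
    voltage = cells_in_series * cell_voltage_v          # series strings add voltage
    capacity = cells_in_parallel * cell_capacity_ah     # parallel groups add capacity
    return voltage, capacity, voltage * capacity        # volts, amp-hours, watt-hours

print(pack_specs(10, 4, 3.6, 2.5))   # hypothetical 10s4p Li-ion pack: (36.0, 10.0, 360.0)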
Components of battery packs include the individual batteries or cells, and the interconnects which provide electrical conductivity between them. Rechargeable battery packs often contain a temperature sensor, which the battery charger uses to detect the end of charging. Interconnects are also found in batteries as they are the part which connects each cell, though batteries are most often only arranged in series strings.
When a pack contains groups of cells in parallel there are differing wiring configurations which take into consideration the electrical balance of the circuit. Battery regulators are sometimes used to keep the voltage of each individual cell below its maximum value during charging so as to allow the weaker batteries to become fully charged, bringing the whole pack back into balance. Active balancing can also be performed by battery balancer devices which can shuttle energy from strong cells to weaker ones in real time for better balance. A well-balanced pack lasts longer and delivers better performance.
For an inline package, cells are selected and stacked with solder in between them. The cells are pressed together and a current pulse generates heat to solder them together and to weld all connections internal to the cell.
Calculating state of charge
SOC, or state of charge, is the equivalent of a fuel gauge for a battery. SOC cannot be determined by a simple voltage measurement, because the terminal voltage of a battery may stay substantially constant until it is completely discharged. In some types of battery, electrolyte specific gravity may be related to state of charge but this is not m
|
https://en.wikipedia.org/wiki/Animal%20science
|
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
|
https://en.wikipedia.org/wiki/Neurochip
|
A neurochip is an integrated circuit chip (such as a microprocessor) that is designed for interaction with neuronal cells.
Formation
It is made of silicon that is doped in such a way that it contains EOSFETs (electrolyte-oxide-semiconductor field-effect transistors) that can sense the electrical activity of the neurons (action potentials) in the above-standing physiological electrolyte solution. It also contains capacitors for the electrical stimulation of the neurons. Scientists at the University of Calgary's Faculty of Medicine, led by Pakistani-born Canadian scientist Naweed Syed, who proved it is possible to cultivate a network of brain cells that reconnect on a silicon chip (the "brain on a microchip"), have developed new technology that monitors brain cell activity at a resolution never achieved before.
Developed with the National Research Council Canada (NRC), the new silicon chips are also simpler to use, which will help future understanding of how brain cells work under normal conditions and permit drug discoveries for a variety of neurodegenerative diseases, such as Alzheimer's and Parkinson's.
Naweed Syed's lab cultivated brain cells on a microchip.
The new technology from the lab of Naweed Syed, in collaboration with the NRC, was published online in August 2010, in the journal, Biomedical Devices. It is the world's first neurochip. It is based on Syed's earlier experiments on neurochip technology dating back to 2003.
"This technical breakthrough means we can track subtle changes in brain activity at the level of ion channels and synaptic potentials, which are also the most suitable target sites for drug development in neurodegenerative diseases and neuropsychological disorders," says Syed, professor and head of the Department of Cell Biology and Anatomy, member of the Hotchkiss Brain Institute and advisor to the Vice President Research on Biomedical Engineering Initiative of the University of Chicago.
The new neurochips are also automated, meaning that an
|
https://en.wikipedia.org/wiki/EOSFET
|
An EOSFET or electrolyte–oxide–semiconductor field-effect transistor is a FET, like a MOSFET, but with an electrolyte solution replacing the metal for the detection of neuronal activity. Many EOSFETs are integrated in a neurochip.
Electrochemistry
Sensors
Transistor types
MOSFETs
Field-effect transistors
|
https://en.wikipedia.org/wiki/Gibbs%27%20inequality
|
In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality.
It was first presented by J. Willard Gibbs in the 19th century.
Gibbs' inequality
Suppose that
P = {p_1, …, p_n}
is a discrete probability distribution. Then for any other probability distribution
Q = {q_1, …, q_n}
the following inequality between positive quantities (since the p_i and q_i are between zero and one) holds:
− Σ_{i=1}^{n} p_i log p_i ≤ − Σ_{i=1}^{n} p_i log q_i,
with equality if and only if
p_i = q_i
for all i. Put in words, the information entropy of a distribution P is less than or equal to its cross entropy with any other distribution Q.
The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written:
D_KL(P ‖ Q) ≡ Σ_{i=1}^{n} p_i log (p_i / q_i) ≥ 0.
Note that the use of base-2 logarithms is optional, and
allows one to refer to the quantity on each side of the inequality as an
"average surprisal" measured in bits.
Proof
For simplicity, we prove the statement using the natural logarithm, denoted ln. Because
log_b a = (ln a) / (ln b),
the particular logarithm base b that we choose only scales the relationship by the factor 1/ln b.
Let I denote the set of all i for which p_i is non-zero. Then, since ln x ≤ x − 1 for all x > 0, with equality if and only if x = 1, we have:
− Σ_{i ∈ I} p_i ln (q_i / p_i) ≥ − Σ_{i ∈ I} p_i (q_i / p_i − 1) = − Σ_{i ∈ I} q_i + Σ_{i ∈ I} p_i = − Σ_{i ∈ I} q_i + 1 ≥ 0.
The last inequality is a consequence of the p_i and q_i being part of a probability distribution. Specifically, the sum of all non-zero values is 1. Some non-zero q_i, however, may have been excluded since the choice of indices is conditioned upon the p_i being non-zero. Therefore, the sum of the q_i may be less than 1.
So far, over the index set I, we have:
− Σ_{i ∈ I} p_i ln (q_i / p_i) ≥ 0,
or equivalently
− Σ_{i ∈ I} p_i ln q_i ≥ − Σ_{i ∈ I} p_i ln p_i.
Both sums can be extended to all i = 1, …, n, i.e. including p_i = 0, by recalling that the expression x ln x tends to 0 as x tends to 0, and that (− ln x) tends to +∞ as x tends to 0. We arrive at
− Σ_{i=1}^{n} p_i ln q_i ≥ − Σ_{i=1}^{n} p_i ln p_i.
For equality to hold, we require
q_i / p_i = 1 for all i ∈ I, so that the equality ln (q_i / p_i) = q_i / p_i − 1 holds,
and Σ_{i ∈ I} q_i = 1, which means q_i = 0 if i ∉ I, that is, q_i = 0 if p_i = 0.
This can happen i
|
https://en.wikipedia.org/wiki/Paper%20bag%20problem
|
In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can bend but not stretch.
According to Anthony C. Robin, an approximate formula for the capacity of a sealed expanded bag is:
V = w^3 [ h/(π w) − 0.142 (1 − 10^(−h/w)) ],
where w is the width of the bag (the shorter dimension), h is the height (the longer dimension), and V is the maximum volume. The approximation ignores the crimping round the equator of the bag.
A very rough approximation to the capacity of a bag that is open at one edge is:
(This latter formula assumes that the corners at the bottom of the bag are linked by a single edge, and that the base of the bag is not a more complex shape such as a lens).
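A short Python sketch of the sealed-bag approximation as reconstructed above (the constants are taken on trust from that formula and should be treated as an assumption):

import math

def robin_volume(w, h):
    # Sealed-bag approximation: V = w^3 * (h/(pi*w) - 0.142*(1 - 10**(-h/w))).
    return w ** 3 * (h / (math.pi * w) - 0.142 * (1 - 10 ** (-h / w)))

print(round(robin_volume(1.0, 1.0), 3))   # ~0.191 for the unit square teabag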
The square teabag
For the special case where the bag is sealed on all edges and is square with unit sides, h = w = 1, the first formula estimates a volume of roughly
1/π − 0.142 (1 − 1/10),
or roughly 0.19. According to Andrew Kepert at the University of Newcastle, Australia, an upper bound for this version of the teabag problem is 0.217+, and he has made a construction that appears to give a volume of 0.2055+.
Robin also found a more complicated formula for the general paper bag, which gives 0.2017, below the bounds given by Kepert (i.e., 0.2055+ ≤ maximum volume ≤ 0.217+).
See also
Biscornu, a shape formed by attaching two squares in a different way, with the corner of one at the midpoint of the other
Mylar balloon (geometry)
Notes
References
External links
The original statement of the teabag problem
Andrew Kepert's work on the teabag problem (mirror)
Curved folds for the teabag problem
A numerical approach to the teabag problem by Andreas Gammel
Geometric shapes
Mathematical optimization
|
https://en.wikipedia.org/wiki/GeForce%207%20series
|
The GeForce 7 series is the seventh generation of Nvidia's GeForce graphics processing units. This was the last series available on AGP cards.
A slightly modified GeForce 7-based design (more specifically based on the 7800 GTX) is used as the RSX Reality Synthesizer, the graphics processor of the PlayStation 3.
Features
The following features are common to all models in the GeForce 7 series except the GeForce 7100, which lacks GCAA (gamma-corrected anti-aliasing):
Intellisample 4.0
Scalable Link Interface (SLI)
TurboCache
Nvidia PureVideo
The GeForce 7 supports hardware acceleration for H.264, but this feature was not used on Windows by Adobe Flash Player until the GeForce 8 Series.
GeForce 7100 series
The 7100 series was introduced on August 30, 2006 and is based on GeForce 6200 series architecture. This series supports only PCI Express interface. Only one model, the 7100 GS, is available.
Features
The 7100 series supports all of the standard features common to the GeForce 7 Series provided it is using the ForceWare 91.47 driver or later releases, though it lacks OpenCL/CUDA support, and its implementation of IntelliSample 4.0 lacks GCAA.
The 7100 series does not support technologies such as high-dynamic-range rendering (HDR) and UltraShadow II.
GeForce 7100 GS
Although the 7300 LE was originally intended to be the "lowest budget" GPU from the GeForce 7 lineup, the 7100 GS has taken its place. As it is little more than a revamped version of the GeForce 6200TC, it is designed as a basic PCI-e solution for OEMs to use if the chipset does not have integrated video capabilities. It uses the PCI Express graphics bus and supports up to 512 MB of DDR2 VRAM.
Performance specifications:
Graphics Bus: PCI Express
Memory Interface: 64-bits
Memory Bandwidth: 5.3 GB/s
Fill Rate: 1.4 billion pixel/s
Vertex/s: 263 million
Memory Type: DDR2 with TC
GeForce 7200 series
The 7200 series was introduced October 8, 2006 and is based on (G72) architecture. It is designed to offer a low-
|
https://en.wikipedia.org/wiki/Solaris%20Volume%20Manager
|
Solaris Volume Manager (SVM; formerly known as Online: DiskSuite, and later Solstice DiskSuite) is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.
Version 1.0 of Online: DiskSuite was released as an add-on product for SunOS in late 1991; the product has undergone significant enhancements over the years. SVM has been included as a standard part of Solaris since Solaris 8 was released in February 2000.
SVM is similar in functionality to later software volume managers such as FreeBSD Vinum volume manager, allowing metadevices (virtual disks) to be concatenated, striped or mirrored together from physical ones. It also supports soft partitioning, dynamic hot spares, and growing metadevices. The mirrors support dirty region logging (DRL, called resync regions in DiskSuite) and logging support for RAID-5.
The ZFS file system, added in the Solaris 10 6/06 release, has its own integrated volume management capabilities, but SVM continues to be included with Solaris for use with other file systems.
See also
Logical volume management
Sun Microsystems
References
External links
Solaris Volume Manager Administration Guide
OpenSolaris Community: Solaris Volume Manager
Sun Microsystems software
Storage software
|
https://en.wikipedia.org/wiki/Bar%20product
|
In information theory, the bar product of two linear codes C2 ⊆ C1 is defined as
C1 | C2 = { (c1 | c1 + c2) : c1 ∈ C1, c2 ∈ C2 },
where (a | b) denotes the concatenation of a and b. If the code words in C1 are of length n, then the code words in C1 | C2 are of length 2n.
The bar product is an especially convenient way of expressing the Reed–Muller RM (d, r) code in terms of the Reed–Muller codes RM (d − 1, r) and RM (d − 1, r − 1).
The bar product is also referred to as the | u | u+v | construction
or (u | u + v) construction.
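A small Python sketch of the construction, using toy codes invented for illustration (codewords are represented as tuples over {0, 1}):

def bar_product(C1, C2):
    # Bar product C1 | C2: all words (c1 | c1 + c2), with addition mod 2.
    return {c1 + tuple((a + b) % 2 for a, b in zip(c1, c2))
            for c1 in C1 for c2 in C2}

# Toy example: C1 is the full [2, 2] code, C2 the repetition code (C2 is a subset of C1).
C1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
C2 = {(0, 0), (1, 1)}
D = bar_product(C1, C2)
print(len(D))        # 8 codewords of length 4, i.e. rank 2 + 1 = 3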
Properties
Rank
The rank of the bar product is the sum of the two ranks:
rank (C1 | C2) = rank (C1) + rank (C2).
Proof
Let { x1, …, xk } be a basis for C1 and let { y1, …, yl } be a basis for C2. Then the set
{ (xi | xi) : 1 ≤ i ≤ k } ∪ { (0 | yj) : 1 ≤ j ≤ l }
is a basis for the bar product C1 | C2.
Hamming weight
The Hamming weight w of the bar product is the lesser of (a) twice the weight of C1, and (b) the weight of C2:
w (C1 | C2) = min { 2 w(C1), w(C2) }.
Proof
For all c1 ∈ C1, (c1 | c1) ∈ C1 | C2,
which has weight 2 w(c1). Equally,
(0 | c2) ∈ C1 | C2 for all c2 ∈ C2, and has weight w(c2). So minimising over non-zero c1 ∈ C1 and c2 ∈ C2 we have
w (C1 | C2) ≤ min { 2 w(C1), w(C2) }.
Now let c1 ∈ C1 and c2 ∈ C2, not both zero. If c2 ≠ 0 then:
w (c1 | c1 + c2) = w(c1) + w(c1 + c2) ≥ w(c1 + (c1 + c2)) = w(c2) ≥ w(C2).
If c2 = 0 then c1 ≠ 0 and
w (c1 | c1 + c2) = w(c1 | c1) = 2 w(c1) ≥ 2 w(C1),
so
w (C1 | C2) ≥ min { 2 w(C1), w(C2) }.
See also
Reed–Muller code
References
Information theory
Coding theory
|
https://en.wikipedia.org/wiki/Hydrogen-terminated%20silicon%20surface
|
Hydrogen-terminated silicon surface is a chemically passivated silicon substrate where the surface Si atoms are bonded to hydrogen. The hydrogen-terminated surfaces are hydrophobic, luminescent, and amenable to chemical modification. Hydrogen-terminated silicon is an intermediate in the growth of bulk silicon from silane:
SiH4 → Si + 2H2
Preparation
Silicon wafers are treated with solutions of electronic-grade hydrofluoric acid in water, buffered water, or alcohol. One of the relevant reactions is simply removal of silicon oxides:
SiO2 + 4 HF → SiF4 + 2 H2O
The key reaction, however, is the formation of the surface hydrosilane (Si–H) functional group.
The atomic force microscope (AFM) has been used to manipulate hydrogen-terminated silicon surfaces.
Properties
Hydrogen termination removes dangling bonds. All surface Si atoms are tetrahedral. Hydrogen termination confers stability in ambient environments. So again, the surface is both clean (of oxides) and relatively inert. These materials can be handled in air without special care for several minutes.
The Si-H bond in fact is stronger than the Si-Si bonds. Two kinds of Si-H centers are proposed, both featuring terminal Si-H bonds. One kind of site has one Si-H bond. The other kind of site features SiH2 centers.
Like organic hydrosilanes, the H-Si groups on the surface react with terminal alkenes and diazo groups. The reaction is called hydrosilylation. Many kinds of organic compounds with various functions can be introduced onto the silicon surface by the hydrosilylation of a hydrogen-terminated surface. The infrared spectrum of hydrogen-terminated silicon shows a band near 2090 cm−1, not very different from νSi-H for organic hydrosilanes.
Potential applications
One group proposed to use the material to create digital circuits made of quantum dots by removing hydrogen atoms from the silicon surface.
See also
Silanization of silicon and mica
References
External links
Materials
Nanotechnology
Thin fi
|
https://en.wikipedia.org/wiki/ISFET
|
An ion-sensitive field-effect transistor (ISFET) is a field-effect transistor used for measuring ion concentrations in solution; when the ion concentration (such as H+, see pH scale) changes, the current through the transistor will change accordingly. Here, the solution is used as the gate electrode. A voltage between substrate and oxide surfaces arises due to an ion sheath. It is a special type of MOSFET (metal–oxide–semiconductor field-effect transistor), and shares the same basic structure, but with the metal gate replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. Invented in 1970, the ISFET was the first biosensor FET (BioFET).
The surface hydrolysis of Si–OH groups of the gate materials varies in aqueous solutions due to pH value. Typical gate materials are SiO2, Si3N4, Al2O3 and Ta2O5.
The mechanism responsible for the oxide surface charge can be described by the site binding model, which describes the equilibrium between the Si–OH surface sites and the H+ ions in the solution. The hydroxyl groups coating an oxide surface such as that of SiO2 can donate or accept a proton and thus behave in an amphoteric way as illustrated by the following acid-base reactions occurring at the oxide-electrolyte interface:
—Si–OH + H2O ↔ —Si–O− + H3O+
—Si–OH + H3O+ ↔ —Si–OH2+ + H2O
An ISFET's source and drain are constructed as for a MOSFET. The gate electrode is separated from the channel by a barrier which is sensitive to hydrogen ions and a gap to allow the substance under test to come in contact with the sensitive barrier. An ISFET's threshold voltage depends on the pH of the substance in contact with its ion-sensitive barrier.
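For context on the pH dependence described above, the following Python sketch evaluates the ideal Nernstian limit of roughly 59 mV per pH unit at 25 °C; real ISFET gate materials are typically sub-Nernstian, so this is a theoretical upper bound rather than a device specification.

R, F = 8.314, 96485.0        # gas constant J/(mol*K), Faraday constant C/mol

def nernstian_sensitivity_mv_per_ph(temp_c=25.0):
    # Ideal Nernstian limit: 2.303 * R * T / F volts per pH unit.
    t_kelvin = temp_c + 273.15
    return 2.303 * R * t_kelvin / F * 1000.0

print(round(nernstian_sensitivity_mv_per_ph(), 1))   # ~59.2 mV/pH at 25 degrees C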
Practical limitations due to the reference electrode
An ISFET electrode sensitive to H+ concentration can be used as a conventional glass electrode to measure the pH of a solution. However, it also requires a reference electrode to operate. If the reference electrode used in contact with the soluti
|
https://en.wikipedia.org/wiki/Concurrent%20ML
|
Concurrent ML (CML) is a concurrent extension of the Standard ML programming language characterized by its ability to allow programmers to create composable communication abstractions that are first-class rather than built into the language. The design of CML and its primitive operations have been adopted in several other programming languages such as GNU Guile, Racket, and Manticore.
Concepts
Many programming languages that support concurrency offer communication channels that allow the exchange of values between processes or threads running concurrently in a system. Communications established between processes may follow a specific protocol, requiring the programmer to write functions to establish the required pattern of communication. Meanwhile, a communicating system often requires establishing multiple channels, such as to multiple servers, and then choosing between the available channels when new data is available. This can be accomplished using polling, such as with the select operation on Unix systems.
Combining both application-specific protocols and multi-party communication may be complicated due to the need to introduce polling and checking for blocking within a pre-existing protocol. Concurrent ML solves this problem by reducing this coupling of programming concepts by introducing synchronizable events. Events are a first-class abstraction that can be used with a synchronization operation (called sync in both CML and Racket) in order to potentially block and then produce some value resulting from communication (for example, data transmitted on a channel).
In CML, events can be combined or manipulated using a number of primitive operations. Each primitive operation constructs a new event rather than modifying the event in-place, allowing for the construction of compound events that represent the desired communication pattern. For example, CML allows the programmer to combine several sub-events in order to create a compound event that can then make a non-deter
|
https://en.wikipedia.org/wiki/Geodesics%20in%20general%20relativity
|
In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.
Mathematical expression
The full geodesic equation is
d²x^μ/ds² + Γ^μ_αβ (dx^α/ds)(dx^β/ds) = 0,
where s is a scalar parameter of motion (e.g. the proper time), and Γ^μ_αβ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3 and the summation convention is used for the repeated indices α and β. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
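As a hedged illustration of how the equation above can be integrated numerically, the following Python sketch advances position and 4-velocity by one explicit Euler step; the function gamma supplying the Christoffel symbols is a hypothetical user-provided callback, and a production integrator would use a higher-order scheme.

import numpy as np

def geodesic_step(x, u, gamma, ds):
    # One Euler step of d^2 x^mu / ds^2 = -Gamma^mu_{ab} (dx^a/ds)(dx^b/ds).
    # x, u: length-4 arrays; gamma(x): array of shape (4, 4, 4) holding Gamma[mu][a][b].
    G = gamma(x)
    du = -np.einsum('mab,a,b->m', G, u, u) * ds
    return x + u * ds, u + du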
Equivalent mathematical expression using coordinate time as parameter
So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, t ≡ x^0 (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes:
d²x^μ/dt² = − Γ^μ_αβ (dx^α/dt)(dx^β/dt) + Γ^0_αβ (dx^α/dt)(dx^β/dt) (dx^μ/dt).
This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with
|
https://en.wikipedia.org/wiki/Fabric%20Shortest%20Path%20First
|
Fabric Shortest Path First (FSPF) is a routing protocol used in Fibre Channel computer networks. It calculates the best path between network switches, establishes routes across the fabric and calculates alternate routes in event of a failure or network topology change. FSPF can guarantee in-sequence delivery of frames, even if the routing topology has changed during a failure, by enforcing a 'hold down' time before a new path is activated.
FSPF was created by Brocade Communications Systems in collaboration with Gadzoox, McDATA, Ancor Communications (now QLogic), and Vixel; it was submitted as an American National Standards Institute standard. It was introduced in 2000. The protocol is similar in conception to the Open Shortest Path First used in IP networks. FSPF has been adopted as the industry standard for routing between Fibre Channel switches within a fabric.
A management information base for FSPF was published as RFC 4626.
References
Fibre Channel
|
https://en.wikipedia.org/wiki/Lazy%20caterer%27s%20sequence
|
The lazy caterer's sequence, more formally known as the central polygonal numbers, describes the maximum number of pieces of a disk (a pancake or pizza is usually used to describe the situation) that can be made with a given number of straight cuts. For example, three cuts across a pancake will produce six pieces if the cuts all meet at a common point inside the circle, but up to seven if they do not. This problem can be formalized mathematically as one of counting the cells in an arrangement of lines; for generalizations to higher dimensions, see arrangement of hyperplanes.
The analogue of this sequence in three dimensions is the cake numbers.
Formula and sequence
The maximum number p of pieces that can be created with a given number of cuts n (where n ≥ 0) is given by the formula
$$p = \frac{n^2 + n + 2}{2}.$$
Using binomial coefficients, the formula can be expressed as
$$p = \binom{n}{0} + \binom{n}{1} + \binom{n}{2} = 1 + \binom{n+1}{2}.$$
Simply put, each number equals a triangular number plus 1.
As the third column of Bernoulli's triangle (k = 2) is a triangular number plus one, it forms the lazy caterer's sequence for n cuts, where n ≥ 2.
The sequence can be alternatively derived from the sum of up to the first 3 terms of each row of Pascal's triangle:
{| class="wikitable" style="text-align:right;"
! !! 0 !! 1 !! 2
! rowspan="11" style="padding:0;"| !! Sum
|-
! style="text-align:left;"|0
| 1 || - || - || 1
|-
! style="text-align:left;"|1
| 1 || 1 || - || 2
|-
! style="text-align:left;"|2
| 1 || 2 || 1 || 4
|-
! style="text-align:left;"|3
| 1 || 3 || 3 || 7
|-
! style="text-align:left;"|4
| 1 || 4 || 6 || 11
|-
! style="text-align:left;"|5
| 1 || 5 || 10 || 16
|-
! style="text-align:left;"|6
| 1 || 6 || 15 || 22
|-
! style="text-align:left;"|7
| 1 || 7 || 21 || 29
|-
! style="text-align:left;"|8
| 1 || 8 || 28 || 37
|-
! style="text-align:left;"|9
| 1 || 9 || 36 || 46
|}
This sequence, starting with n = 0, thus results in
1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56, 67, 79, 92, 106, 121, 137, 154, 172, 191, 211, ...
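The closed form is easy to check computationally; a short Python sketch reproducing the first terms above:

def lazy_caterer(n):
    # Maximum number of pieces from n straight cuts: p = (n^2 + n + 2) / 2
    return (n * n + n + 2) // 2

print([lazy_caterer(n) for n in range(10)])
# [1, 2, 4, 7, 11, 16, 22, 29, 37, 46]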
Its three-dimensional analogue is known as the cake numbers.
|
https://en.wikipedia.org/wiki/Website%20builder
|
Website builders are tools that typically allow the construction of websites without manual code editing. They fall into two categories:
Online proprietary tools provided by web hosting service companies. These are typically intended for service users to build their own website. Some services allow the site owner to use alternative tools (commercial or open-source) — the more complex of these may also be described as content management systems.
Application software that runs on a personal computing device, used to create and edit the pages of a web site and then publish these pages on any host. (These are often considered to be "website design software", rather than "website builders".)
History
The first website, manually written in HTML, was created on August 6, 1991.
Over time, software was created to help design web pages. For example, Microsoft released FrontPage in November 1995.
By 1998, Dreamweaver had been established as the industry leader; however, some have criticized the quality of the code produced by such software as being overblown and reliant on HTML tables. As the industry moved towards W3C standards, Dreamweaver and others were criticized for not being compliant. Compliance has improved over time, but many professionals still prefer to write optimized markup by hand.
Open source tools were typically developed to the standards and made fewer exceptions for the then-dominant Internet Explorer's deviations from the standards.
The W3C started Amaya in 1996 to showcase Web technologies in a fully featured Web client. This was to provide a framework that integrated many W3C technologies in a single, consistent environment. Amaya started as an HTML and CSS editor and now supports XML, XHTML, MathML, and SVG.
GeoCities was one of the first more modern site builders that didn't require any technical skills. Five years after its launch in 1994, Yahoo! purchased it for $3.6 billion. After becoming obsolescent, it was shut down in April 2009.
|
https://en.wikipedia.org/wiki/Theoretical%20astronomy
|
Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astrono
|
https://en.wikipedia.org/wiki/ECMWF%20re-analysis
|
The ECMWF reanalysis project is a meteorological reanalysis project carried out by the European Centre for Medium-Range Weather Forecasts (ECMWF).
The first reanalysis product, ERA-15, generated reanalyses for approximately 15 years, from December 1978 to February 1994. The second product, ERA-40 (originally intended as a 40-year reanalysis) begins in 1957 (the International Geophysical Year) and covers 45 years to 2002. As a precursor to a revised extended reanalysis product to replace ERA-40, ECMWF released ERA-Interim, which covers the period from 1979 to 2019.
A new reanalysis product, ERA5, has been released by ECMWF as part of the Copernicus Climate Change Service. This product has higher spatial resolution (31 km) and covers the period from 1979 to the present. An extension back to 1940 became available in 2023.
In addition to reanalysing all the old data using a consistent system, the reanalyses also make use of much archived data that was not available to the original analyses. This allows for the correction of many historical hand-drawn maps, where the estimation of features was common in areas of data sparsity. It also makes it possible to create maps of atmospheric levels that were not commonly analysed until more recent times.
Generation
Many sources of meteorological observations were used, including radiosondes, balloons, aircraft, buoys, satellites, and scatterometers. This data was run through the ECMWF computer model at a 125 km resolution. As the ECMWF's computer model is one of the more highly regarded in the field of forecasting, many scientists take its reanalysis to have similar merit. The data is stored in GRIB format. The reanalysis was done in an effort to improve the accuracy of historical weather maps and aid in a more detailed analysis of various weather systems through a period that was severely lacking in computerized data. With the data from reanalyses such as this, many of the more modern computerized tools for analyzing storm systems
|
https://en.wikipedia.org/wiki/Ensembl%20genome%20database%20project
|
Ensembl genome database project is a scientific project at the European Bioinformatics Institute, which provides a centralized resource for geneticists, molecular biologists and other researchers studying the genomes of our own species and other vertebrates and model organisms. Ensembl is one of several well known genome browsers for the retrieval of genomic information.
Similar databases and browsers are found at NCBI and the University of California, Santa Cruz (UCSC).
History
The human genome consists of three billion base pairs, which code for approximately 20,000–25,000 genes. However, the genome alone is of little use unless the locations and relationships of individual genes can be identified. One option is manual annotation, whereby a team of scientists tries to locate genes using experimental data from scientific journals and public databases. However, this is a slow, painstaking task. The alternative, known as automated annotation, is to use the power of computers to do the complex pattern-matching of protein to DNA. The Ensembl project was launched in 1999 in response to the imminent completion of the Human Genome Project, with the initial goals of automatically annotating the human genome, integrating this annotation with available biological data, and making all this knowledge publicly available.
In the Ensembl project, sequence data are fed into the gene annotation system (a collection of software "pipelines" written in Perl) which creates a set of predicted gene locations and saves them in a MySQL database for subsequent analysis and display. Ensembl makes these data freely accessible to the world research community. All the data and code produced by the Ensembl project is available to download, and there is also a publicly accessible database server allowing remote access. In addition, the Ensembl website provides computer-generated visual displays of much of the data.
Over time the project has expanded to include additional species (including key model
|
https://en.wikipedia.org/wiki/Danger%20zone%20%28food%20safety%29
|
The danger zone is the temperature range in which food-borne bacteria can grow. Food safety agencies, such as the United States' Food Safety and Inspection Service (FSIS), define the danger zone as roughly 40 to 140 °F (4.4 to 60 °C). The FSIS stipulates that potentially hazardous food should not be stored at temperatures in this range in order to prevent foodborne illness, and that food that remains in this zone for more than two hours should not be consumed. Foodborne microorganisms grow much faster in the middle of the zone. In the UK and NI, the danger zone is defined as 8 to 63 °C.
Food-borne bacteria, in large enough numbers, may cause food poisoning, symptoms similar to gastroenteritis or "stomach flu" (a misnomer, as true influenza primarily affects the respiratory system). Some of the symptoms include stomach cramps, nausea, vomiting, diarrhea, and fever. Food-borne illness becomes more dangerous in certain populations, such as people with weakened immune systems, young children, the elderly, and pregnant women. In Canada, there are approximately 4 million cases of food-borne disease per year. These symptoms can begin as early as shortly after and as late as weeks after consumption of the contaminated food.
Time and temperature control safety (TCS) plays a critical role in food handling. To prevent time-temperature abuse, the amount of time food spends in the danger zone must be minimized. A logarithmic relationship exists between microbial cell death and temperature, that is, a small decrease of cooking temperature can result in considerable numbers of cells surviving the process. In addition to reducing the time spent in the danger zone, foods should be moved through the danger zone as few times as possible when reheating or cooling.
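The logarithmic relationship can be illustrated with the decimal-reduction (D-value) model; the D- and z-values in the Python sketch below are invented, purely illustrative numbers, not regulatory figures:

def survivors(n0, minutes, d_ref, z, temp_c, ref_temp_c):
    # Decimal-reduction model: D shrinks tenfold for every z-degree rise in
    # temperature, and the population falls tenfold for every D minutes of heating.
    d = d_ref * 10 ** ((ref_temp_c - temp_c) / z)
    return n0 * 10 ** (-minutes / d)

n0 = 1e6                                                               # initial cell count (illustrative)
print(survivors(n0, 10, d_ref=1.0, z=5.0, temp_c=60, ref_temp_c=60))   # ~1e-4: effectively none survive
print(survivors(n0, 10, d_ref=1.0, z=5.0, temp_c=55, ref_temp_c=60))   # 1e5: a small temperature drop leaves many survivors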
Foods that are potentially hazardous inside the danger zone:
Meat: beef, poultry, pork, seafood
Eggs and other protein-rich foods
Dairy products
Cut or peeled fresh produce
Cooked vegetables, beans, rice, pasta
Sauc
|
https://en.wikipedia.org/wiki/Cassini%20oval
|
In geometry, a Cassini oval is a quartic plane curve defined as the locus of points in the plane such that the product of the distances to two fixed points (foci) is constant. This may be contrasted with an ellipse, for which the sum of the distances is constant, rather than the product. Cassini ovals are the special case of polynomial lemniscates when the polynomial used has degree 2.
Cassini ovals are named after the astronomer Giovanni Domenico Cassini who studied them in the late 17th century.
Cassini believed that the Sun traveled around the Earth on one of these ovals, with the Earth at one focus of the oval.
Other names include Cassinian ovals, Cassinian curves and ovals of Cassini.
Formal definition
A Cassini oval is a set of points, such that for any point $P$ of the set, the product of the distances to two fixed points $q_1$ and $q_2$ is a constant, usually written as $b^2$ where $b > 0$:
$$|Pq_1| \cdot |Pq_2| = b^2.$$
As with an ellipse, the fixed points are called the foci of the Cassini oval.
Equations
If the foci are (a, 0) and (−a, 0), then the equation of the curve is
$$\bigl((x-a)^2 + y^2\bigr)\bigl((x+a)^2 + y^2\bigr) = b^4.$$
When expanded this becomes
$$(x^2 + y^2)^2 - 2a^2(x^2 - y^2) + a^4 = b^4.$$
The equivalent polar equation is
$$r^4 - 2a^2 r^2 \cos 2\theta = b^4 - a^4.$$
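The equivalence of the two Cartesian forms can be checked symbolically; a minimal sketch assuming SymPy is available:

import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
product_form = ((x - a)**2 + y**2) * ((x + a)**2 + y**2)          # |Pq1|^2 * |Pq2|^2
expanded_form = (x**2 + y**2)**2 - 2*a**2*(x**2 - y**2) + a**4
print(sp.simplify(product_form - expanded_form))                  # 0, so both equal b**4 on the curve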
Shape
The curve depends, up to similarity, on e = b/a. When e < 1, the curve consists of two disconnected loops, each of which contains a focus. When e = 1, the curve is the lemniscate of Bernoulli having the shape of a sideways figure eight with a double point (specifically, a crunode) at the origin. When e > 1, the curve is a single, connected loop enclosing both foci. It is peanut-shaped for $1 < e < \sqrt{2}$ and convex for $e \geq \sqrt{2}$. The limiting case of a → 0 (hence e → ∞), in which case the foci coincide with each other, is a circle.
The curve always has x-intercepts at ±c where c² = a² + b². When e < 1 there are two additional real x-intercepts and when e > 1 there are two real y-intercepts, all other x- and y-intercepts being imaginary.
The curve has double points at the circular points at infinity, in other words the curve is bicircular. These points are biflecnodes, meaning that the curve
|
https://en.wikipedia.org/wiki/Social%20networking%20service
|
A social networking service or SNS (sometimes called a social networking site) is a type of online social media platform which people use to build social networks or social relationships with other people who share similar personal or career content, interests, activities, backgrounds or real-life connections.
Social networking services vary in format and the number of features. They can incorporate a range of new information and communication tools, operating on desktops, laptops, and mobile devices such as tablet computers and smartphones. They may feature digital photo and video sharing as well as diary entries online (blogging). Online community services are sometimes considered social-network services by developers and users, though in a broader sense a social-network service usually provides an individual-centered service whereas online community services are group-centered. Generally defined as "websites that facilitate the building of a network of contacts in order to exchange various types of content online," social networking sites provide a space for interaction to continue beyond in-person interactions. These computer-mediated interactions link members of various networks and may help to create, sustain and develop new social and professional relationships.
Social networking sites allow users to share ideas, digital photos and videos, posts, and to inform others about online or real-world activities and events with people within their social network. While in-person social networking – such as gathering in a village market to talk about events – has existed since the earliest development of towns, the web enables people to connect with others who live in different locations across the globe (dependent on access to an Internet connection to do so). Depending on the platform, members may be able to contact any other member. In other cases, members can contact anyone they have a connection to, and subsequently anyone that contact has a connection to, and so o
|
https://en.wikipedia.org/wiki/A%20System%20of%20Logic
|
A System of Logic, Ratiocinative and Inductive is an 1843 book by English philosopher John Stuart Mill.
Overview
In this work, he formulated the five principles of inductive reasoning that are known as Mill's Methods. This work is important in the philosophy of science, and more generally, insofar as it outlines the empirical principles Mill would use to justify his moral and political philosophies.
An article in "Philosophy of Recent Times" described this book as an "attempt to expound a psychological system of logic within empiricist principles."
This work was important to the history of science, being a strong influence on scientists such as Dirac. A System of Logic also made an impression on Gottlob Frege, who rebuked many of Mill's ideas about the philosophy of mathematics in his work The Foundations of Arithmetic.
Mill revised the original work several times over the course of thirty years in response to critiques and commentary by Whewell, Bain, and others.
Editions
Mill, John Stuart, A System of Logic, University Press of the Pacific, Honolulu, 2002,
See also
Emergentism
References
Sources
Philosophy of Recent Times, ed. J. B. Hartmann (New York: McGraw-Hill, 1967), I, 14.
External links
Online editions
1843. Google Books: Vol. I, Vol. II (first edition)
1846. Google Books: All
1851. Google Books: Vol. I, Vol. II missing? Internet Archive: Vol. I, Vol. II missing? (third edition)
1858. Google Books: All
1862. Google Books: Vol. I, Vol. II
1868. Internet Archive: Vol. I, Vol. II. Also Vol. I, Vol. II. Also Vol. I (seventh edition)
1872. Internet Archive: Vol. I, Vol. II. Also partial HTML version.(eighth edition)
1882. Internet Archive: All
1882. Project Gutenberg: All
1843 non-fiction books
Books by John Stuart Mill
Logic books
Philosophy of science
|
https://en.wikipedia.org/wiki/VCR/DVD%20combo
|
A VCR/DVD combination, VCR/DVD combo, or DVD/VCR combo, is a multiplex or converged device that allows the user to watch both VHS tapes and DVDs. Many such players can also play additional formats such as CD and VCD.
VCR/DVD player combinations were first introduced around the year 1999, with the first model released by Go Video, model DVR5000, manufactured by Samsung Electronics. VCR/DVD combinations were sometimes criticized as being of poorer quality in terms of resolution than stand-alone units. These products also had a disadvantage in that if one function (DVD or VHS) became unusable, the entire unit needed to be replaced or repaired, though later models which suffered from DVD playback lag still functioned with the VCR.
A combo unit typically offers features such as recording a DVD onto VHS (on most models), recording a show to VHS with a digital-to-analog converter device (unless the unit has a digital TV tuner), LP recording for VHS, surround sound for Dolby Digital and DTS (DVD), component connections for DVD (although some may lack the connection), 480p progressive scan for the DVD side, VCR+, playback of tapes at a variety of speeds, and front A/V inputs (VCR only).
For ease of use, these units have one or more buttons for switching the output source. Usually, the recording capabilities are VCR-exclusive, while the better picture quality is DVD-exclusive, but some models include S-Video, component, or HDMI output for VCR as well. These devices were among the only VCRs, alongside some VCR/Blu-ray combos, to be equipped with an HDMI port for HDTV viewing, upscaling to several different resolutions including 1080i.
Shortly after the turn of the century, combo devices including DVD recorders (instead of players) also became available. These could be used for transferring VHS material onto recordable blank DVDs. In rare cases, such devices had component inputs to record with the best connection possible.
In Jul
|
https://en.wikipedia.org/wiki/ED50
|
ED50 ("European Datum 1950", EPSG:4230) is a geodetic datum which was defined after World War II for the international connection of geodetic networks.
Background
Some of the important battles of World War II were fought on the borders of Germany, the Netherlands, Belgium and France, and the mapping of these countries had incompatible latitude and longitude positioning. During the war the German Military Survey (Reichsamt Kriegskarten und Vermessungswesen), under the command of Lieutenant General Gerlach Hemmerich, began a systematic mapping of the areas under the control of the German Military, a large part of Europe. The allies were also concerned about the state of mapping in Europe, and in 1944 the US Army Map Service set up an intelligence team to collect mapping and surveying information from the Germans as the allied armies moved through Europe after the Normandy landings. The group, known as Houghteam after Major Floyd W. Hough, collected much material. Their greatest success was in April 1945. They found a large cache of material in Saalfeld, Thuringia, which proved to be the entire geodetic archives of the German Army. The shipment, 75 truckloads in all, was transferred to Bamberg, and then to Washington for evaluation.
Shortly after this, the team captured the personnel of the Reichsamt für Landesaufnahme, the State Surveying Service, in Friedrichroda, also in Thuringia. This group had been working on the integration of the mapping of the occupied territories with that of Germany, under Professor Erwin Gigas, a geodesist with an international reputation. They were directed to continue this work, in Bamberg in the US zone of occupation, as part of the US-led effort to develop a single adjusted triangulation for Central Europe. This was completed in 1947. The work was then extended to cover much of Western Europe which was completed in 1950, and became ED50.
The European triangulation was originally classified military information. It was de-classified
|
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20operating%20systems
|
This is a list of operating systems written and published by Microsoft. For the codenames that Microsoft gave their operating systems, see Microsoft codenames. For a list of versions of Microsoft Windows, see List of Microsoft Windows versions.
MS-DOS
See MS-DOS Versions for a full list.
Windows
Windows 1.0 until 8.1
Windows 10/11 and Windows Server 2016/2019/2022
Windows Mobile
Windows Mobile 2003
Windows Mobile 2003 SE
Windows Mobile 5
Windows Mobile 6
Windows Phone
Xbox gaming
Xbox system software
Xbox 360 system software
Xbox One and Xbox Series X/S system software
OS/2
Unix and Unix-like
Xenix
Nokia X platform
Microsoft Linux distributions
Azure Sphere
SONiC
Windows Subsystem for Linux
CBL-Mariner
Other operating systems
MS-Net
LAN Manager
MIDAS
Singularity
Midori
Zune
KIN OS
Nokia Asha platform
Barrelfish
Time line
See also
List of Microsoft topics
List of operating systems
External links
Concise Microsoft O.S. Timeline, by Bravo Technology Center
Micro
Operating systems
|
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20software
|
Microsoft is a developer of personal computer software. It is best known for its Windows operating system, the Internet Explorer and subsequent Microsoft Edge web browsers, the Microsoft Office family of productivity software plus services, and the Visual Studio IDE. The company also publishes books (through Microsoft Press) and video games (through Xbox Game Studios), and produces its own line of hardware. The following is a list of notable Microsoft software applications.
Software development
Azure DevOps
Azure DevOps Server (formerly Team Foundation Server and Visual Studio Team System)
Azure DevOps Services (formerly Visual Studio Team Services, Visual Studio Online and Team Foundation Service)
BASICA
Bosque
CLR Profiler
GitHub
Atom
GitHub Desktop
GitHub Copilot
npm
Spectrum
Dependabot
GW-BASIC
IronRuby
IronPython
JScript
Microsoft Liquid Motion
Microsoft BASIC, also licensed as:
Altair BASIC
AmigaBASIC
Applesoft BASIC
Commodore BASIC
Color BASIC
MBASIC
Spectravideo Extended BASIC
TRS-80 Level II BASIC
Microsoft MACRO-80
Microsoft Macro Assembler
Microsoft Small Basic
Microsoft Visual SourceSafe
Microsoft XNA
Microsoft WebMatrix
MSX BASIC
NuGet
QBasic and QuickBASIC
TASC (The AppleSoft Compiler)
TypeScript
VBScript
Visual Studio
Microsoft Visual Studio Express
Visual Basic
Visual Basic .NET
Visual Basic for Applications
Visual C++
C++/CLI
Managed Extensions for C++
Visual C#
Visual FoxPro
Visual J++
Visual J#
Visual Studio Code
Visual Studio Lab Management
Visual Studio Tools for Applications
Visual Studio Tools for Office
VSTS Profiler
Windows API
Windows SDK
WordBASIC
Xbox Development Kit
3D
3D Builder
3D Scan (requires a Kinect for Xbox One sensor)
3D Viewer
AltspaceVR
Bing Maps for Enterprise (formerly "Bing Maps Platform" and "Microsoft Virtual Earth")
Direct3D
Havok
HoloStudio
Kinect for Windows SDK
Microsoft Mesh
Paint 3D
Simplygon
Educational
Bing
Bing Bar
Browstat
Creative Writ
|
https://en.wikipedia.org/wiki/Geodetic%20control%20network
|
A geodetic control network (also geodetic network, reference network, control point network, or control network) is a network, often of triangles, which are measured precisely by techniques of control surveying, such as terrestrial surveying or satellite geodesy.
A geodetic control network consists of stable, identifiable points with published datum values derived from observations that tie the points together.
Classically, a control is divided into horizontal (X-Y) and vertical (Z) controls (components of the control); however, with the advent of satellite navigation systems, GPS in particular, this division is becoming obsolete.
Many organizations contribute information to the geodetic control network.
The higher-order (high precision, usually millimeter-to-decimeter on a scale of continents) control points are normally defined in both space and time using global or space techniques, and are used for "lower-order" points to be tied into. The lower-order control points are normally used for engineering, construction and navigation. The scientific discipline that deals with the establishing of coordinates of points in a control network is called geodesy.
Cartography applications
After a cartographer registers key points in a digital map to the real world coordinates of those points on the ground, the map is then said to be "in control". Having a base map and other data in geodetic control means that they will overlay correctly.
When map layers are not in control, it requires extra work to adjust them to line up, which introduces additional error.
Those real world coordinates are generally in some particular map projection, unit, and geodetic datum.
Measurement techniques
Terrestrial techniques
Triangulation
In "classical geodesy" (up to the sixties) control networks were established by triangulation using measurements of angles and of some spare distances. The precise orientation to the geographic north is achieved through methods of geodetic astronomy.
|
https://en.wikipedia.org/wiki/Geodetic%20astronomy
|
Geodetic astronomy or astronomical geodesy (astro-geodesy) is the application of astronomical methods to geodetic networks and other technical projects of geodesy.
Applications
The most important applications are:
Establishment of geodetic datum systems (e.g. ED50) or at expeditions
apparent places of stars, and their proper motions
precise astronomical navigation
astro-geodetic geoid determination
modelling the rock densities of the topography and of geological layers in the subsurface
Monitoring of the Earth rotation and polar wandering
Contribution to the time system of physics and geosciences
Measuring techniques
Important measuring techniques are:
Latitude determination and longitude determination, by theodolites, tacheometers, astrolabes or zenith cameras
time and star positions by observation of star transits, e.g. by meridian circles (visual, photographic or CCD)
Azimuth determination
for the exact orientation of geodetic networks
for mutual transformations between terrestrial and space methods
for improved accuracy by means of "Laplace points" at special fixed points
Vertical deflection determination and their use
in geoid determination
in mathematical reduction of very precise networks
for geophysical and geological purposes (see above)
Modern spatial methods
VLBI with radio sources (quasars)
Astrometry of stars by scanning satellites like Hipparcos or the future Gaia.
The accuracy of these methods depends on the instrument and its spectral wavelength, the measuring or scanning method, the amount of observing time (versus economy), the atmospheric conditions, the stability of the ground station or of the satellite, mechanical and temperature effects on the instrument, the experience and skill of the observer, and the accuracy of the physical-mathematical models.
Therefore, the accuracy ranges from 60" (navigation, ~1 mile) to 0.001" and better (a few cm; satellites, VLBI), e.g.:
angles (vertical deflections and azimuths) ±1" up to 0.1"
ge
|
https://en.wikipedia.org/wiki/Circuit-level%20gateway
|
A circuit-level gateway is a type of firewall.
Circuit-level gateways work at the session layer of the OSI model, or as a "shim-layer" between the application layer and the transport layer of the TCP/IP stack. They monitor TCP handshaking between packets to determine whether a requested session is legitimate. Information passed to a remote computer through a circuit-level gateway appears to have originated from the gateway. Firewall traffic is screened according to specified session rules and may be restricted to recognized computers only. Circuit-level firewalls conceal the details of the protected network from external traffic, which helps to block access by impostors. Circuit-level gateways are relatively inexpensive and have the advantage of hiding information about the private network they protect. However, they do not filter individual packets.
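The "appears to have originated from the gateway" behaviour can be illustrated by a bare TCP relay; in the Python sketch below the listen and target addresses are placeholders, and no session-rule checking is shown:

import socket
import threading

LISTEN = ("0.0.0.0", 8080)            # placeholder listen address
TARGET = ("example.org", 80)          # placeholder protected/target server

def pump(src, dst):
    # Copy bytes one way until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client):
    # The gateway opens its own connection, so the server sees the gateway's address.
    upstream = socket.create_connection(TARGET)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN)
srv.listen()
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()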
See also
Application firewall
Application-level gateway firewall
Bastion host
Dual-homed
External links
http://netsecurity.about.com/cs/generalsecurity/g/def_circgw.htm
http://www.softheap.com/internet/circuit-level-gateway.html
http://www.pcstats.com/articleview.cfm?articleid=1450&page=5
Internet architecture
Network socket
Transmission Control Protocol
|
https://en.wikipedia.org/wiki/Flattening
|
Flattening is a measure of the compression of a circle or sphere along a diameter to form an ellipse or an ellipsoid of revolution (spheroid) respectively. Other terms used are ellipticity, or oblateness. The usual notation for flattening is $f$ and its definition in terms of the semi-axes $a$ and $b$ of the resulting ellipse or ellipsoid is
$$f = \frac{a - b}{a}.$$
The compression factor is $b/a$ in each case; for the ellipse, this is also its aspect ratio.
Definitions
There are three variants: the flattening $f$, sometimes called the first flattening, as well as two other "flattenings" $f'$ and $n$, each sometimes called the second flattening, sometimes only given a symbol, or sometimes called the second flattening and third flattening, respectively.
In the following, $a$ is the larger dimension (e.g. semimajor axis), whereas $b$ is the smaller (semiminor axis). All flattenings are zero for a circle ($a = b$).
{| class="wikitable" style="border:1px solid darkgray;" cellpadding="5"
! style="padding-left: 0.5em" scope="row" | (First) flattening
| style="padding-left: 0.5em" | $f$
| style="padding-left: 0.5em" | $\frac{a-b}{a}$
| style="padding-left: 0.5em " | Fundamental. Geodetic reference ellipsoids are specified by giving $\frac{1}{f}$
|-
! style="padding-left: 0.5em" scope="row" | Second flattening
| style="padding-left: 0.5em" | $f'$
| style="padding-left: 0.5em" | $\frac{a-b}{b}$
| style="padding-left: 0.5em" | Rarely used.
|-
! style="padding-left: 0.5em" scope="row" | Third flattening
| style="padding-left: 0.5em" | $n$
| style="padding-left: 0.5em" | $\frac{a-b}{a+b}$
| style="padding-left: 0.5em" | Used in geodetic calculations as a small expansion parameter.
|}
Identities
The flattenings can be related to each other:
$$f' = \frac{f}{1 - f}, \qquad n = \frac{f}{2 - f}.$$
The flattenings are related to other parameters of the ellipse. For example,
$$e^2 = 2f - f^2 = f(2 - f),$$
where $e$ is the eccentricity.
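As a worked example, taking the WGS84 constants a and 1/f as given (values assumed from that ellipsoid's standard definition), the remaining quantities follow directly; a short Python sketch:

import math

a = 6378137.0                     # WGS84 semi-major axis [m] (assumed)
inv_f = 298.257223563             # WGS84 inverse (first) flattening (assumed)
f = 1.0 / inv_f                   # first flattening  f = (a - b) / a
b = a * (1.0 - f)                 # semi-minor axis
f_prime = (a - b) / b             # second flattening
n = (a - b) / (a + b)             # third flattening
e = math.sqrt(f * (2.0 - f))      # first eccentricity, e^2 = 2f - f^2
print(b, f_prime, n, e)           # b ≈ 6356752.3 m, e ≈ 0.0818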
See also
Earth flattening
Equatorial bulge
Ovality
Planetary flattening
Sphericity
Roundness (object)
References
Celestial mechanics
Geodesy
Trigonometry
Circles
|
https://en.wikipedia.org/wiki/Zenith%20camera
|
A zenith camera is an astrogeodetic telescope used today primarily for the local surveys of Earth's gravity field. Zenith cameras are designed as transportable field instruments for the direct observation of the plumb line (astronomical latitude and longitude) and vertical deflections.
Instrument
A zenith camera combines an optical lens (about 10–20 cm aperture) with a digital image sensor (CCD) in order to image stars near the zenith. Electronic levels (tilt sensors) serve as a means to point the lens towards zenith.
Zenith cameras are generally mounted on a turnable platform to allow star images to be taken in two camera directions (two-face measurement). Because zenith cameras are usually designed as non-tracking and non-scanning instruments, exposure times are kept short, on the order of a few tenths of a second, yielding rather circular star images. Exposure epochs are mostly recorded by means of the timing capability of GPS receivers (time-tagging).
Data processing
Depending on the CCD sensor - lens combination used, few tens to hundreds of stars are captured with a single digital zenith image. The positions of imaged stars are measured by means of digital image processing algorithms, such as image moment analysis or point spread functions to fit the star images. Star catalogues, such as Tycho-2 or UCAC-3 are used as celestial reference to reduce the star images. The zenith point is interpolated into the field of imaged stars, and corrected for the exposure time and (small) tilt of the telescope axis to yield the direction of the plumb line.
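The image-moment step can be sketched in a few lines of Python; the frame below is a toy array, not real CCD data:

import numpy as np

def centroid(frame):
    # Intensity-weighted centroid (first image moments) of a star image.
    frame = np.asarray(frame, dtype=float)
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

star = np.zeros((9, 9))
star[4, 4] = 1.0
star[4, 5] = 3.0                  # toy "star" spread over two pixels
print(centroid(star))             # (4.75, 4.0): a sub-pixel position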
Accuracy and applications
If the geodetic coordinates of the zenith camera are known or measured, e.g., with a GPS receiver, vertical deflections are obtained as a result of the zenith camera measurement. Zenith cameras deliver astronomical coordinates and vertical deflections accurate to about 0.1 seconds of arc. Zenith cameras with CCD image sensors can efficiently collect vertical deflections at about 10 field stations per night.
|
https://en.wikipedia.org/wiki/Apparent%20place
|
The apparent place of an object is its position in space as seen by an observer. Because of physical and geometrical effects it may differ from the "true" or "geometric" position.
Astronomy
In astronomy, a distinction is made between the mean position, apparent position and topocentric position of an object.
Position of a star
The mean position of a star (relative to the observer's adopted coordinate system) can be calculated from its value at an arbitrary epoch, together with its actual motion over time (known as proper motion). The apparent position is its position as seen by a theoretical observer at the centre of the moving Earth. Several effects cause the apparent position to differ from the mean position:
Annual aberration – a deflection caused by the velocity of the Earth's motion around the Sun, relative to an inertial frame of reference. This is independent of the distance of the star from the Earth.
Annual parallax – the apparent change in position due to the star being viewed from different places as the Earth orbits the Sun in the course of a year. Unlike aberration, this effect depends on the distance of the star, being larger for nearby stars.
Precession – a long-term (ca. 26,000 years) variation in the direction of the Earth's axis of rotation.
Nutation – shorter-term variations in the direction of the Earth's axis of rotation.
The Apparent Places of Fundamental Stars is an astronomical yearbook, which is published one year in advance by the Astronomical Calculation Institute (Heidelberg University) in Heidelberg, Germany. It lists the apparent place of about 1000 fundamental stars for every 10 days and is published as a book and in a more extensive version on the Internet.
Solar System objects
The apparent position of a planet or other object in the Solar System is also affected by light-time correction, which is caused by the finite time it takes light from a moving body to reach the observer. Simply put, the observer sees the object
|
https://en.wikipedia.org/wiki/Pkg-config
|
pkg-config is a computer program that defines and supports a unified interface for querying installed libraries for the purpose of compiling software that depends on them. It allows programmers and installation scripts to work without explicit knowledge of detailed library path information. pkg-config was originally designed for Linux, but it is now also available for BSD, Microsoft Windows, macOS, and Solaris.
It outputs various information about installed libraries. This information may include:
Parameters (flags) for C or C++ compiler
Parameters (flags) for linker
Version of the package in question
The first implementation was written in shell. Later, it was rewritten in C using the GLib library.
Synopsis
When a library is installed (automatically through the use of an RPM, deb, or other binary packaging system or by compiling from the source), a .pc file should be included and placed into a directory with other .pc files (the exact directory is dependent upon the system and outlined in the pkg-config man page). This file has several entries.
These entries typically contain a list of dependent libraries that programs using the package also need to compile. Entries also typically include the location of header files, version information and a description.
Here is an example .pc file for libpng:
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${exec_prefix}/include
Name: libpng
Description: Loads and saves PNG files
Version: 1.2.8
Libs: -L${libdir} -lpng12 -lz
Cflags: -I${includedir}/libpng12
This file declares that libpng's libraries can be found in /usr/local/lib and its headers in /usr/local/include, that the library name is libpng, and that the version is 1.2.8. It also gives the additional linker flags that are needed to compile code that uses this library.
Here is an example of usage of pkg-config while compiling:
$ gcc -o test test.c $(pkg-config --libs --cflags libpng)
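The same query can also be made from a build script; a minimal Python sketch, assuming pkg-config and the named package are installed:

import shlex
import subprocess

def pkgconfig(*packages):
    # Ask pkg-config for the compiler and linker flags of the given packages.
    out = subprocess.run(["pkg-config", "--cflags", "--libs", *packages],
                         check=True, capture_output=True, text=True).stdout
    return shlex.split(out)

print(pkgconfig("libpng"))    # e.g. ['-I/usr/local/include/libpng12', '-L/usr/local/lib', '-lpng12', '-lz']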
pkg-config can be used by bu
|
https://en.wikipedia.org/wiki/Radio%20masts%20and%20towers
|
Radio masts and towers are typically tall structures designed to support antennas for telecommunications and broadcasting, including television. There are two main types: guyed and self-supporting structures. They are among the tallest human-made structures. Masts are often named after the broadcasting organizations that originally built them or currently use them.
In the case of a mast radiator or radiating tower, the whole mast or tower is itself the transmitting antenna.
Terminology
The terms "mast" and "tower" are often used interchangeably. However, in structural engineering terms, a tower is a self-supporting or cantilevered structure, while a mast is held up by stays or guys. Broadcast engineers in the UK use the same terminology. A mast is a ground-based or rooftop structure that supports antennas at a height where they can satisfactorily send or receive radio waves. Typical masts are of steel lattice or tubular steel construction. Masts themselves play no part in the transmission of mobile telecommunications.
Masts (to use the civil engineering terminology) tend to be cheaper to build but require an extended area surrounding them to accommodate the guy wires. Towers are more commonly used in cities where land is in short supply.
(NB: the terminology used in the United States is the opposite of that used in Europe: in the US, structures called masts are typically relatively small, un-guyed structures, while larger structures, guyed or un-guyed, are referred to as towers. The US Federal Communications Commission, for example, uses "tower" to describe such structures as radio and television transmission towers, whether guyed or unguyed.)
There are a few borderline designs that are partly free-standing and partly guyed, called additionally guyed towers. For example:
The Gerbrandy tower consists of a self-supporting tower with a guyed mast on top.
The few remaining Blaw-Knox towers do the opposite: they have a guyed lower section surmounted by a freestanding par
|
https://en.wikipedia.org/wiki/Mast%20radiator
|
A mast radiator (or radiating tower) is a radio mast or tower in which the metal structure itself is energized and functions as an antenna. This design, first used widely in the 1930s, is commonly used for transmitting antennas operating at low frequencies, in the LF and MF bands, in particular those used for AM radio broadcasting stations. The conductive steel mast is electrically connected to the transmitter. Its base is usually mounted on a nonconductive support to insulate it from the ground. A mast radiator is a form of monopole antenna.
Structural design
Most mast radiators are built as guyed masts. Steel lattice masts of triangular cross-section are the most common type. Square lattice masts and tubular masts are also sometimes used. To ensure that the tower is a continuous conductor, the tower's structural sections are electrically bonded at the joints, either by short copper jumpers soldered to each side or by "fusion" (arc) welds across the mating flanges.
Base-fed masts, the most common type, must be insulated from the ground. At its base, the mast is usually mounted on a thick ceramic insulator, which has the compressive strength to support the tower's weight and the dielectric strength to withstand the high voltage applied by the transmitter. The RF power to drive the antenna is supplied by an impedance matching network, usually housed in an antenna tuning hut near the base of the mast, and the cable supplying the current is simply bolted or brazed to the tower. The actual transmitter is usually located in a separate building, which supplies RF power to the tuning hut via a transmission line.
To keep it upright the mast has tensioned guy wires attached, usually in sets of 3 at 120° angles, which are anchored to the ground usually with concrete anchors. Multiple sets of guys (from 2 to 5) at different levels are used to make the tower rigid against buckling. The guy lines have strain insulators inserted, usually at the top near the attachment point
|
https://en.wikipedia.org/wiki/DOS%20Plus
|
DOS Plus (erroneously also known as DOS+) was the first operating system developed by Digital Research's OEM Support Group in Newbury, Berkshire, UK, first released in 1985. DOS Plus 1.0 was based on CP/M-86 Plus combined with the PCMODE emulator from Concurrent PC DOS 4.11. While CP/M-86 Plus and Concurrent DOS 4.1 still had been developed in the United States, Concurrent PC DOS 4.11 was an internationalized and bug-fixed version brought forward by Digital Research UK. Later DOS Plus 2.x issues were based on Concurrent PC DOS 5.0 instead. In the broader picture, DOS Plus can be seen as an intermediate step between Concurrent CP/M-86 and DR DOS.
DOS Plus is able to run programs written for either CP/M-86 or MS-DOS 2.11, and can read and write the floppy formats used by both of these systems. Up to four CP/M-86 programs can be multitasked, but only one DOS program can be run at a time.
User interface
DOS Plus attempts to present the same command-line interface as MS-DOS. Like MS-DOS, it has a command-line interpreter called COMMAND.COM (alternative name DOSPLUS.COM). There is an AUTOEXEC.BAT file, but no CONFIG.SYS (except for FIDDLOAD, an extension to load some field-installable device drivers (FIDD) in some versions of DOS Plus 2.1). The major difference the user will notice is that the bottom line of the screen contains status information similar to:
DDT86 ALARM UK8 PRN=LPT1 Num 10:17:30
The left-hand side of the status bar shows running processes. The leftmost one will be visible on the screen; the others (if any) are running in the background. The right-hand side shows the keyboard layout in use (UK8 in the above example), the printer port assignment, the keyboard Caps Lock and Num Lock status, and the current time. If a DOS program is running, the status line is not shown. DOS programs cannot be run in the background.
The keyboard layout in use can be changed by pressing , and one of the function keys –.
Commands
DOS Plus con
|
https://en.wikipedia.org/wiki/DECUS
|
The Digital Equipment Computer Users' Society (DECUS) was an independent computer user group related to Digital Equipment Corporation (DEC). The Connect User Group Community, formed from the consolidation in May 2008 of DECUS, Encompass, HP-Interex, and ITUG, is Hewlett-Packard's largest user community, representing more than 50,000 participants.
History
DECUS was the Digital Equipment Computer Users' Society, a users' group for Digital Equipment Corporation (DEC) computers. Members included companies and organizations who purchased DEC equipment; many members were application programmers who wrote code for DEC machines or system programmers who managed DEC systems. DECUS was founded in March 1961 by Edward Fredkin.
DECUS was legally a part of Digital Equipment Corporation and subsidized by the company; however, it was run by unpaid volunteers. Digital staff members were not eligible to join DECUS, yet were allowed and encouraged to participate in DECUS activities. Digital, in turn, relied on DECUS as an important channel of communication with its customers.
DECUS Software Library
DECUS had a software library which accepted orders from anyone, distributing programs submitted to it by people willing to share. It was organized by processor and operating system, using information submitted by program submitters, who signed releases allowing this and asserting their right to do so. The DECUS library published catalogs of these offerings yearly, though because it had the catalog mastered by an outside firm, it did not have easy ways to retrieve the content of early catalogs (prior to circa 1980) in machine readable format. Later material was maintained in house and was more easily edited. The charges for copying were somewhat high, reflecting the fact that the copies were made by hand on DECUS equipment.
Activities
There were two DECUS US symposia per year, at which members and DEC employees gave presentations, and could visit an exhibit hall containing many new comp
|
https://en.wikipedia.org/wiki/Nuclear%20cross%20section
|
The nuclear cross section of a nucleus is used to describe the probability that a nuclear reaction will occur. The concept of a nuclear cross section can be quantified physically in terms of "characteristic area" where a larger area means a larger probability of interaction. The standard unit for measuring a nuclear cross section (denoted as σ) is the barn, which is equal to 10⁻²⁸ m², or 10⁻²⁴ cm². Cross sections can be measured for all possible interaction processes together, in which case they are called total cross sections, or for specific processes, distinguishing elastic scattering and inelastic scattering; of the latter, amongst neutron cross sections the absorption cross sections are of particular interest.
In nuclear physics it is conventional to consider the impinging particles as point particles having negligible diameter. Cross sections can be computed for any nuclear process, such as capture, scattering, production of neutrons, or nuclear fusion. In many cases, the number of particles emitted or scattered in nuclear processes is not measured directly; one merely measures the attenuation produced in a parallel beam of incident particles by the interposition of a known thickness of a particular material. The cross section obtained in this way is called the total cross section and is usually denoted by a σ or σT.
Typical nuclear radii are of the order 10⁻¹⁴ m. Assuming spherical shape, we therefore expect the cross sections for nuclear reactions to be of the order of πr², or about 10⁻²⁸ m² (i.e., 1 barn). Observed cross sections vary enormously: for example, slow neutrons absorbed by the (n, γ) reaction show a cross section much higher than 1,000 barns in some cases (boron-10, cadmium-113, and xenon-135), while the cross sections for transmutations by gamma-ray absorption are in the region of 0.001 barn.
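The geometric estimate above can be written out explicitly; the Python sketch below assumes the empirical radius formula R ≈ r₀A^(1/3) with r₀ = 1.2 fm:

import math

R0 = 1.2e-15                               # nuclear radius constant r0 [m] (assumed)

def geometric_cross_section_barn(mass_number):
    # sigma ≈ pi * R^2 with R = r0 * A^(1/3), converted to barns (1 barn = 1e-28 m^2)
    radius = R0 * mass_number ** (1.0 / 3.0)
    return math.pi * radius ** 2 / 1e-28

print(geometric_cross_section_barn(1))     # ~0.05 barn for a single nucleon
print(geometric_cross_section_barn(238))   # ~1.7 barn for uranium-238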
Microscopic and macroscopic cross section
Nuclear cross sections are used in determining the nuclear reaction rate, and are governed by the reaction rate equation for a particular se
|
https://en.wikipedia.org/wiki/QUICC
|
The QUICC (Quad Integrated Communications Controller) was a Motorola 68k-based microcontroller made by Freescale Semiconductor, targeted at the telecommunications market. It lends its name to a family of successor chips called PowerQUICC.
History
The original QUICC was the Motorola 68360 (MC68360), based on the MC68302. It was followed by the PowerPC-based PowerQUICC I, PowerQUICC II, PowerQUICC II+ and PowerQUICC III.
Applications
QUICC chips form the core of many Motorola Cellular Base stations.
Many PowerQUICC II+ designs now have SATA controllers for SAN based applications.
PowerQUICC CPUs/boards come with a Linux environment. Freescale also offers MQX (an RTOS) for PPC.
References
External links
MC68360 QUICC datasheet
68k microprocessors
Microcontrollers
|
https://en.wikipedia.org/wiki/Segre%20classification
|
The Segre classification is an algebraic classification of rank two symmetric tensors. The resulting types are then known as Segre types. It is most commonly applied to the energy–momentum tensor (or the Ricci tensor) and primarily finds application in the classification of exact solutions in general relativity.
See also
Corrado Segre
Jordan normal form
Petrov classification
References
See section 5.1 for the Segre classification.
Linear algebra
Tensors
Tensors in general relativity
|
https://en.wikipedia.org/wiki/Cataclysmic%20pole%20shift%20hypothesis
|
The cataclysmic pole shift hypothesis is a pseudo-scientific claim that there have been recent, geologically rapid shifts in the axis of rotation of Earth, causing calamities such as floods and tectonic events or relatively rapid climate changes.
There is evidence of precession and changes in axial tilt, but this change is on much longer time-scales and does not involve relative motion of the spin axis with respect to the planet. However, in what is known as true polar wander, the Earth rotates with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no rapid shifts in Earth's geographic axial pole were found during this period. A characteristic rate of true polar wander is 1° or less per million years. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically rapid phases of true polar wander may have occurred. In each of these, the magnetic poles of Earth shifted by approximately 55° due to a large shift in the crust.
Definition and clarification
The geographic poles are defined by the points on the surface of Earth that are intersected by the axis of rotation. The pole shift hypothesis describes a change in location of these poles with respect to the underlying surface – a phenomenon distinct from the changes in axial orientation with respect to the plane of the ecliptic that are caused by precession and nutation – and is an amplified event of true polar wander. Geologically, it would be a shift of the surface separate from a shift of the planet as a whole, enabled by Earth's molten core.
Pole shift hypotheses are not connected with plate tectonics, the well-accepted geological theory that Earth's surface consists of solid plates which shift over a viscous, or semifluid asthenosphere; nor with continental drift, the corollary to plate tectonics which maintains that locations of the continents have moved slowly over the surface of Earth, resulting
|
https://en.wikipedia.org/wiki/Immunophenotyping
|
Immunophenotyping is a technique used to study the proteins expressed by cells. This technique is commonly used in basic science research and for laboratory diagnostic purposes. It can be performed on tissue sections (fresh or fixed tissue), cell suspensions, etc. An example is the detection of tumor markers, such as in the diagnosis of leukemia. It involves the labelling of white blood cells with antibodies directed against surface proteins on their membrane. By choosing appropriate antibodies, the differentiation of leukemic cells can be accurately determined. The labelled cells are processed in a flow cytometer, a laser-based instrument capable of analyzing thousands of cells per second. The whole procedure can be performed on cells from the blood, bone marrow or spinal fluid in a matter of a few hours.
Immunophenotyping is a very common flow cytometry test in which fluorophore-conjugated antibodies are used as probes for staining target cells with high avidity and affinity. This technique allows rapid and easy phenotyping of each cell in a heterogeneous sample according to the presence or absence of a protein combination.
References
External links
British Society for Haematology guidelines accessed July 31, 2006
Flow cytometry
Hematology
|
https://en.wikipedia.org/wiki/Video%20BIOS
|
Video BIOS is the BIOS of a graphics card in a (usually IBM PC-derived) computer. It initializes the graphics card at the computer's boot time. It also implements INT 10h interrupt and VESA BIOS Extensions (VBE) for basic text and videomode output before a specific video driver is loaded. In UEFI 2.x systems, the INT 10h and the VBE are replaced by the UEFI GOP.
Much the way the system BIOS provides a set of functions that are used by software programs to access the system hardware, the video BIOS provides a set of video-related functions that are used by programs to access the video hardware as well as storing vendor-specific settings such as card name, clock frequencies, VRAM types & voltages. The video BIOS interfaces software to the video chipset in the same way that the system BIOS does for the system chipset.
The ROM also contained a basic font set to upload to the video adapter's font RAM, for use when the video card did not itself contain a font ROM with this font set.
Unlike some other hardware components, the video card usually needs to be active very early during the boot process so that the user can see what is going on. This requires the card to be activated before any operating system begins loading; thus it needs to be activated by the BIOS, the only software that is present at this early stage. The system BIOS loads the video BIOS from the card's ROM into system RAM and transfers control to it early in the boot sequence.
Early PCs contained functions for driving MDA and CGA cards in the system BIOS, and those cards did not have any Video BIOS built in. When the EGA card was first sold in 1984, the Video BIOS was introduced to make these cards compatible with existing PCs whose BIOS did not know how to drive an EGA card. Ever since, EGA/VGA and all enhanced VGA compatible cards have included a Video BIOS.
When the computer is started, some graphics cards (usually certain Nvidia cards) display their vendor, model, Video BIOS version and amount of video memory
|
https://en.wikipedia.org/wiki/Chaff%20algorithm
|
Chaff is an algorithm for solving instances of the Boolean satisfiability problem (SAT). It was designed by researchers at Princeton University. The algorithm is an instance of the DPLL algorithm with a number of enhancements for efficient implementation.
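For illustration, a bare-bones DPLL procedure is sketched below in Python; it omits the enhancements that make Chaff fast (the two-watched-literal scheme, the VSIDS decision heuristic and conflict-driven clause learning) and is not Chaff itself:

def dpll(clauses, assignment=None):
    # clauses: list of clauses, each a list of non-zero integers (DIMACS-style literals)
    assignment = dict(assignment or {})

    def simplify(cls, lit):
        # Make `lit` true: drop satisfied clauses, remove falsified literals.
        out = []
        for c in cls:
            if lit in c:
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                # empty clause: conflict
            out.append(reduced)
        return out

    # Unit propagation
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if len(c) == 1:
                lit = c[0]
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
                if clauses is None:
                    return None
                changed = True
                break

    if not clauses:
        return assignment                  # every clause satisfied
    lit = clauses[0][0]                    # naive branching; Chaff would use VSIDS here
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, {**assignment, abs(choice): choice > 0})
            if result is not None:
                return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1: True, 3: True, 2: False}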
Implementations
Some available implementations of the algorithm in software are mChaff and zChaff, the latter one being the most widely known and used. zChaff was originally written by Dr. Lintao Zhang, at Microsoft Research, hence the “z”. It is now maintained by researchers at Princeton University and available for download as both source code and binaries on Linux. zChaff is free for non-commercial use.
References
M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, S. Malik. Chaff: Engineering an Efficient SAT Solver, 39th Design Automation Conference (DAC 2001), Las Vegas, ACM 2001.
External links
Web page about zChaff
SAT solvers
Boolean algebra
Automated theorem proving
Constraint programming
|
https://en.wikipedia.org/wiki/Sulcus%20%28morphology%29
|
In biological morphology and anatomy, a sulcus (pl.: sulci) is a furrow or fissure (Latin fissura, pl.: fissurae). It may be a groove, natural division, deep furrow, elongated cleft, or tear in the surface of a limb or an organ, most notably on the surface of the brain, but also in the lungs, certain muscles (including the heart), as well as in bones, and elsewhere. Many sulci are the product of a surface fold or junction, such as in the gums, where they fold around the neck of the tooth.
In invertebrate zoology, a sulcus is a fold, groove, or boundary, especially at the edges of sclerites or between segments.
A pollen grain that is grooved by a sulcus is termed sulcate.
Examples in anatomy
Liver
Ligamentum teres hepatis fissure
Ligamentum venosum fissure
Portal fissure, found in the under-surface of the liver
Transverse fissure of liver, found in the lower surface of the liver
Umbilical fissure, found in front of the liver
Lung
Azygos fissure, of right lung
Horizontal fissure of right lung
Oblique fissure, of the right and left lungs
Skull
Auricular fissure, found in the temporal bone
Petrotympanic fissure
Pterygomaxillary fissure
Sphenoidal fissure, separates the wings and the body of the sphenoid bone
Superior orbital fissure
Other types
anal fissure, a break or tear in the skin of the anal canal
anterior interventricular sulcus
calcaneal sulcus
coronal sulcus
femoral sulcus or intercondylar fossa of femur
fissure (dentistry), a break in the tooth enamel
fissure of the nipple, a condition that results from running, breastfeeding and other friction-causing exposures
fissured tongue, a condition characterized by deep grooves (fissures) in the tongue
gingival sulcus
gluteal sulcus
Henle's fissure, a fissure in the connective tissue between the muscle fibers of the heart
interlabial sulci
intermammary sulcus
intertubercular sulcus, the groove between the lesser and greater tubercules of the humerus (bone of the upper arm)
lacrimal sulcus (sulcus l
|
https://en.wikipedia.org/wiki/Holographic%20Versatile%20Card
|
The Holographic Versatile Card (HVC) was a proposed data storage format by Optware; the projected date for a Japanese launch had been the first half of 2007, pending finalization of the specification, but as of March 2022 nothing had surfaced. One of its main advantages compared with discs was supposed to be the absence of moving parts during playback. Optware claimed it would hold 30 GB of data, have a write speed three times faster than Blu-ray, and be approximately the size of a credit card. The company claimed that at release the media would cost about ¥100 (roughly $1.20) each, that reader devices would initially cost about ¥200,000 (roughly $2,400), and that reader/writer devices would cost about ¥1,000,000 (roughly $12,000, at the April 2011 exchange rate).
See also
DVD
HD DVD
Holographic memory
Holographic Versatile Disc
Vaporware
References
External links
Optware Creator of HVC format.
Engadget News report on the Holographic Versatile Card
Über Gizmo News report on the Holographic Versatile Card
Image of HVC
Holographic data storage
Vaporware
|
https://en.wikipedia.org/wiki/Coincidence%20point
|
In mathematics, a coincidence point (or simply coincidence) of two functions is a point in their common domain having the same image.
Formally, given two functions f, g : X → Y, we say that a point x in X is a coincidence point of f and g if f(x) = g(x).
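For a concrete example of the definition, one may take X = Y = \mathbb{R} with

f(x) = x^2, \qquad g(x) = 2x, \qquad f(x) = g(x) \iff x^2 - 2x = x(x - 2) = 0.

The coincidence points of f and g are therefore 0 and 2; for comparison, the fixed points of f alone (points with f(x) = x) are 0 and 1.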
Coincidence theory (the study of coincidence points) is, in most settings, a generalization of fixed point theory, the study of points x with f(x) = x. Fixed point theory is the special case obtained from the above by letting X = Y and taking g to be the identity function.
Just as fixed point theory has its fixed-point theorems, there are theorems that guarantee the existence of coincidence points for pairs of functions. Notable among them, in the setting of manifolds, is the Lefschetz coincidence theorem, which is typically known only in its special case formulation for fixed points.
Coincidence points, like fixed points, are today studied using many tools from mathematical analysis and topology. An equaliser is a generalization of the coincidence set.
References
Mathematical analysis
Topology
Fixed points (mathematics)
|
https://en.wikipedia.org/wiki/El-Fish
|
El-Fish is a fish and fish-tank simulator and software toy developed by Russian game developer AnimaTek, with Maxis providing development advice. The game was published by Mindscape (v1.1) and later by Maxis (v1.1 + v1.2) in 1993 on 5 diskettes.
Each fish in El-Fish has a unique "Roe", analogous to a genome. This allows the user to catch fish and use selective breeding and mutation to create fish to their own tastes for placement in virtual aquariums. Around 800 possible genetic attributes (fin shape, body color, size, etc.) are available, which can be combined into a virtually unlimited number of unique fish. Once fish are created, El-Fish algorithmically generates up to 256 animation frames so that the fish appear to swim smoothly around the tank.
The tank simulator is highly customizable for a game of its era. The player can select from a large number of backdrops and tank ornaments for the fish to swim between, and can also import their own images to use as tank ornaments. El-Fish includes a fractal-based plant generator for creating unique aquarium plants. Several "moving objects" that the fish react to can be added to the tanks, such as a cat paw, a fibcrab, and a small plastic scuba diver. The user can also procedurally generate background music for each tank using an in-game composer or choose a separate MIDI music file to be played for each virtual aquarium, and can feed the fish.
The tank simulator can run as a memory-resident program in MS-DOS, allowing it to serve as a screensaver. Multiple tanks can be displayed via El-Fish's slideshow feature.
Development
El-Fish was created by Vladimir Pokhilko, Ph.D. and Alexey Pajitnov (the creator of Tetris), who had backgrounds in mathematics, computer science, and psychology. They were attempting to create software for INTEC (a company that they started) that would be made for "people's souls". They developed this idea, calling it "Human Software", with three rules:
The software needs
|
https://en.wikipedia.org/wiki/RAD750
|
The RAD750 is a radiation-hardened single-board computer manufactured by BAE Systems Electronics, Intelligence & Support. The successor of the RAD6000, the RAD750 is for use in high-radiation environments experienced on board satellites and spacecraft. The RAD750 was released in 2001, with the first units launched into space in 2005.
Technology
The CPU has 10.4 million transistors, an order of magnitude more than the RAD6000 (which had 1.1 million). It is manufactured using either 250 nm or 150 nm photolithography and has a die area of 130 mm². It has a core clock of 110 to 200 MHz and can process at 266 MIPS or more. The CPU can include an extended L2 cache to improve performance.
The CPU can withstand an absorbed radiation dose of 2,000 to 10,000 grays (200,000 to 1,000,000 rads), temperatures between −55 °C and 125 °C, and requires 5 watts of power. The standard RAD750 single-board system (CPU and motherboard) can withstand 1,000 grays (100,000 rads), temperatures between −55 °C and 70 °C, and requires 10 watts of power.
The RAD750 system is comparable in price to the RAD6000, which as of 2002 was listed at US$200,000. Customer program requirements and quantities, however, greatly affect the final unit costs.
The RAD750 is based on the PowerPC 750. Its packaging and logic functions are completely compatible with the PowerPC 7xx family.
The term RAD750 is a registered trademark of BAE Systems Information and Electronic Systems Integration Inc.
Deployment
In 2010, it was reported that there were over 150 RAD750s used in a variety of spacecraft. Notable examples, in order of launch date, include:
Deep Impact comet-chasing spacecraft, launched in January 2005, the first to use the RAD750 computer.
XSS 11, small experimental satellite, launched 11 April 2005.
Mars Reconnaissance Orbiter, launched 12 August 2005.
SECCHI (Sun Earth Connection Coronal and Heliospheric Investigation) instrument package on each of the STEREO spacecraft, launched
|
https://en.wikipedia.org/wiki/389%20Directory%20Server
|
The 389 Directory Server (previously Fedora Directory Server) is a Lightweight Directory Access Protocol (LDAP) server developed by Red Hat as part of the community-supported Fedora Project. The name "389" derives from the port number used by LDAP.
389 Directory Server supports many operating systems, including Fedora Linux, Red Hat Enterprise Linux, Debian, Solaris, and HP-UX 11i. In late 2016 the project merged experimental FreeBSD support.
However, as of 2017, the 389 Directory Server team planned to remove HP-UX and Solaris support in the upcoming 1.4.x series.
The 389 source code is generally available under the GNU General Public License version 3; some components have an exception for plugin code, while others are licensed under the LGPLv2 or the Apache License. Red Hat also markets a commercial version of the project as Red Hat Directory Server, as part of support contracts for RHEL.
History
389 Directory Server is derived from the original University of Michigan Slapd project. In 1996, the project's developers were hired by Netscape Communications Corporation, and the project became known as the Netscape Directory Server (NDS). After acquiring Netscape, AOL sold ownership of the NDS intellectual property to Sun Microsystems, but retained rights akin to ownership. Sun sold and developed the Netscape Directory Server under the name JES/SunOne Directory Server, now Oracle Directory Server since the takeover of Sun by Oracle. AOL/Netscape's rights were acquired by Red Hat, and on June 1, 2005, much of the source code was released as free software under the terms of the GNU General Public License (GPL).
As of 389 Directory Server version 1.0 (December 1, 2005), Red Hat released as free software all of the remaining source code for all components included in the release package (admin server, console, etc.) and continues to maintain them under their respective licenses.
In May 2009, the Fedora Directory Server project changed its name to 389 to give the project a distributio
|
https://en.wikipedia.org/wiki/Emote
|
An emote is an entry in a text-based chat client that indicates an action taking place. Unlike emoticons, they are not text art, and instead describe the action using words or images (similar to emoji).
Emotes were created by Shigetaka Kurita in Japan, whose original idea was to create a way of communicating using pictures. Kurita drew pictures of himself for inspiration for the emotes he created.
Nowadays emotes are a language of their own, having progressed far beyond their original 1999 counterparts.
Overview
In most IRC chat clients, entering the command "/me" will print the user's name followed by the rest of the text. For example, if a user named Joe typed "/me jumps with joy", the client will print this as "Joe jumps with joy" in the chat window.
<Joe> Allow me to demonstrate...
* Joe jumps with joy again.
In chat media which do not support the "/me" command, it is conventional to read text surrounded by asterisks as if it were emoted. For example, reading "Joe: *jumps with joy*" in a chat log would suggest that the user had intended the words to be performed rather than spoken.
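A minimal sketch in Python of how a client might render these two conventions (an illustration only, not the behavior of any specific chat client):

# Illustrative rendering of '/me' actions and the asterisk convention.
def render(nick, text):
    if text.startswith("/me "):
        # IRC-style action: print the nick followed by the action text.
        return "* {} {}".format(nick, text[len("/me "):])
    if len(text) > 2 and text.startswith("*") and text.endswith("*"):
        # Asterisk convention: treat *...* as an emoted action.
        return "* {} {}".format(nick, text[1:-1])
    return "<{}> {}".format(nick, text)      # ordinary chat message

print(render("Joe", "Allow me to demonstrate..."))  # <Joe> Allow me to demonstrate...
print(render("Joe", "/me jumps with joy again."))   # * Joe jumps with joy again.
print(render("Joe", "*jumps with joy*"))            # * Joe jumps with joy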
In MMORPGs with visible avatars, such as EverQuest, Asheron's Call, Second Life and World of Warcraft, certain commands entered through the chat interface will print a predefined /me emote to the chat window and cause the character to animate, and in some cases produce sound effects. For example, entering "/confused" into World of Warcraft's chat interface will play an animation on the user's avatar and print "You are hopelessly confused." in the chat window.
Emotes are used primarily online in video games and, more recently, on smartphones. An example of image-based emotes being used frequently is the chat feature on the streaming service Twitch. Twitch also allows users to upload animated emotes encoded in the GIF format.
See also
Emoji
References
Sources
"History of emotes and why we use them": Reader's Digest
"History of emo
|