source | text
---|---|
https://en.wikipedia.org/wiki/P%E2%80%93n%20junction%20isolation
|
p–n junction isolation is a method used to electrically isolate electronic components, such as transistors, on an integrated circuit (IC) by surrounding the components with reverse biased p–n junctions.
Introduction
By surrounding a transistor, resistor, capacitor or other component on an IC with semiconductor material which is doped using an opposite species of the substrate dopant, and connecting this surrounding material to a voltage which reverse-biases the p–n junction that forms, it is possible to create a region which forms an electrically isolated "well" around the component.
Operation
Assume that the semiconductor wafer is p-type material. Also assume a ring of n-type material is placed around a transistor, and placed beneath the transistor. If the p-type material within the n-type ring is now connected to the negative terminal of the power supply and the n-type ring is connected to the positive terminal, the 'holes' in the p-type region are pulled away from the p–n junction, causing the width of the nonconducting depletion region to increase. Similarly, because the n-type region is connected to the positive terminal, the electrons will also be pulled away from the junction.
This effectively increases the potential barrier and greatly increases the electrical resistance against the flow of charge carriers. For this reason there will be no (or minimal) electric current across the junction.
At the middle of the junction of the p–n material, a depletion region is created to stand-off the reverse voltage. The width of the depletion region grows larger with higher voltage. The electric field grows as the reverse voltage increases. When the electric field increases beyond a critical level, the junction breaks down and current begins to flow by avalanche breakdown. Therefore, care must be taken that circuit voltages do not exceed the breakdown voltage or electrical isolation ceases.
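The growth of the depletion region with reverse bias can be estimated with the textbook abrupt-junction formula. The short sketch below is only illustrative: the silicon constants are standard values, but the doping levels are assumptions, not figures from this article.

```python
import numpy as np

# Rough sketch of how the depletion width of an abrupt Si p-n junction
# grows with reverse bias. The doping levels below are illustrative
# assumptions, not values from the article.
q      = 1.602e-19          # elementary charge, C
eps_si = 11.7 * 8.854e-12   # permittivity of silicon, F/m
n_i    = 1.0e16             # intrinsic carrier density of Si at 300 K, m^-3
kT_q   = 0.0259             # thermal voltage at 300 K, V

N_A = 1e22                  # acceptor doping of the p-type substrate, m^-3
N_D = 1e23                  # donor doping of the isolation ring,      m^-3

V_bi = kT_q * np.log(N_A * N_D / n_i**2)          # built-in potential

for V_R in (0.0, 5.0, 20.0):                      # reverse-bias voltages, V
    W = np.sqrt(2 * eps_si * (V_bi + V_R) / q * (1/N_A + 1/N_D))
    print(f"V_R = {V_R:5.1f} V  ->  depletion width = {W*1e6:.2f} um")
```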
History
In an article entitled "Microelectronics", published in Scientifi
|
https://en.wikipedia.org/wiki/Fractal%20expressionism
|
Fractal expressionism is used to distinguish fractal art generated directly by artists from fractal art generated using mathematics and/or computers. Fractals are patterns that repeat at increasingly fine scales and are prevalent in natural scenery (examples include clouds, rivers, and mountains). Fractal expressionism implies a direct expression of nature's patterns in an art work.
Jackson Pollock's poured paintings
The initial studies of fractal expressionism focused on the poured paintings by Jackson Pollock (1912-1956), whose work has traditionally been associated with the abstract expressionist movement. Pollock's patterns had previously been referred to as “natural” and “organic”, inviting speculation by John Briggs in 1992 that Pollock's work featured fractals. In 1997, Taylor built a pendulum device called the Pollockizer which painted fractal patterns bearing a similarity to Pollock's work. Computer analysis of Pollock's work published by Taylor et al. in a 1999 Nature article found that Pollock's painted patterns have characteristics that match those displayed by nature's fractals. This analysis supported clues that Pollock's patterns are fractal and reflect "the fingerprint of nature".
Taylor noted several similarities between Pollock's painting style and the processes used by nature to construct its landscapes. For instance, he cites Pollock's propensity to revisit paintings that he had not adjusted in several weeks as being comparable to cyclic processes in nature, such as the seasons or the tides. Furthermore, Taylor observed several visual similarities between the patterns produced by nature and those produced by Pollock as he painted. He points out that Pollock abandoned the use of a traditional frame for his paintings, preferring instead to roll out his canvas on the floor; this, Taylor asserts, is more compatible with how nature works than traditional painting techniques because the patterns in nature's scenery are not artificially bounded.
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20artists
|
[Image: Broken lances lying along perspective lines in Paolo Uccello's The Battle of San Romano, 1438]
This is a list of artists who actively explored mathematics in their artworks. Art forms practised by these artists include painting, sculpture, architecture, textiles and origami.
Some artists such as Piero della Francesca and Luca Pacioli went so far as to write books on mathematics in art. Della Francesca wrote books on solid geometry and the emerging field of perspective, including De Prospectiva Pingendi (On Perspective for Painting), Trattato d’Abaco (Abacus Treatise), and De corporibus regularibus (Regular Solids) (Piero della Francesca, Trattato d'Abaco, ed. G. Arrighi, Pisa, 1970), while Pacioli wrote De divina proportione (On Divine Proportion), with illustrations by Leonardo da Vinci, at the end of the fifteenth century.
Merely making accepted use of some aspect of mathematics such as perspective does not qualify an artist for admission to this list.
The term "fine art" is used conventionally to cover the output of artists who produce a combination of paintings, drawings and sculptures.
List
|
https://en.wikipedia.org/wiki/Time%20reversal%20signal%20processing
|
Time reversal signal processing is a signal processing technique that has three main uses: creating an optimal carrier signal for communication, reconstructing a source event, and focusing high-energy waves to a point in space. A Time Reversal Mirror (TRM) is a device that can focus waves using the time reversal method. TRMs are also known as time reversal mirror arrays since they are usually arrays of transducers. TRMs are well known and have been used for decades in the optical domain. They are also used in the ultrasonic domain.
Overview
If the source is passive, i.e. some type of isolated reflector, an iterative technique can be used to focus energy on it. The TRM transmits a plane wave which travels toward the target and is reflected off it. The reflected wave returns to the TRM, where it looks as if the target has emitted a (weak) signal. The TRM reverses and retransmits the signal as usual, and a more focused wave travels toward the target. As the process is repeated, the waves become more and more focused on the target.
Yet another variation is to use a single transducer and an ergodic cavity. Intuitively, an ergodic cavity is one that will allow a wave originating at any point to reach any other point. An example of an ergodic cavity is an irregularly shaped swimming pool: if someone dives in, eventually the entire surface will be rippling with no clear pattern. If the propagation medium is lossless and the boundaries are perfect reflectors, a wave starting at any point will reach all other points an infinite number of times. This property can be exploited by using a single transducer and recording for a long time to get as many reflections as possible.
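The focusing idea can be seen in a one-dimensional numerical sketch: a pulse that has been smeared out by a multipath channel is recorded, time-reversed, and sent back through the same channel, and the result peaks sharply at the original source. The channel impulse response below is a made-up assumption used only for illustration.

```python
import numpy as np

# A made-up multipath channel impulse response: a handful of delayed echoes.
h = np.zeros(200)
h[[5, 40, 90, 150]] = [1.0, 0.6, 0.4, 0.3]

# Source emits a short pulse; the TRM records the smeared-out arrival.
pulse = np.array([1.0])
received = np.convolve(h, pulse)

# Time-reverse the recording and send it back through the same channel.
retransmitted = received[::-1]
refocused = np.convolve(h, retransmitted)

# The refocused field is the autocorrelation of h: a sharp peak at the
# focus, much larger than any sidelobe.
print("peak / largest sidelobe =",
      refocused.max() / np.sort(refocused)[-2])
```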
Theory
The time reversal technique is based upon a feature of the wave equation known as reciprocity: given a solution to the wave equation, the time reversal (using a negative time) of that solution is also a solution. This occurs because the standard wave equation only contains even order
|
https://en.wikipedia.org/wiki/Lego%20Mindstorms
|
Lego Mindstorms (sometimes stylized as LEGO MINDSTORMS) is a discontinued hardware and software structure which develops programmable robots based on Lego bricks. Each version included a programmable microcontroller (or intelligent brick), a set of modular sensors and motors, and parts from the Lego Technic line to create mechanical systems. The system is controlled by the intelligent brick, which acts as the brain of the mechanical system.
While originally conceptualized and launched as a tool to support educational constructivism, Mindstorms has become the first home robotics kit available to a wide audience. It has developed a community of adult hobbyists and hackers as well as students and general Lego enthusiasts following the product's launch in 1998. In October 2022, The Lego Group announced that the Lego Mindstorms brand would be discontinued by the end of the year.
Pre-Mindstorms
Background
In 1985, Seymour Papert, Mitchel Resnick and Stephen Ocko created a company called Microworlds with the intent of developing a construction kit that could be animated by computers for educational purposes. Papert had previously created the Logo programming language as a tool to "support the development of new ways of thinking and learning", and employed "Turtle" robots to physically act out the programs in the real world. As the types of programs created were limited by the shape of the Turtle, the idea came up to make a construction kit that could use Logo commands to animate a creation of the learner's own design. Similar to the "floor turtle" robots used to demonstrate Logo commands in the real world, a construction system that ran Logo commands would also demonstrate them in the real world, but allowing the child to construct their own creations benefitted the learning experience by putting them in control. In considering which construction system to partner with, they wanted a "low floor high ceiling" approach, something that was easy to pick up but very powerfu
|
https://en.wikipedia.org/wiki/Stream%20processing
|
In computer science, stream processing (also known as event stream processing, data stream processing, or distributed stream processing) is a programming paradigm which views streams, or sequences of events in time, as the central input and output objects of computation. Stream processing encompasses dataflow programming, reactive programming, and distributed data processing. Stream processing systems aim to expose parallel processing for data streams and rely on streaming algorithms for efficient implementation. The software stack for these systems includes components such as programming models and query languages, for expressing computation; stream management systems, for distribution and scheduling; and hardware components for acceleration including floating-point units, graphics processing units, and field-programmable gate arrays.
The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (kernel functions) is applied to each element in the stream. Kernel functions are usually pipelined, and optimal local on-chip memory reuse is attempted in order to minimize the loss in bandwidth associated with external memory interaction. Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to initiate a direct memory access (DMA) when dependencies become known. The elimination of manual DMA management reduces software complexity, and the associated elimination of hardware-cached I/O reduces the silicon area that must be devoted to servicing specialized computational units such as arithmetic logic units.
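A minimal sketch of uniform streaming, assuming a toy gain-and-offset kernel: the same kernel function is applied to every element of a stream, expressed here with a Python generator so elements are processed as they arrive rather than being buffered.

```python
from typing import Callable, Iterable, Iterator

def stream_map(kernel: Callable[[float], float],
               stream: Iterable[float]) -> Iterator[float]:
    """Uniform streaming: apply the same kernel to every element,
    yielding results as the input arrives (no random access, no
    global buffering)."""
    for x in stream:
        yield kernel(x)

# Example: an assumed gain/offset kernel applied to a (possibly unbounded) stream.
def kernel(x: float) -> float:
    return 2.0 * x + 1.0

samples = (float(n) for n in range(5))      # stands in for a live stream
print(list(stream_map(kernel, samples)))    # [1.0, 3.0, 5.0, 7.0, 9.0]
```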
During the 1980s stream processing was explored within dataflow programming.
|
https://en.wikipedia.org/wiki/Global%20Census%20of%20Marine%20Life%20on%20Seamounts
|
Global Census of Marine Life on Seamounts (commonly CenSeam) is a global scientific initiative, launched in 2005, that is designed to expand the knowledge base of marine life at seamounts. Seamounts are underwater mountains, not necessarily volcanic in origin, which often form subsurface archipelagoes and are found throughout the world's ocean basins, with almost half in the Pacific. There are estimated to be as many as 100,000 seamounts at least one kilometer in height, and more if lower rises are included. However, they have not been explored very much—in fact, only about half of one percent have been sampled—and almost every expedition to a seamount discovers new species and new information. There is evidence that seamounts can host concentrations of biologic diversity, each with its own unique local ecosystem; they seem to affect oceanic currents, resulting among other things in local concentration of plankton which in turn attracts species that graze on it, and indeed are probably a significant overall factor in biogeography of the oceans. They also may serve as way stations in the migration of whales and other pelagic species. Despite being poorly studied, they are heavily targeted by commercial fishing, including dredging. In addition they are of interest to potential seabed mining.
The overall goal of CenSeam is "to determine the role of seamounts in the biogeography, biodiversity, productivity, and evolution of marine organisms, and to evaluate the effects of human exploitation on seamounts." To this effect, the group organizes and contributes to various research efforts about seamount biodiversity. Specifically, the project aims to act as a standardized scaffold for future studies and samplings, citing inefficiency and incompatibility between individual research efforts in the past. To give a scale of their mission, there are an estimated 100,000 seamounts in the ocean, but only 350 of them have been sampled, and only about 100 sampled thoroughly. Althoug
|
https://en.wikipedia.org/wiki/Electric%20power%20conversion
|
In all fields of electrical engineering, power conversion is the process of converting electric energy from one form to another. A power converter is an electrical or electro-mechanical device for converting electrical energy. A power converter can convert alternating current (AC) into direct current (DC) and vice versa; change the voltage or frequency of the current or do some combination of these. The power converter can be as simple as a transformer or it can be a far more complex system, such as a resonant converter. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation.
Power converters are classified based on the type of power conversion they do. One way of classifying power conversion systems is according to whether the input and output are alternating current or direct current. Ultimately, the task of all power converters is to "process and control the flow of electrical energy by supplying voltages and currents in a form that is optimally suited for user loads".
DC power conversion
DC to DC
The following devices can convert DC to DC:
Linear regulator
Voltage regulator
Motor–generator
Rotary converter
Switched-mode power supply
DC to AC
The following devices can convert DC to AC:
Power inverter
Motor–generator
Rotary converter
Switched-mode power supply
Chopper (electronics)
AC power conversion
AC to DC
The following devices can convert AC to DC:
Rectifier
Mains power supply unit (PSU)
Motor–generator
Rotary converter
Switched-mode power supply
AC to AC
The following devices can convert AC to AC:
Transformer or autotransformer
Voltage converter
Voltage regulator
Cycloconverter
Variable-frequency transformer
Motor–generator
Rotary converter
Switched-mode power supply
Other systems
There are also devices and methods to convert between power systems designed for single and three-phase operation.
Th
|
https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon%20interpolation%20formula
|
The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series.
Definition
Given a sequence of real numbers, x[n], the continuous function

x(t) = \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\!\left(\frac{t - nT}{T}\right)
(where "sinc" denotes the normalized sinc function) has a Fourier transform, X(f), whose non-zero values are confined to the region |f| ≤ 1/(2T). When the parameter T has units of seconds, the bandlimit, 1/(2T), has units of cycles/sec (hertz). When the x[n] sequence represents time samples, at interval T, of a continuous function, the quantity fs = 1/T is known as the sample rate, and fs/2 is the corresponding Nyquist frequency. When the sampled function has a bandlimit, B, less than the Nyquist frequency, x(t) is a perfect reconstruction of the original function. (See Sampling theorem.) Otherwise, the frequency components above the Nyquist frequency "fold" into the sub-Nyquist region of X(f), resulting in distortion. (See Aliasing.)
Equivalent formulation: convolution/lowpass filter
The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function:

x(t) = \left( \sum_{n=-\infty}^{\infty} x[n] \, \delta(t - nT) \right) * \operatorname{sinc}\!\left(\frac{t}{T}\right)
This is equivalent to filtering the impulse train with an ideal (brick-wall) low-pass filter with gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter.
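A minimal numerical sketch of the interpolation formula, assuming an arbitrary test tone and sample rate: the samples are combined with shifted, scaled sinc kernels (necessarily truncated to a finite window here) to evaluate the reconstruction at an off-grid instant.

```python
import numpy as np

T  = 0.1                      # sample interval (s); fs = 1/T = 10 Hz
n  = np.arange(-50, 51)       # finite window of sample indices
f0 = 2.0                      # assumed test tone, well below the 5 Hz Nyquist limit

x_n = np.sin(2 * np.pi * f0 * n * T)          # the samples x[n]

def reconstruct(t):
    """Whittaker-Shannon interpolation: sum of sinc kernels (truncated
    to the finite set of samples above)."""
    return np.sum(x_n * np.sinc((t - n * T) / T))

t_test = 0.037                                # an off-grid instant
print("reconstructed:", reconstruct(t_test))
print("true value:   ", np.sin(2 * np.pi * f0 * t_test))
```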
Convergence
The i
|
https://en.wikipedia.org/wiki/Prefix%20delegation
|
IP networks are divided logically into subnetworks. Computers in the same subnetwork have the same address prefix. For example, in a typical home network with legacy Internet Protocol version 4, the network prefix would be something like 192.168.1.0/24, as expressed in CIDR notation.
With IPv4, home networks commonly use private addresses (defined in RFC 1918) that are non-routable on the public Internet and use address translation to convert to routable addresses when connecting to hosts outside the local network. Business networks typically had manually provisioned subnetwork prefixes. In IPv6, global addresses are used end-to-end, so even home networks may need to distribute public, routable IP addresses to hosts.
Since it would not be practical to manually provision networks at scale, in IPv6 networking, DHCPv6 prefix delegation is used to assign a network address prefix and automate configuration and provisioning of the public routable addresses for the network. The way this works for example in the case of a home network is that the home router uses DHCPv6 protocol to request a network prefix from the ISP's DHCPv6 server. Once assigned, the ISP routes this network to the customer's home router and the home router starts advertising the new addresses to hosts on the network, either via SLAAC or using DHCPv6.
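As a concrete sketch of what the home router does with a delegated prefix, the standard-library example below carves per-link /64 subnets out of a /56. The /56 length is an assumption (delegation sizes vary between ISPs), and the example prefix is taken from the IPv6 documentation range.

```python
import ipaddress

# Hypothetical prefix delegated by the ISP's DHCPv6 server (documentation
# range, RFC 3849); real delegations and their lengths vary by ISP.
delegated = ipaddress.ip_network("2001:db8:abcd:ff00::/56")

# The home router can carve the /56 into 256 per-link /64 subnets and
# advertise one on each LAN segment via SLAAC or DHCPv6.
lan_prefixes = list(delegated.subnets(new_prefix=64))
print(len(lan_prefixes))        # 256
print(lan_prefixes[0])          # 2001:db8:abcd:ff00::/64
print(lan_prefixes[1])          # 2001:db8:abcd:ff01::/64
```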
DHCPv6 Prefix Delegation is supported by most ISPs who provide native IPv6 for consumers on fixed networks.
Prefix delegation is generally not supported on cellular networks, for example LTE or 5G. Most cellular networks route a fixed /64 prefix to the subscriber. Personal hotspots may still provide IPv6 access to hosts on the network by using a different technique called Proxy Neighbor Discovery or using the technique described in . One of the reasons why cellular networks may not yet support prefix delegation is that the operators want to use prefixes they can aggregate to a single route. To solve this, defines an optional mechanism and the related DHCPv6 opt
|
https://en.wikipedia.org/wiki/Multiplicative%20noise
|
In signal processing, the term multiplicative noise refers to an unwanted random signal that gets multiplied into some relevant signal during capture, transmission, or other processing.
An important example is the speckle noise commonly observed in radar imagery. Examples of multiplicative noise affecting digital photographs are proper shadows due to undulations on the surface of the imaged objects, shadows cast by complex objects like foliage and Venetian blinds, dark spots caused by dust in the lens or image sensor, and variations in the gain of individual elements of the image sensor array.
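A minimal sketch contrasting additive and multiplicative noise on a one-dimensional signal. The unit-mean Gaussian gain used for the multiplicative model is a common simplification (speckle is often modelled differently), and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)     # some non-negative signal

additive       = clean + rng.normal(0.0, 0.1, t.size)   # noise is added
multiplicative = clean * rng.normal(1.0, 0.1, t.size)   # noise scales the signal

# With multiplicative noise the disturbance grows with the local signal
# level, which is why bright image regions show stronger speckle.
print("additive disturbance std:      ", (additive - clean).std())
print("multiplicative disturbance std:", (multiplicative - clean).std())
```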
|
https://en.wikipedia.org/wiki/Hardware%20watermarking
|
Hardware watermarking, also known as IP core watermarking, is the process of embedding covert marks as design attributes inside a hardware or IP core design itself. Hardware watermarking can apply either to DSP cores (widely used in consumer electronics devices) or to combinational/sequential circuits; both forms are in wide use. In DSP core watermarking, a secret mark is embedded within the logic elements of the DSP core itself, usually implanted in the form of a robust signature either in the RTL design or during high-level synthesis (HLS). The watermarking process of a DSP core leverages the high-level synthesis framework and implants a secret mark in one (or more) of the high-level synthesis phases, such as scheduling, allocation and binding. DSP core watermarking is performed to protect a DSP core from hardware threats such as IP piracy, forgery and false claims of ownership. Some examples of DSP cores are FIR filters, IIR filters, FFT, DFT, JPEG, HWT etc. A few of the most important properties of a DSP core watermarking process are as follows:
(a) Low embedding cost
(b) Secret mark
(c) Low creation time
(d) Strong tamper tolerance
(e) Fault tolerance.
Process of hardware watermarking
Hardware or IP core watermarking in the context of DSP/multimedia cores is significantly different from watermarking of images/digital content. IP cores are usually complex in size and nature and thus require highly sophisticated mechanisms to implant signatures within their design without disturbing the functionality. Even a small change in the functionality of the IP core renders the hardware watermarking process futile, which is a measure of how sensitive the process is. Hardware watermarking can be performed in two ways: (a) single-phase watermarking, (b) multi-phase watermarking.
Single-phase watermarking process
As the name suggests, in single-phase watermarking process the secret marks in the form of
|
https://en.wikipedia.org/wiki/Center%20for%20Advancing%20Electronics%20Dresden
|
The Center for Advancing Electronics Dresden (cfaed) of the Technische Universität Dresden is part of the Excellence Initiative of German universities. The cluster of excellence for microelectronics is funded from 2012 to 2017 by the German Research Foundation (DFG) and unites about 60 investigators and their teams from 11 institutions to act jointly towards reaching the cluster's ambitious aims. The coordinator is Prof. Dr.-Ing. Gerhard Fettweis, Chair of Mobile Communication Systems. The cluster brings together the teams from two universities and several research institutes in Saxony: Technische Universität Dresden, Technische Universität Chemnitz, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Leibniz Institute for Polymer Research Dresden e.V. (IPF), Leibniz Institute for Solid State and Materials Research Dresden (IFW), Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), Max Planck Institute for the Physics of Complex Systems (MPI-PKS), Nanoelectronics Materials Laboratory gGmbH (NaMLab), Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), Fraunhofer Institute of Ceramic Technologies and Systems (Fraunhofer IKTS) and Kurt Schwabe Institute for Measuring and Sensor Technology Meinsberg e.V. (KSI). About 300 scientists from more than 20 different countries are working in nine research paths to investigate completely new technologies for electronic information processing which overcome the limits of today's predominant CMOS technology.
Position and institutional building
One of the scientific buildings, as well as the organizational headquarters, of the cfaed is situated in Dresden-Plauen, Würzburger Straße 46. In May 2015, construction works for the new cfaed building commenced at the campus of TU Dresden. The building is due for completion in late 2017 and it will host new laboratories, seminar rooms, and offices.
History
The initial proposal for cfaed as a Cluster of Excellence was submitted to the DFG in August 2011. On July
|
https://en.wikipedia.org/wiki/Phase%20margin
|
In electronic amplifiers, the phase margin (PM) is the difference between the phase lag φ (< 0) and -180°, measured at the frequency where the amplifier's gain crosses 0 dB (unity gain, i.e. the output signal has the same amplitude as the input):

PM = φ - (-180°) = φ + 180°.
For example, if the amplifier's open-loop gain crosses 0 dB at a frequency where the phase lag is -135°, then the phase margin of this feedback system is -135° -(-180°) = 45°. See Bode plot#Gain margin and phase margin for more details.
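The same calculation can be carried out numerically for any open-loop response: find the frequency where the gain crosses 0 dB and measure the phase there relative to -180°. The two-pole open-loop model below is an assumption chosen only to illustrate the procedure.

```python
import numpy as np

# Hypothetical open-loop response: DC gain 1000 with poles at 10 Hz and 1 kHz.
def L(f):
    s = 2j * np.pi * f
    return 1000.0 / ((1 + s / (2 * np.pi * 10)) * (1 + s / (2 * np.pi * 1e3)))

f = np.logspace(0, 6, 200000)                 # frequency grid, Hz
gain_db = 20 * np.log10(np.abs(L(f)))

i = np.argmin(np.abs(gain_db))                # 0 dB (unity-gain) crossover
phase_deg = np.degrees(np.angle(L(f[i])))     # phase lag, negative
print(f"crossover ~ {f[i]:.0f} Hz, phase {phase_deg:.1f} deg, "
      f"phase margin {phase_deg + 180:.1f} deg")
```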
Theory
Typically the open-loop phase lag (relative to input, < 0) varies with frequency, progressively increasing to exceed 180°, at which frequency the output signal becomes inverted, or antiphase in relation to the input. The PM will be positive but decreasing at frequencies less than the frequency at which inversion sets in (at which PM = 0), and PM is negative (PM < 0) at higher frequencies. In the presence of negative feedback, a zero or negative PM at a frequency where the loop gain exceeds unity (1) guarantees instability. Thus positive PM is a "safety margin" that ensures proper (non-oscillatory) operation of the circuit. This applies to amplifier circuits as well as more generally, to active filters, under various load conditions (e.g. reactive loads). In its simplest form, involving ideal negative feedback voltage amplifiers with non-reactive feedback, the phase margin is measured at the frequency where the open-loop voltage gain of the amplifier equals the desired closed-loop DC voltage gain.
More generally, PM is defined as that of the amplifier and its feedback network combined (the "loop", normally opened at the amplifier input), measured at a frequency where the loop gain is unity, and prior to the closing of the loop, through tying the output of the open loop to the input source, in such a way as to subtract from it.
In the above loop-gain definition, it is assumed that the amplifier input presents zero load. To make this work for non-zero-load input,
|
https://en.wikipedia.org/wiki/Fetal%20pig
|
Fetal pigs are unborn pigs used in elementary as well as advanced biology classes as objects for dissection. Pigs, as a mammalian species, provide a good specimen for the study of physiological systems and processes due to the similarities between many pig and human organs.
Use in biology labs
Along with frogs and earthworms, fetal pigs are among the most common animals used in classroom dissection. There are several reasons for this, the main reason being that pigs, like humans, are mammals. Shared traits include common hair, mammary glands, live birth, similar organ systems, metabolic levels, and basic body form. They also allow for the study of fetal circulation, which differs from that of an adult. Secondly, fetal pigs are easy to obtain because they are by-products of the pork industry. Fetal pigs are the unborn piglets of sows that were killed by the meat-packing industry. These pigs are not bred and killed for this purpose, but are extracted from the deceased sow’s uterus. Fetal pigs not used in classroom dissections are often used in fertilizer or simply discarded. Thirdly, fetal pigs are cheap, which is an essential component for dissection use by schools. They can be ordered for about $30 at biological product companies. Fourthly, fetal pigs are easy to dissect because of their soft tissue and incompletely developed bones that are still made of cartilage. In addition, they are relatively large with well-developed organs that are easily visible. As long as the pork industry exists, fetal pigs will be relatively abundant, making them the prime choice for classroom dissections.
Alternatives
Several peer-reviewed comparative studies have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection.
A systematic review concluded that students taught using non-animal m
|
https://en.wikipedia.org/wiki/Wet-folding
|
Wet-folding is an origami technique developed by Akira Yoshizawa that employs water to dampen the paper so that it can be manipulated more easily. This process adds an element of sculpture to origami, which is otherwise purely geometric. Wet-folding is used very often by professional folders for non-geometric origami, such as animals. Wet-folders usually employ thicker paper than what would usually be used for normal origami, to ensure that the paper does not tear.
One of the most prominent users of the wet-folding technique is Éric Joisel, who specialized in origami animals, humans, and legendary creatures. He also created origami masks. Other folders who practice this technique are Robert J. Lang and John Montroll.
The process of wet-folding allows a folder to preserve a curved shape more easily. It also reduces the number of wrinkles substantially. Wet-folding allows for increased rigidity and structure because of sizing: a water-soluble adhesive, usually methylcellulose or methyl acetate, that may be added during the manufacture of the paper. As the paper dries, the chemical bonds of the fibers of the paper tighten together, which results in a crisper and stronger sheet. In order to moisten the paper, an artist typically wipes the sheet with a dampened cloth. The amount of moisture added to the paper is crucial because too little will cause the paper to dry quickly and spring back into its original position before the folding is complete, while too much will either fray the edges of the paper or will cause the paper to split at high-stress points.
Notes and references
See also
Papier-mâché
External links
Mini-documentary about Joisel at YouTube
An illustrated introduction to wet-folding
|
https://en.wikipedia.org/wiki/List%20of%20thermodynamic%20properties
|
In thermodynamics, a physical property is any property that is measurable, and whose value describes a state of a physical system. Thermodynamic properties are defined as characteristic features of a system, capable of specifying the system's state. Some constants, such as the ideal gas constant, R, do not describe the state of a system, and so are not properties. On the other hand, some constants, such as Kf (the freezing point depression constant, or cryoscopic constant), depend on the identity of a substance, and so may be considered to describe the state of a system, and therefore may be considered physical properties.
"Specific" properties are expressed on a per mass basis. If the units were changed from per mass to, for example, per mole, the property would remain as it was (i.e., intensive or extensive).
Regarding work and heat
Work and heat are not thermodynamic properties, but rather process quantities: flows of energy across a system boundary. Systems do not contain work, but can perform work, and likewise, in formal thermodynamics, systems do not contain heat, but can transfer heat. Informally, however, a difference in the energy of a system that occurs solely because of a difference in its temperature is commonly called heat, and the energy that flows across a boundary as a result of a temperature difference is "heat".
Altitude (or elevation) is usually not a thermodynamic property. Altitude can help specify the location of a system, but that does not describe the state of the system. An exception would be if the effect of gravity needs to be considered in order to describe a state, in which case altitude could indeed be a thermodynamic property.
See also
Conjugate variables
Dimensionless numbers
Intensive and extensive properties
Thermodynamic databases for pure substances
Thermodynamic variable
|
https://en.wikipedia.org/wiki/Networked%20music%20performance
|
A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes. Participants may be connected by "high fidelity multichannel audio and video links" as well as MIDI data connections and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression. Remote audience members and possibly a conductor may also participate.
History
One of the earliest examples of a networked music performance experiment was the 1951 piece: “Imaginary Landscape No. 4 for Twelve Radios” by composer John Cage. The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.”
In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music.
The 1990s saw several important experiments in networked performance. In 1993, The University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet. The Hub (band), which was formed by original members of The League of Automatic Composers, experimented in 1997 with sending MIDI data over ethernet to distributed locations. However, “it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”. In 1998, there was a three-way audio-only performan
|
https://en.wikipedia.org/wiki/Principle%20of%20least%20privilege
|
In information security, computer science, and other fields, the principle of least privilege (PoLP), also known as the principle of minimal privilege (PoMP) or the principle of least authority (PoLA), requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
Details
The principle means giving any user account or process only those privileges which are essential to perform its intended function. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications. Any other privileges, such as installing new software, are blocked. The principle applies also to a personal computer user who usually does work in a normal user account, and opens a privileged, password protected account only when the situation absolutely demands it.
When applied to users, the terms least user access or least-privileged user account (LUA) are also used, referring to the concept that all user accounts should run with as few privileges as possible, and also launch applications with as few privileges as possible.
The principle of least privilege is widely recognized as an important design consideration for enhancing the protection of data and functionality from faults (fault tolerance) and from malicious behavior.
Benefits of the principle include:
Intellectual Security. When code is limited in the scope of changes it can make to a system, it is easier to test its possible actions and interactions with other security targeted applications. In practice for example, applications running with restricted rights will not have access to perform operations that could crash a machine, or adversely affect other applications running on the
|
https://en.wikipedia.org/wiki/Log-spectral%20distance
|
The log-spectral distance (LSD), also referred to as log-spectral distortion or root mean square log-spectral distance, is a distance measure between two spectra. The log-spectral distance between spectra P(ω) and P̂(ω) is defined as the p-norm:

D_{LS} = \left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| \log P(\omega) - \log \hat{P}(\omega) \right|^{p} \, d\omega \right)^{1/p}

where P(ω) and P̂(ω) are power spectra.
Unlike the Itakura–Saito distance, the log-spectral distance is symmetric.
In speech coding, log spectral distortion for a given frame is defined as the root mean square difference between the original LPC log power spectrum and the quantized or interpolated LPC log power spectrum. Usually the average of spectral distortion over a large number of frames is calculated and that is used as the measure of performance of quantization or interpolation.
Meaning
When measuring the distortion between signals, the scale or temporality/spatiality of the signals can have different levels of significance to the distortion measures. To incorporate the proper level of significance, the signals can be transformed into a different domain.
When the signals are transformed into the spectral domain with transformation methods such as Fourier transform and DCT, the spectral distance is the measure to compare the transformed signals. LSD incorporates the logarithmic characteristics of the power spectra, and it becomes effective when the processing task of the power spectrum also has logarithmic characteristics, e.g. human listening to the sound signal with different levels of loudness.
Moreover, LSD is equal to the cepstral distance which is the distance between the signals' cepstrum when the p-numbers are the same by Parseval's theorem.
Other Representations
As LSD is in the form of p-norm, it can be represented with different p-numbers and log scales.
For instance, when it is expressed in dB with the L2 norm, it is defined as:

D_{LS} = \sqrt{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ 10 \log_{10} \frac{P(\omega)}{\hat{P}(\omega)} \right]^{2} \, d\omega }.

When it is represented in the discrete space, it is defined as:

D_{LS} = \sqrt{ \frac{1}{N} \sum_{n=0}^{N-1} \left[ 10 \log_{10} \frac{P(n)}{\hat{P}(n)} \right]^{2} }

where P(n) and P̂(n) are power spectra in discrete space.
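A minimal sketch of the discrete form above, assuming a simple sinusoidal test signal and a mildly corrupted copy of it; the power spectra are taken with an FFT and the root-mean-square log-spectral difference is reported in dB.

```python
import numpy as np

def log_spectral_distance(p, p_hat, eps=1e-12):
    """Root-mean-square log-spectral distance, in dB, between two
    (nonnegative) power spectra of equal length."""
    diff_db = 10 * np.log10((p + eps) / (p_hat + eps))
    return np.sqrt(np.mean(diff_db ** 2))

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.05 * t)                 # reference signal
y = x + 0.05 * rng.normal(size=t.size)           # mildly corrupted version

P     = np.abs(np.fft.rfft(x)) ** 2              # power spectra
P_hat = np.abs(np.fft.rfft(y)) ** 2

print(f"LSD = {log_spectral_distance(P, P_hat):.2f} dB")
```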
See also
Itakura–Saito distance
|
https://en.wikipedia.org/wiki/CAN%20bus
|
A Controller Area Network (CAN bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it can also be used in many other contexts. For each device, the data in a frame is transmitted serially but in such a way that if more than one device transmits at the same time, the highest priority device can continue while the others back off. Frames are received by all devices, including by the transmitting device.
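The priority behaviour comes from bitwise arbitration on the message identifier: a dominant bit (logic 0) overrides a recessive bit (logic 1) on the bus, so the frame with the numerically lowest identifier wins while the other transmitters back off and retry. The sketch below models only that arbitration logic; the identifiers are made-up examples.

```python
def arbitrate(ids, id_bits=11):
    """Model CAN bitwise arbitration: on each bit, a dominant 0 on the bus
    overrides a recessive 1; a node that sent recessive but reads dominant
    drops out. The lowest identifier therefore wins."""
    contenders = list(ids)
    for bit in range(id_bits - 1, -1, -1):          # MSB first
        sent = {i: (i >> bit) & 1 for i in contenders}
        bus = 0 if any(b == 0 for b in sent.values()) else 1   # wired-AND bus
        contenders = [i for i in contenders if sent[i] == bus]
    assert len(contenders) == 1
    return contenders[0]

# Three nodes start transmitting simultaneously with these 11-bit IDs.
print(hex(arbitrate([0x1A3, 0x0F0, 0x25B])))   # 0xf0 wins (lowest ID)
```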
History
Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986 at the Society of Automotive Engineers (SAE) conference in Detroit, Michigan. The first CAN controller chips were introduced by Intel in 1987, and shortly thereafter by Philips. Released in 1991, the Mercedes-Benz W140 was the first production vehicle to feature a CAN-based multiplex wiring system.
Bosch published several versions of the CAN specification. The latest is CAN 2.0, published in 1991. This specification has two parts. Part A is for the standard format with an 11-bit identifier, and part B is for the extended format with a 29-bit identifier. A CAN device that uses 11-bit identifiers is commonly called CAN 2.0A, and a CAN device that uses 29-bit identifiers is commonly called CAN 2.0B. These standards are freely available from Bosch along with other specifications and white papers.
In 1993, the International Organization for Standardization (ISO) released CAN standard ISO 11898, which was later restructured into two parts: ISO 11898-1 which covers the data link layer, and ISO 11898-2 which covers the CAN physical layer for high-speed CAN. ISO 11898-3 was released later and covers the CAN physical layer for low-speed, fault-tolerant CAN. The physical layer standards ISO 11898-2 and ISO 11898-3 are not part of the Bosch C
|
https://en.wikipedia.org/wiki/Nova%20classification
|
The Nova classification ('new classification') is a framework for grouping edible substances based on the extent and purpose of food processing applied to them. Researchers at the University of São Paulo, Brazil, proposed the system in 2009.
Nova classifies food into four groups:
Unprocessed or minimally processed foods
Processed culinary ingredients
Processed foods
Ultra-processed foods
The system has been used worldwide in nutrition and public health research, policy, and guidance as a tool for understanding the health implications of different food products.
History
The Nova classification grew out of the research of Carlos Augusto Monteiro. Born in 1948 into a Brazilian family straddling the divide between poverty and relative affluence, Monteiro was the first member of his family to attend university. His early research in the late 1970s focused on malnutrition, reflecting the prevailing emphasis in nutrition science of the time. In the mid-1990s, Monteiro observed a significant shift in Brazil's dietary landscape marked by a rise in obesity rates among economically disadvantaged populations, while more affluent areas saw declines. This transformation led him to explore dietary patterns holistically, rather than focusing solely on individual nutrients. Employing statistical methods, Monteiro identified two distinct eating patterns in Brazil: one rooted in traditional foods like rice and beans and another characterized by the consumption of highly processed products.
The classification's name is from the title of the original scientific article in which it was published, 'A new classification of foods' (). The idea of applying this as the classification's name is credited to Jean-Claude Moubarac of the Université de Montréal. The name is often styled in capital letters, NOVA, but it is not an acronym. Recent scientific literature leans towards writing the name as Nova, including papers written with Monteiro's involvement.
Nova food pr
|
https://en.wikipedia.org/wiki/Van%20der%20Waerden%20notation
|
In theoretical physics, Van der Waerden notation refers to the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. This is standard in twistor theory and supersymmetry. It is named after Bartel Leendert van der Waerden.
Dotted indices
Undotted indices (chiral indices)
Spinors with lower undotted indices have a left-handed chirality, and are called chiral indices.
Dotted indices (anti-chiral indices)
Spinors with raised dotted indices, plus an overbar on the symbol (not index), are right-handed, and called anti-chiral indices.
Without the indices, i.e. in "index free notation", an overbar is retained on the right-handed spinor, since otherwise the chirality would be ambiguous when no index is indicated.
Hatted indices
Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if
then a spinor in the chiral basis is represented as
where
In this notation the Dirac adjoint (also called the Dirac conjugate) is
See also
Dirac equation
Infeld–Van der Waerden symbols
Lorentz transformation
Pauli equation
Ricci calculus
Notes
|
https://en.wikipedia.org/wiki/SUMIT
|
Stackable Unified Module Interconnect Technology (SUMIT) is a connector between expansion buses independent of motherboard form factor. Boards featuring SUMIT connectors are usually used in "stacks" where one board sits on top of another.
It was published by the Small Form Factor Special Interest Group.
Details
Two identical connectors, commonly referred to as SUMIT A and SUMIT B, carry the signals specified by the standard. Designers have the option of using both SUMIT A and SUMIT B, or just SUMIT A. The signals carried within each connector are as follows:
SUMIT A:
One PCI-Express x1 lane
Four USB 2.0
ExpressCard
LPC
SPI/uWire
SMBus/I2C Bus
SUMIT B:
One PCI-Express x1 lane
One PCI-Express x4 or four more PCI-Express x1 lanes
As of August 2009, three board form factors used the SUMIT connectors for embedded applications: ISM or SUMIT-ISM [90mm × 96mm], Pico-ITXe [72mm × 100mm], and Pico-I/O [60mm × 72mm].
See also
VMEbus
VPX
CompactPCI
PC/104
Pico-ITXe
|
https://en.wikipedia.org/wiki/Catalog%20of%20articles%20in%20probability%20theory
|
This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to the number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents.
Core probability: selected topics
Probability theory
Basic notions (bsc)
Random variable
Continuous probability distribution / (1:C)
Cumulative distribution function / (1:DCR)
Discrete probability distribution / (1:D)
Independent and identically-distributed random variables / (FS:BDCR)
Joint probability distribution / (F:DC)
Marginal distribution / (2F:DC)
Probability density function / (1:C)
Probability distribution / (1:DCRG)
Probability distribution function
Probability mass function / (1:D)
Sample space
Instructive examples (paradoxes) (iex)
Berkson's paradox / (2:B)
Bertrand's box paradox / (F:B)
Borel–Kolmogorov paradox / cnd (2:CM)
Boy or Girl paradox / (2:B)
Exchange paradox / (2:D)
Intransitive dice
Monty Hall problem / (F:B)
Necktie paradox
Simpson's paradox
Sleeping Beauty problem
St. Petersburg paradox / mnt (1:D)
Three Prisoners problem
Two envelopes problem
Moments (mnt)
Expected value / (12:DCR)
Canonical correlation / (F:R)
Carleman's condition / anl (1:R)
Central moment / (1:R)
Coefficient of variation / (1:R)
Correlation / (2:R)
Correlation function / (U:R)
Covariance / (2F:R) (1:G)
Covariance function / (U:R)
Covariance matrix / (F:R)
Cumulant / (12F:DCR)
Factorial moment / (1:R)
Factorial moment generating function / anl (1:R)
Fano factor
Geometric standard deviation / (1:R)
Hamburger moment problem / anl (1:R)
Hausdorff moment problem / anl (1:R)
Isserlis Gaussian moment theorem / Gau
Jensen's inequality / (1:DCR
|
https://en.wikipedia.org/wiki/Noise%20%28signal%20processing%29
|
In signal processing, noise is a general term for unwanted (and, in general, unknown) modifications that a signal may suffer during capture, storage, transmission, processing, or conversion.
Sometimes the word is also used to mean signals that are random (unpredictable) and carry no useful information, even if they do not interfere with other signals or may have been introduced intentionally, as in comfort noise.
Noise reduction, the recovery of the original signal from the noise-corrupted one, is a very common goal in the design of signal processing systems, especially filters. The mathematical limits for noise removal are set by information theory.
Types of noise
Signal processing noise can be classified by its statistical properties (sometimes called the "color" of the noise) and by how it modifies the intended signal:
Additive noise, gets added to the intended signal
White noise
Additive white Gaussian noise
Black noise
Gaussian noise
Pink noise or flicker noise, with 1/f power spectrum
Brownian noise, with 1/f2 power spectrum
Contaminated Gaussian noise, whose PDF is a linear mixture of Gaussian PDFs
Power-law noise
Cauchy noise
Multiplicative noise, multiplies or modulates the intended signal
Quantization error, due to conversion from continuous to discrete values
Poisson noise, typical of signals that are rates of discrete events
Shot noise, e.g. caused by static electricity discharge
Transient noise, a short pulse followed by decaying oscillations
Burst noise, powerful but only during short intervals
Phase noise, random time shifts in a signal
Noise in specific kinds of signals
Noise may arise in signals of interest to various scientific and technical fields, often with specific features:
Noise (audio), such as "hiss" or "hum", in audio signals
Background noise, due to spurious sounds during signal capture
Comfort noise, added to voice communications to fill silent gaps
Electromagnetically induced noise, audible noise due to el
|
https://en.wikipedia.org/wiki/Quasi-analog%20signal
|
In telecommunication, a quasi-analog signal is a digital signal that has been converted to a form suitable for transmission over a specified analog channel.
The specification of the analog channel should include frequency range, bandwidth, signal-to-noise ratio, and envelope delay distortion. When the quasi-analog form of signaling is used to convey message traffic over dial-up telephone systems, it is often referred to as voice-data. A modem may be used for the conversion process.
|
https://en.wikipedia.org/wiki/High-%CE%BA%20dielectric
|
In the semiconductor industry, the term high-κ dielectric refers to a material with a high dielectric constant (κ, kappa), as compared to silicon dioxide. High-κ dielectrics are used in semiconductor manufacturing processes where they are usually used to replace a silicon dioxide gate dielectric or another dielectric layer of a device. The implementation of high-κ gate dielectrics is one of several strategies developed to allow further miniaturization of microelectronic components, colloquially referred to as extending Moore's Law. Sometimes these materials are called "high-k" (pronounced "high kay"), instead of "high-κ" (high kappa).
Need for high-κ materials
Silicon dioxide (SiO2) has been used as a gate oxide material for decades. As metal–oxide–semiconductor field-effect transistors (MOSFETs) have decreased in size, the thickness of the silicon dioxide gate dielectric has steadily decreased to increase the gate capacitance (per unit area) and thereby drive current (per device width), raising device performance. As the thickness scales below 2 nm, leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows increased gate capacitance without the associated leakage effects.
First principles
The gate oxide in a MOSFET can be modeled as a parallel plate capacitor. Ignoring quantum mechanical and depletion effects from the Si substrate and gate, the capacitance of this parallel plate capacitor is given by

C = \frac{\kappa \varepsilon_0 A}{t}

where

A is the capacitor area
κ is the relative dielectric constant of the material (3.9 for silicon dioxide)
ε0 is the permittivity of free space
t is the thickness of the capacitor oxide insulator
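A minimal numerical sketch of this formula, comparing a thin silicon dioxide layer with a physically thicker high-κ layer that gives the same capacitance per unit area. The hafnium-oxide-like κ value of 25 and the 1.5 nm oxide thickness are assumed, typical figures, not values from this article.

```python
EPS0   = 8.854e-12        # permittivity of free space, F/m
K_SIO2 = 3.9              # relative dielectric constant of SiO2
K_HIGH = 25.0             # assumed value for a high-k oxide such as HfO2

def cap_per_area(k, t):
    """Parallel-plate capacitance per unit area, C/A = k * eps0 / t."""
    return k * EPS0 / t

t_sio2 = 1.5e-9                               # 1.5 nm SiO2 (leaky)
c_target = cap_per_area(K_SIO2, t_sio2)

# Physical thickness of the high-k film giving the same capacitance:
t_highk = K_HIGH * EPS0 / c_target
print(f"same C/A with {t_highk*1e9:.1f} nm of high-k instead of 1.5 nm SiO2")
# A much thicker film, hence far less tunneling leakage.
```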
Since leakage limitation constrains further reduction of t, an alternative method to increase gate capacitance is to alter κ by replacing silicon dioxide with a high-κ material. In such a scenario, a thicker gate oxide layer might
|
https://en.wikipedia.org/wiki/Animal%20efficacy%20rule
|
The FDA animal efficacy rule (also known as animal rule) applies to development and testing of drugs and biologicals to reduce or prevent serious or life-threatening conditions caused by exposure to lethal or permanently disabling toxic agents (chemical, biological, radiological, or nuclear substances), where human efficacy trials are not feasible or ethical. The animal efficacy rule was finalized by the FDA and authorized by the United States Congress in 2002, following the September 11 attacks and concerns regarding bioterrorism.
Summary
The FDA can rely on evidence from animal studies to provide substantial evidence of product effectiveness if:
There is a reasonably well-understood mechanism for the toxicity of the agent and its amelioration or prevention by the product;
The effect is demonstrated in either:
More than one animal species expected to react with a response predictive for humans; or
One well-characterized animal species model (adequately evaluated for its responsiveness in humans) for predicting the response in humans.
The animal study endpoint is clearly related to the desired benefit in humans; and
Data or information on the pharmacokinetics and pharmacodynamics of the product or other relevant data or information in animals or humans is sufficiently well understood to allow selection of an effective dose in humans, and it is, therefore, reasonable to expect the effectiveness of the product in animals to be a reliable indicator of its effectiveness in humans.
FDA published a Guidance for Industry on the Animal Rule in October 2015.
|
https://en.wikipedia.org/wiki/What%20Is%20Life%3F
|
What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches.
Background
The book, published i
|
https://en.wikipedia.org/wiki/Programmable%20logic%20device
|
A programmable logic device (PLD) is an electronic component used to build reconfigurable digital circuits. Unlike digital logic constructed using discrete logic gates with fixed functions, a PLD has an undefined function at the time of manufacture. Before the PLD can be used in a circuit it must be programmed to implement the desired function. Compared to fixed logic devices, programmable logic devices simplify the design of complex logic and may offer superior performance. Unlike for microprocessors, programming a PLD changes the connections made between the gates in the device.
PLDs can broadly be categorised into, in increasing order of complexity, Simple Programmable Logic Devices (SPLDs), comprising programmable array logic, programmable logic array and generic array logic; Complex Programmable Logic Devices (CPLDs) and Field-Programmable Gate Arrays (FPGAs).
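As a rough illustration of what "programming" such a device means, the sketch below models a tiny PLA-style sum-of-products structure: the programming consists entirely of choosing which input literals feed each AND term and which AND terms feed the OR plane. The example function (the carry output of a full adder) is an arbitrary assumption.

```python
from itertools import product

# A tiny sum-of-products "PLA" model: each product term lists the literals
# it ANDs together ('a' means input a, '!a' means NOT a); the OR plane
# simply ORs the selected product terms. Programming the device amounts to
# choosing these tables.
def make_pla(product_terms):
    def evaluate(**inputs):
        def literal(lit):
            return not inputs[lit[1:]] if lit.startswith("!") else inputs[lit]
        return any(all(literal(l) for l in term) for term in product_terms)
    return evaluate

# Example programming: carry-out of a full adder, cout = ab + ac + bc.
carry_out = make_pla([("a", "b"), ("a", "c"), ("b", "c")])

for a, b, c in product((False, True), repeat=3):
    print(int(a), int(b), int(c), "->", int(carry_out(a=a, b=b, c=c)))
```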
History
In 1969, Motorola offered the XC157, a mask-programmed gate array with 12 gates and 30 uncommitted input/output pins.
In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array (PLA) for this device.
In 1971, General Electric Company (GE) was developing a programmable logic device based on the new Programmable Read-Only Memory (PROM) technology. This experimental device improved on IBM's ROAM by allowing multilevel logic. Intel had just introduced the floating-gate UV erasable PROM so the researcher at GE incorporated that technology. The GE device was the first erasable PLD ever developed, predating the Altera EPLD by over a decade. GE obtained several early patents on programmable logic devices.
In 1973 National Semiconductor introduced a mask-programmable PLA device (DM7575) with 14 inputs and 8 outputs with no m
|
https://en.wikipedia.org/wiki/List%20of%20gravitational%20wave%20observations
|
This page contains a list of observed/candidate gravitational wave events.
Origin and nomenclature
Direct observation of gravitational waves, which commenced with the detection of an event by LIGO in 2015, plays a key role in gravitational wave astronomy. LIGO has been involved in all subsequent detections to date, with Virgo joining in August 2017.
Joint observation runs of LIGO and Virgo, designated O1, O2, and so on, span many months, with months of maintenance and upgrades in between designed to increase the instruments' sensitivity and range. Within these run periods, the instruments are capable of detecting gravitational waves.
The first run, O1, ran from September 12, 2015, to January 19, 2016, and succeeded in making its first gravitational wave detection. O2 ran for a greater duration, from November 30, 2016, to August 25, 2017. O3 began on April 1, 2019; it was briefly suspended on September 30, 2019, for maintenance and upgrades, the first segment being designated O3a. O3b marks the resumption of the run and began on November 1, 2019. Due to the COVID-19 pandemic, O3 was forced to end prematurely. O4 is planned to begin on May 24, 2023; initially planned for March, the project needed more time to stabilize the instruments.
The O4 observing run has been extended from one year to 18 months, following plans to make further upgrades for the O5 run. Updated observing plans are published on the official website, containing the latest information on these runs.
Gravitational wave events are named starting with the prefix GW, while observations that trigger an event alert but have not (yet) been confirmed are named starting with the prefix S. Six digits then indicate the date of the event, with the two first digits representing the year, the two middle digits the month and two final digits the day of observation. This is similar to the systematic naming for other kinds of astronomical event observations, such as those of gamma-ray bursts.
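As a quick illustration of this naming scheme, the short Python sketch below splits an event designation into its prefix and the encoded observation date. The helper itself is hypothetical (in particular, it assumes the two-digit year belongs to the 2000s and simply ignores any trailing suffix), but the two example names are real published events.

```python
import re
from datetime import date

def parse_gw_name(name: str):
    """Split an event designation such as 'GW150914' into its prefix and
    the UTC observation date encoded in the six digits (year assumed 20YY)."""
    m = re.fullmatch(r"([A-Z]+)(\d{6})([a-z_0-9]*)", name)
    if m is None:
        raise ValueError(f"unrecognised event name: {name}")
    prefix, digits, _suffix = m.groups()   # any trailing suffix is ignored here
    yy, mm, dd = int(digits[:2]), int(digits[2:4]), int(digits[4:6])
    return prefix, date(2000 + yy, mm, dd)

print(parse_gw_name("GW150914"))   # ('GW', datetime.date(2015, 9, 14))
print(parse_gw_name("GW170817"))   # ('GW', datetime.date(2017, 8, 17))
```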
Probable detections that are not confidently identified as gravi
|
https://en.wikipedia.org/wiki/Eukaryote
|
The eukaryotes () constitute the domain of Eukarya, organisms whose cells have a membrane-bound nucleus. All animals, plants, fungi, and many unicellular organisms are eukaryotes. They constitute a major group of life forms alongside the two groups of prokaryotes: the Bacteria and the Archaea. Eukaryotes represent a small minority of the number of organisms, but due to their generally much larger size, their collective global biomass is much larger than that of prokaryotes.
The eukaryotes seemingly emerged within the Archaea, among the Asgard archaea. This implies that there are only two domains of life, Bacteria and Archaea, with eukaryotes incorporated among the Archaea. Eukaryotes emerged approximately 2.2 billion years ago, during the Proterozoic eon, likely as flagellated cells. The leading evolutionary theory is that they arose through symbiogenesis between an anaerobic Asgard archaean and an aerobic proteobacterium, which formed the mitochondria. A second episode of symbiogenesis with a cyanobacterium created the plants, with chloroplasts. The oldest-known eukaryote fossils, multicellular planktonic organisms belonging to the Gabonionta, were discovered in Gabon in 2023, dating back to 2.1 billion years ago.
Eukaryotic cells contain membrane-bound organelles such as the nucleus, the endoplasmic reticulum, and the Golgi apparatus. Eukaryotes may be either unicellular or multicellular. In comparison, prokaryotes are typically unicellular. Unicellular eukaryotes are sometimes called protists. Eukaryotes can reproduce both asexually through mitosis and sexually through meiosis and gamete fusion (fertilization).
Diversity
Eukaryotes are organisms that range from microscopic single cells, such as picozoans under 3 micrometres across, to animals like the blue whale, weighing up to 190 tonnes and measuring up to about 30 metres long, or plants like the coast redwood, up to about 115 metres tall. Many eukaryotes are unicellular; the informal grouping called protists includes many of these, with some
|
https://en.wikipedia.org/wiki/List%20of%20functional%20analysis%20topics
|
This is a list of functional analysis topics.
See also: Glossary of functional analysis.
Hilbert space
Functional analysis, classic results
Operator theory
Banach space examples
Lp space
Hardy space
Sobolev space
Tsirelson space
ba space
Real and complex algebras
Topological vector spaces
Amenability
Amenable group
Von Neumann conjecture
Wavelets
Quantum theory
See also list of mathematical topics in quantum theory
Probability
Free probability
Bernstein's theorem
Non-linear
Fixed-point theorems in infinite-dimensional spaces
History
Stefan Banach (1892–1945)
Hugo Steinhaus (1887–1972)
John von Neumann (1903-1957)
Alain Connes (born 1947)
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
Earliest Known Uses of Some of the Words of Mathematics: Matrices and Linear Algebra
Functional analysis
|
https://en.wikipedia.org/wiki/Kingdom%20%28biology%29
|
In biology, a kingdom is the second highest taxonomic rank, just below domain. Kingdoms are divided into smaller groups called phyla.
Traditionally, some textbooks from the United States and Canada used a system of six kingdoms (Animalia, Plantae, Fungi, Protista, Archaea/Archaebacteria, and Bacteria or Eubacteria), while textbooks in other parts of the world, such as the United Kingdom, Pakistan, Bangladesh, India, Greece, and Brazil, use five kingdoms only (Animalia, Plantae, Fungi, Protista and Monera).
Some recent classifications based on modern cladistics have explicitly abandoned the term kingdom, noting that some traditional kingdoms are not monophyletic, meaning that they do not consist of all the descendants of a common ancestor. The terms flora (for plants), fauna (for animals), and, in the 21st century, funga (for fungi) are also used for life present in a particular region or time.
Definition and associated terms
When Carl Linnaeus introduced the rank-based system of nomenclature into biology in 1735, the highest rank was given the name "kingdom" and was followed by four other main or principal ranks: class, order, genus and species. Later two further main ranks were introduced, making the sequence kingdom, phylum or division, class, order, family, genus and species. In 1990, the rank of domain was introduced above kingdom.
Prefixes can be added so subkingdom (subregnum) and infrakingdom (also known as infraregnum) are the two ranks immediately below kingdom. Superkingdom may be considered as an equivalent of domain or empire or as an independent rank between kingdom and domain or subdomain. In some classification systems the additional rank branch (Latin: ramus) can be inserted between subkingdom and infrakingdom, e.g., Protostomia and Deuterostomia in the classification of Cavalier-Smith.
History
Two kingdoms of life
The classification of living things into animals and plants is an ancient one. Aristotle (384–322 BC) classified anima
|
https://en.wikipedia.org/wiki/Inverter%20%28logic%20gate%29
|
In digital logic, an inverter or NOT gate is a logic gate which implements logical negation. It outputs a bit opposite of the bit that is put into it. The bits are typically implemented as two differing voltage levels.
Description
The NOT gate outputs a zero when given a one, and a one when given a zero. Hence, it inverts its inputs. Colloquially, this inversion of bits is called "flipping" bits. As with all binary logic gates, other pairs of symbols such as true and false, or high and low may be used in lieu of one and zero.
It is equivalent to the logical negation operator (¬) in mathematical logic. Because it has only one input, it is a unary operation and has the simplest type of truth table. It is also called the complement gate because it produces the ones' complement of a binary number, swapping 0s and 1s.
The NOT gate is one of three basic logic gates from which any Boolean circuit may be built up. Together with the AND gate and the OR gate, any function in binary mathematics may be implemented. All other logic gates may be made from these three.
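As a minimal sketch of that completeness (not tied to any particular hardware), the Python below builds a few other common gates purely from NOT, AND and OR operating on single bits:

```python
# Building further gates from NOT, AND and OR on single bits (0 or 1).
NOT = lambda a: 1 - a
AND = lambda a, b: a & b
OR  = lambda a, b: a | b

NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))
XOR  = lambda a, b: OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", NAND(a, b), "NOR:", NOR(a, b), "XOR:", XOR(a, b))
```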
The terms "programmable inverter" or "controlled inverter" do not refer to this gate; instead, these terms refer to the XOR gate because it can conditionally function like a NOT gate.
Symbols
The traditional symbol for an inverter circuit is a triangle touching a small circle or "bubble". Input and output lines are attached to the symbol; the bubble is typically attached to the output line. To symbolize active-low input, sometimes the bubble is instead placed on the input line. Sometimes only the circle portion of the symbol is used, and it is attached to the input or output of another gate; the symbols for NAND and NOR are formed in this way.
A bar or overline ( ‾ ) above a variable can denote negation (or inversion or complement) performed by a NOT gate. A slash (/) before the variable is also used.
Electronic implementation
An inverter circuit outputs a voltage representing the opposite logic-level to
|
https://en.wikipedia.org/wiki/Iron%20oxide%20nanoparticle
|
Iron oxide nanoparticles are iron oxide particles with diameters between about 1 and 100 nanometers. The two main forms are composed of magnetite (Fe3O4) and its oxidized form maghemite (γ-Fe2O3). They have attracted extensive interest due to their superparamagnetic properties and their potential applications in many fields (although cobalt and nickel are also highly magnetic materials, they are toxic and easily oxidized), including molecular imaging.
Applications of iron oxide nanoparticles include terabit magnetic storage devices, catalysis, sensors, superparamagnetic relaxometry, high-sensitivity biomolecular magnetic resonance imaging, magnetic particle imaging, magnetic fluid hyperthermia, separation of biomolecules, and targeted drug and gene delivery for medical diagnosis and therapeutics. These applications require coating of the nanoparticles by agents such as long-chain fatty acids, alkyl-substituted amines, and diols. They have been used in formulations for supplementation.
Structure
Magnetite has an inverse spinel structure with oxygen forming a face-centered cubic crystal system. In magnetite, all tetrahedral sites are occupied by Fe3+ and the octahedral sites are occupied by both Fe3+ and Fe2+. Maghemite differs from magnetite in that all or most of the iron is in the trivalent state (Fe3+) and by the presence of cation vacancies in the octahedral sites. Maghemite has a cubic unit cell in which each cell contains 32 oxygen ions, 21⅓ Fe3+ ions and 2⅔ vacancies. The cations are distributed randomly over the 8 tetrahedral and 16 octahedral sites.
Magnetic properties
Due to its four unpaired electrons in the 3d shell, an iron atom has a strong magnetic moment. Fe2+ ions also have four unpaired electrons in the 3d shell, and Fe3+ ions have five unpaired electrons in the 3d shell. Therefore, when crystals are formed from iron atoms or from Fe2+ and Fe3+ ions, they can be in ferromagnetic, antiferromagnetic, or ferrimagnetic states.
In the paramagnetic state, the individual atomic magnetic moments are randomly oriented, and the substance
|
https://en.wikipedia.org/wiki/Index%20of%20combinatorics%20articles
|
A
Abstract simplicial complex
Addition chain
Scholz conjecture
Algebraic combinatorics
Alternating sign matrix
Almost disjoint sets
Antichain
Arrangement of hyperplanes
Assignment problem
Quadratic assignment problem
Audioactive decay
B
Barcode
Matrix code
QR Code
Universal Product Code
Bell polynomials
Bertrand's ballot theorem
Binary matrix
Binomial theorem
Block design
Balanced incomplete block design (BIBD)
Symmetric balanced incomplete block design (SBIBD)
Partially balanced incomplete block designs (PBIBDs)
Block walking
Boolean satisfiability problem
2-satisfiability
3-satisfiability
Bracelet (combinatorics)
Bruck–Chowla–Ryser theorem
C
Catalan number
Cellular automaton
Collatz conjecture
Combination
Combinatorial design
Combinatorial number system
Combinatorial optimization
Combinatorial search
Constraint satisfaction problem
Conway's Game of Life
Cycles and fixed points
Cyclic order
Cyclic permutation
Cyclotomic identity
D
Data integrity
Alternating bit protocol
Checksum
Cyclic redundancy check
Luhn formula
Error detection
Error-detecting code
Error-detecting system
Message digest
Redundancy check
Summation check
De Bruijn sequence
Deadlock
Delannoy number
Dining philosophers problem
Mutual exclusion
Rendezvous problem
Derangement
Dickson's lemma
Dinitz conjecture
Discrete optimization
Dobinski's formula
E
Eight queens puzzle
Entropy coding
Enumeration
Algebraic enumeration
Combinatorial enumeration
Burnside's lemma
Erdős–Ko–Rado theorem
Euler number
F
Faà di Bruno's formula
Factorial number system
Family of sets
Faulhaber's formula
Fifteen puzzle
Finite geometry
Finite intersection property
G
Game theory
Combinatorial game theory
Combinatorial game theory (history)
Combinatorial game theory (pedagogy)
Star (game theory)
Zero game, fuzzy game
Dots and Boxes
Impartial game
Digital sum
Nim
Nimber
Sprague–Grundy theorem
Partizan game
Solved board games
Col ga
|
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance
|
Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are perturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. Nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI).
The most commonly used nuclei are 1H and 13C, although isotopes of many other elements, such as 19F, 31P, and 15N, can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic nuclear magnetic moment and angular momentum. This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore NMR-inactive.
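To make the field–frequency relationship concrete, here is a small Python sketch estimating Larmor frequencies ν = (γ/2π)·B0 for the two work-horse nuclei. The gyromagnetic ratios are approximate textbook values quoted from memory, so treat the output as indicative only; it does, however, land in the 60–1000 MHz range quoted above.

```python
# Approximate gyromagnetic ratios (gamma / 2*pi) in MHz per tesla.
# Commonly quoted textbook values, given here purely for illustration.
GAMMA_OVER_2PI_MHZ_PER_T = {
    "1H": 42.58,
    "13C": 10.71,
}

def larmor_frequency_mhz(nucleus: str, b0_tesla: float) -> float:
    """Approximate resonance frequency in MHz for a nucleus in a field of b0_tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T[nucleus] * b0_tesla

for b0 in (9.4, 14.1, 21.1):  # typical high-field magnets, in tesla
    print(f"B0 = {b0:5.1f} T:",
          f"1H ~ {larmor_frequency_mhz('1H', b0):6.1f} MHz,",
          f"13C ~ {larmor_frequency_mhz('13C', b0):6.1f} MHz")
```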
A key feature of NMR is that the resonant frequency of a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. Since the resolutio
|
https://en.wikipedia.org/wiki/Unix%20architecture
|
A Unix architecture is a computer operating-system architecture that embodies the Unix philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or the similar POSIX IEEE standard. No single published standard describes all Unix architecture computer operating systems — this is in part a legacy of the Unix wars.
Description
There are many systems which are Unix-like in their architecture. Notable among these are the Linux distributions. The distinctions between Unix and Unix-like systems have been the subject of heated legal battles, and the holders of the UNIX brand, The Open Group, object to "Unix-like" and similar terms.
For distinctions between SUS branded UNIX architectures and other similar architectures, see Unix-like.
Kernel
A Unix kernel — the core or key components of the operating system — consists of many kernel subsystems like process management, scheduling, file management, device management, network management, memory management, and dealing with interrupts from hardware devices.
Each of the subsystems has some features:
Concurrency: As Unix is a multiprocessing OS, many processes run concurrently to improve the performance of the system.
Virtual memory (VM): the memory-management subsystem implements virtual memory, so users need not worry about how the size of an executable program compares with the amount of physical RAM.
Paging: a technique used to minimize internal as well as external fragmentation in physical memory.
Virtual file system (VFS): an abstraction layer that hides the complexities of the individual file systems from the user; a user can use the same standard file-system-related calls to access different file systems.
The kernel provides these and other basic services: interrupt and trap handling, separation between user and system space, system calls, scheduling, timer and clock handling, file descriptor management.
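To illustrate the uniform interface that the VFS and system-call layer provide, the sketch below issues the same generic calls regardless of which concrete file system backs each path. The paths are hypothetical and system-dependent; adjust them for a given machine.

```python
import os

# The same generic calls work whether a path is backed by ext4, tmpfs, NFS,
# or any other file system mounted into the single directory tree.
# The example paths below are hypothetical; adjust them to your system.
for path in ("/etc/hostname", "/tmp", "/proc/version"):
    try:
        st = os.stat(path)   # same system-call interface for every file system
        print(path, "size:", st.st_size, "mode:", oct(st.st_mode))
    except OSError as exc:
        print(path, "not accessible:", exc)
```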
Features
Some key features of the Unix architecture concept are:
Unix systems use a central
|
https://en.wikipedia.org/wiki/List%20of%20fractals%20by%20Hausdorff%20dimension
|
According to Benoit Mandelbrot, "A fractal is by definition a set for which the Hausdorff-Besicovitch dimension strictly exceeds the topological dimension."
Presented here is a list of fractals, ordered by increasing Hausdorff dimension, to illustrate what it means for a fractal to have a low or a high dimension.
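For exactly self-similar fractals the Hausdorff dimension coincides with the similarity dimension log N / log(1/r), where N is the number of copies and r the scaling ratio of each copy. The short Python sketch below computes a few classic values as an orientation for the lists that follow.

```python
from math import log

def similarity_dimension(copies: int, scale: float) -> float:
    """Similarity dimension log(N) / log(1/r) for a self-similar set
    made of `copies` pieces, each scaled down by the factor `scale`."""
    return log(copies) / log(1 / scale)

examples = {
    "Cantor set":          (2, 1/3),   # 2 copies scaled by 1/3 -> ~0.6309
    "Koch curve":          (4, 1/3),   # 4 copies scaled by 1/3 -> ~1.2619
    "Sierpinski triangle": (3, 1/2),   # 3 copies scaled by 1/2 -> ~1.5850
}
for name, (n, r) in examples.items():
    print(f"{name:20s} dimension ~ {similarity_dimension(n, r):.4f}")
```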
Deterministic fractals
Random and natural fractals
See also
Fractal dimension
Hausdorff dimension
Scale invariance
Notes and references
Further reading
External links
The fractals on Mathworld
Other fractals on Paul Bourke's website
Soler's Gallery
Fractals on mathcurve.com
1000fractales.free.fr - Project gathering fractals created with various software
Fractals unleashed
IFStile - software that computes the dimension of the boundary of self-affine tiles
Hausdorff Dimension
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Species%20description
|
A species description is a formal scientific description of a newly encountered species, usually in the form of a scientific paper. Its purpose is to give a clear description of a new species of organism and explain how it differs from species that have been described previously or are related. To be considered valid, a species description must follow guidelines established over time. Naming requires adherence to respective codes, for example: in zoology, the International Code of Zoological Nomenclature (ICZN); plants, the International Code of Nomenclature for algae, fungi, and plants (ICN); viruses, the International Committee on Taxonomy of Viruses (ICTV). The species description often contains photographs or other illustrations of type material along with a note on where they are deposited. The publication in which the species is described gives the new species a formal scientific name. Some 1.9 million species have been identified and described, out of some 8.7 million that may actually exist. Millions more have become extinct throughout the existence of life on Earth.
Naming process
A name of a new species becomes valid (available in zoological terminology) with the date of publication of its formal scientific description. Once the scientist has performed the necessary research to determine that the discovered organism represents a new species, the scientific results are summarized in a scientific manuscript, either as part of a book or as a paper to be submitted to a scientific journal.
A scientific species description must fulfill several formal criteria specified by the nomenclature codes, e.g. selection of at least one type specimen. These criteria are intended to ensure that the species name is clear and unambiguous, for example, the International Code of Zoological Nomenclature states that "Authors should exercise reasonable care and consideration in forming new names to ensure that they are chosen with their subsequent users in mind and that, as f
|
https://en.wikipedia.org/wiki/The%20Seven%20Pillars%20of%20Life
|
The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS.
The Seven Pillars
Program
Koshland defines "Program" as an "organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time." In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms.
Improvisation
"Improvisation" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on earth is natural selection.
Compartmentalization
"Compartmentalization" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments.
Energy
Because living systems involve net movement in terms of chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis.
Regeneration
"Regeneration" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the large
|
https://en.wikipedia.org/wiki/Mini-STX
|
Mini-STX (mSTX, Mini Socket Technology EXtended, originally "Intel 5x5") is a computer motherboard form factor that was released by Intel in 2015.
These motherboards measure 147 mm by 140 mm (5.8" x 5.5"), making them larger than "4x4" NUC (102 x 102 mm / 4.0" x 4.0") and Nano-ITX (120 x 120 mm / 4.7" x 4.7") boards, but notably smaller than the more common Mini-ITX (170 x 170 mm / 6.7" x 6.7") boards. Unlike these standards, which use a square shape, the Mini-STX form factor is 7 mm longer from front to rear, making it slightly rectangular.
Mini-STX design elements
The Mini-STX design suggests (but does not require) support for:
Socketed processors (e.g. LGA or PGA CPUs)
Onboard power regulation circuitry, enabling direct DC power input
IO ports embedded on the front and rear of the motherboard (akin to NUC, but unlike typical motherboards which often use headers instead to connect built-in ports on enclosures)
Adoption by manufacturers
This motherboard form factor is still not in particularly common use with consumer-PC manufacturers, although there are a few offerings:
ASRock offers both DeskMini kits (that use mini-STX boards) and standalone motherboards,
Asus offers VivoMini kits (that use mini-STX boards) and standalone motherboards,
Gigabyte offers a few motherboards, and
industrial PC suppliers (e.g. Kontron, Iesy, ASRock Industrial) also provide some options for mini-STX equipment.
Derivatives
ASRock developed a derivative of mini-STX, dubbed micro-STX, for their 'DeskMini GTX/RX' small form-factor PCs and industrial motherboards.
Micro-STX adds an MXM slot which allows the use of special PCI Express expansion cards, including graphics or machine learning accelerators, but extends the width of the board by roughly two inches, resulting in measurements of 147 x 188 mm (5.8" x 7.4")
|
https://en.wikipedia.org/wiki/List%20of%20computability%20and%20complexity%20topics
|
This is a list of computability and complexity topics, by Wikipedia page.
Computability theory is the part of the theory of computation that deals with what can be computed, in principle. Computational complexity theory deals with how hard computations are, in quantitative terms, both with upper bounds (algorithms whose complexity in the worst cases, as use of computing resources, can be estimated), and from below (proofs that no procedure to carry out some task can be very fast).
For more abstract foundational matters, see the list of mathematical logic topics. See also list of algorithms, list of algorithm general topics.
Calculation
Lookup table
Mathematical table
Multiplication table
Generating trigonometric tables
History of computers
Multiplication algorithm
Peasant multiplication
Division by two
Exponentiating by squaring
Addition chain
Scholz conjecture
Presburger arithmetic
Computability theory: models of computation
Arithmetic circuits
Algorithm
Procedure, recursion
Finite state automaton
Mealy machine
Minsky register machine
Moore machine
State diagram
State transition system
Deterministic finite automaton
Nondeterministic finite automaton
Generalized nondeterministic finite automaton
Regular language
Pumping lemma
Myhill-Nerode theorem
Regular expression
Regular grammar
Prefix grammar
Tree automaton
Pushdown automaton
Context-free grammar
Büchi automaton
Chomsky hierarchy
Context-sensitive language, context-sensitive grammar
Recursively enumerable language
Register machine
Stack machine
Petri net
Post machine
Rewriting
Markov algorithm
Term rewriting
String rewriting system
L-system
Knuth–Bendix completion algorithm
Star height
Star height problem
Generalized star height problem
Cellular automaton
Rule 110 cellular automaton
Conway's Game of Life
Langton's ant
Edge of chaos
Turing machine
Deterministic Turing machine
Non-deterministic Turing machine
Alternating automaton
Alternating Turing machine
Turing-complete
Turing tarpit
Oracle machine
Lambda
|
https://en.wikipedia.org/wiki/Retinalophototroph
|
A retinalophototroph is one of two different types of phototrophs, and is named for the retinal-binding proteins (microbial rhodopsins) it utilizes for cell signaling and for converting light into energy. Like all photoautotrophs, retinalophototrophs absorb photons to initiate their cellular processes. However, unlike chlorophyll-based phototrophs, retinalophototrophs do not use chlorophyll or an electron transport chain to power their chemical reactions. This means retinalophototrophs are incapable of traditional carbon fixation, a fundamental photosynthetic process that transforms inorganic carbon (carbon contained in molecular compounds like carbon dioxide) into organic compounds. For this reason, experts consider them to be less efficient than their chlorophyll-using counterparts, chlorophototrophs.
Energy conversion
Retinalophototrophs achieve adequate energy conversion via a proton-motive force. In retinalophototrophs, proton-motive force is generated from rhodopsin-like proteins, primarily bacteriorhodopsin and proteorhodopsin, acting as proton pumps along a cellular membrane.
To capture photons needed for activating a protein pump, retinalophototrophs employ organic pigments known as carotenoids, namely beta-carotenoids. Beta-carotenoids present in retinalophototrophs are unusual candidates for energy conversion, but they possess high Vitamin-A activity necessary for retinaldehyde, or retinal, formation. Retinal, a chromophore molecule configured from Vitamin A, is formed when bonds between carotenoids are disrupted in a process called cleavage. Due to its acute light sensitivity, retinal is ideal for activation of proton-motive force and imparts a unique purple coloration to retinalophototrophs. Once retinal absorbs enough light, it isomerizes, thereby forcing a conformational (i.e., structural) change among the covalent bonds of the rhodopsin-like proteins. Upon activation, these proteins mimic a gateway, allowing passage of ions to create an electrochemical gradient b
|
https://en.wikipedia.org/wiki/Thermal%20death%20time
|
Thermal death time is how long it takes to kill a specific bacterium at a specific temperature. It was originally developed for food canning and has found applications in cosmetics, producing salmonella-free feeds for animals (e.g. poultry) and pharmaceuticals.
History
In 1895, William Lyman Underwood of the Underwood Canning Company, a food company founded in 1822 at Boston, Massachusetts and later relocated to Watertown, Massachusetts, approached William Thompson Sedgwick, chair of the biology department at the Massachusetts Institute of Technology, about losses his company was suffering due to swollen and burst cans despite the newest retort technology available. Sedgwick gave his assistant, Samuel Cate Prescott, a detailed assignment on what needed to be done. Prescott and Underwood worked on the problem every afternoon from late 1895 to late 1896, focusing on canned clams. They first discovered that the clams contained heat-resistant bacterial spores that were able to survive the processing; then that these spores' presence depended on the clams' living environment; and finally that these spores would be killed if processed at 250 ˚F (121 ˚C) for ten minutes in a retort.
These studies prompted the similar research of canned lobster, sardines, peas, tomatoes, corn, and spinach. Prescott and Underwood's work was first published in late 1896, with further papers appearing from 1897 to 1926. This research, though important to the growth of food technology, was never patented. It would pave the way for thermal death time research that was pioneered by Bigelow and C. Olin Ball from 1921 to 1936 at the National Canners Association (NCA).
Bigelow and Ball's research focused on the thermal death time of Clostridium botulinum (C. botulinum), which was determined in the early 1920s. Research continued with inoculated canning pack studies that were published by the NCA in 1968.
Mathematical formulas
Thermal death time can be determined one of two ways: 1) by using graphs
|
https://en.wikipedia.org/wiki/Laboratory%20automation
|
Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible.
The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories.
The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications.
History
At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the s
|
https://en.wikipedia.org/wiki/Inequality%20%28mathematics%29
|
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. There are several different notations used to represent different kinds of inequalities:
The notation a < b means that a is less than b.
The notation a > b means that a is greater than b.
In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded.
In contrast to strict inequalities, there are two types of inequality relations that are not strict:
The notation a ≤ b or a ⩽ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b).
The notation a ≥ b or a ⩾ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b).
The relation not greater than can also be represented by a ≯ b, the symbol for "greater than" bisected by a slash, "not". The same is true for not less than and a ≮ b.
The notation a ≠ b means that a is not equal to b; this inequation is sometimes considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be members of an ordered set.
In the engineering sciences, a less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude.
The notation a ≪ b means that a is much less than b.
The notation a ≫ b means that a is much greater than b.
This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of ultrarelativistic limit in physics).
In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc.
Properties on the number line
Inequalities are governed by the following properties. All of these properties
|
https://en.wikipedia.org/wiki/Richards%27%20theorem
|
Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that for
R(s) = (k Z(s) − s Z(k)) / (k Z(k) − s Z(s)),
if Z(s) is a positive-real function (PRF) then R(s) is a PRF for all real, positive values of k.
The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s.
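A quick symbolic check, assuming the form of the transformation given above, can be done with sympy; the example impedance Z(s) = s + 1 (a 1 Ω resistor in series with a 1 H inductor) is positive-real, and the transform with k = 1 comes out as another positive-real function.

```python
import sympy as sp

s = sp.symbols('s')

def richards_transform(Z, k):
    """Richards transformation R(s) = (k*Z(s) - s*Z(k)) / (k*Z(k) - s*Z(s)),
    using the form of the theorem as stated above."""
    Zk = Z.subs(s, k)
    return sp.simplify((k * Z - s * Zk) / (k * Zk - s * Z))

Z = s + 1                      # positive-real: series R-L impedance
R = richards_transform(Z, 1)
print(R)                       # simplifies to 1/(s + 2), again positive-real
```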
Proof
The function R(s) defined above, where Z(s) is a PRF, k is a positive real constant, and s is the complex frequency variable, can be written as,
where,
Since is PRF then
is also PRF. The zeroes of this function are the poles of . Since a PRF can have no zeroes in the right-half s-plane, then can have no poles in the right-half s-plane and hence is analytic in the right-half s-plane.
Let
Then the magnitude of is given by,
Since the PRF condition requires that for all then for all . The maximum magnitude of occurs on the axis because is analytic in the right-half s-plane. Thus for .
Let , then the real part of is given by,
Because for then for and consequently must be a PRF.
Richards' theorem can also be derived from Schwarz's lemma.
Uses
The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune, who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form
R(s) = (Z(s) − s Z(1)) / (Z(1) − s Z(s)),
that is, the special case where k = 1.
The theorem (with the more general case of k being able to take on any positive value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott–Duffin synthesis, Z(s) represents the electrical network to be synthesised and R(s) is another (unknown) network incorporated within it (R(s) is unitless, but Z(s) has units of impe
|
https://en.wikipedia.org/wiki/Dataflow%20architecture
|
Dataflow architecture is a dataflow-based computer architecture that directly contrasts with the traditional von Neumann architecture or control flow architecture. Dataflow architectures have no program counter, in concept: the executability and execution of instructions are determined solely by the availability of input arguments to the instructions, so the order of instruction execution may be hard to predict.
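As a toy illustration of that firing rule (a sketch only, not modelled on any real machine), the Python below executes a small graph of instructions in whatever order their operands happen to become available:

```python
# Toy dataflow evaluator: an instruction "fires" as soon as all of its
# input tokens are available, with no program counter ordering execution.

graph = {
    # name: (operation, [input token names])
    "t1": (lambda a, b: a + b, ["x", "y"]),
    "t2": (lambda a, b: a * b, ["x", "t1"]),
    "t3": (lambda a: a - 1,    ["t2"]),
}

tokens = {"x": 3, "y": 4}          # initial input tokens
pending = dict(graph)

while pending:
    # Fire every instruction whose inputs are all present.
    ready = [n for n, (_, ins) in pending.items() if all(i in tokens for i in ins)]
    if not ready:
        raise RuntimeError("deadlock: no instruction has all of its inputs")
    for name in ready:
        op, ins = pending.pop(name)
        tokens[name] = op(*(tokens[i] for i in ins))

print(tokens)   # {'x': 3, 'y': 4, 't1': 7, 't2': 21, 't3': 20}
```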
Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as in digital signal processing, network routing, graphics processing, telemetry, and more recently in data warehousing, and artificial intelligence (as: polymorphic dataflow Convolution Engine, structure-driven, dataflow scheduling). It is also very relevant in many software architectures today including database engine designs and parallel computing frameworks.
Synchronous dataflow architectures tune to match the workload presented by real-time data path applications such as wire speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor load balancing, synchronization and accesses to common resources.
Meanwhile, there is a clash of terminology, since the term dataflow is used for a subarea of parallel programming: for dataflow programming.
History
Hardware architectures for dataflow was a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of MIT pioneered the field of static dataflow architectures while the Manchester Dataflow Machine and MIT Tagged Token architecture were major projects in dynamic dataflow.
The research, however, never overcame the problems related to:
Efficiently broadcasting data tokens in a massively parallel system.
Efficiently dispatching instruction tokens in a massively parallel system.
Building content-addressable memory (CAM)
|
https://en.wikipedia.org/wiki/Pin%20compatibility
|
In electronics, pin-compatible devices are electronic components, generally integrated circuits or expansion cards, sharing a common footprint and with the same functions assigned or usable on the same pins. Pin compatibility is a property desired by systems integrators as it allows a product to be updated without redesigning printed circuit boards, which can reduce costs and decrease time to market.
Although devices which are pin-compatible share a common footprint, they are not necessarily electrically or thermally compatible. As a result, manufacturers often specify devices as being either pin-to-pin or drop-in compatible. Pin-compatible devices are generally produced to allow upgrading within a single product line, to allow end-of-life devices to be replaced with newer equivalents, or to compete with the equivalent products of other manufacturers.
Pin-to-pin compatibility
Pin-to-pin compatible devices share an assignment of functions to pins, but may have differing electrical characteristics (supply voltages, or oscillator frequencies) or thermal characteristics (TDPs, reflow curves, or temperature tolerances). As a result, their use in a system may require that portions of the system, such as its power delivery subsystem, be adapted to fit the new component.
A common example of pin-to-pin compatible devices which may not be electrically compatible are the 7400 series integrated circuits. The 7400 series devices have been produced on a number of different manufacturing processes, but have retained the same pinouts throughout. For example, all 7405 devices provide six NOT gates (or inverters) but may have incompatible supply voltage tolerances.
7405 – Standard TTL, 4.75–5.25 V.
74C05 – CMOS, 4–15 V.
74LV05 – Low-voltage CMOS, 2.0–5.5 V.
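A systems integrator choosing among such parts essentially checks whether a candidate's electrical envelope fits the existing design. The Python sketch below performs that check for the supply ranges listed above; the 3.3 V board supply is a hypothetical example.

```python
# Supply-voltage ranges (volts) for the pin-to-pin compatible inverter
# parts listed above.
supply_ranges = {
    "7405":   (4.75, 5.25),
    "74C05":  (4.0, 15.0),
    "74LV05": (2.0, 5.5),
}

def supply_ok(part: str, board_supply_v: float) -> bool:
    """True if the board's supply voltage lies inside the part's rated range."""
    lo, hi = supply_ranges[part]
    return lo <= board_supply_v <= hi

board_supply = 3.3   # hypothetical board designed for 3.3 V logic
for part in supply_ranges:
    print(part, "usable at", board_supply, "V:", supply_ok(part, board_supply))
```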
In other cases, particularly with computers, devices may be pin-to-pin compatible but made otherwise incompatible as a result of market segmentation. For example, Intel Skylake desktop-class Core and Xeon E3v5 processor
|
https://en.wikipedia.org/wiki/Thermal%20analysis
|
Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature. Several methods are commonly used – these are distinguished from one another by the property which is measured:
Dielectric thermal analysis: dielectric permittivity and loss factor
Differential thermal analysis: temperature difference versus temperature or time
Differential scanning calorimetry: heat flow changes versus temperature or time
Dilatometry: volume changes with temperature change
Dynamic mechanical analysis: measures storage modulus (stiffness) and loss modulus (damping) versus temperature, time and frequency
Evolved gas analysis: analysis of gases evolved during heating of a material, usually decomposition products
Isothermal titration calorimetry
Isothermal microcalorimetry
Laser flash analysis: thermal diffusivity and thermal conductivity
Thermogravimetric analysis: mass change versus temperature or time
Thermomechanical analysis: dimensional changes versus temperature or time
Thermo-optical analysis: optical properties
Derivatography: A complex method in thermal analysis
Simultaneous thermal analysis generally refers to the simultaneous application of thermogravimetry and differential scanning calorimetry to one and the same sample in a single instrument. The test conditions are perfectly identical for the thermogravimetric analysis and differential scanning calorimetry signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the simultaneous thermal analysis instrument to an Evolved Gas Analyzer like Fourier transform infrared spectroscopy or mass spectrometry.
Other, less common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. T
|
https://en.wikipedia.org/wiki/Microwave%20Imaging%20Radiometer%20with%20Aperture%20Synthesis
|
Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) is the major instrument on the Soil Moisture and Ocean Salinity satellite (SMOS). MIRAS employs a planar antenna composed of a central body (the so-called hub) and three telescoping, deployable arms, carrying 69 receivers in total. Each receiver is composed of one Lightweight Cost-Effective Front-end (LICEF) module, which detects radiation in the microwave L-band, in both horizontal and vertical polarizations. The apertures of the LICEF detectors, arranged in a plane on MIRAS, point directly toward the Earth's surface as the satellite orbits. The arrangement and orientation of MIRAS make the instrument a 2-D interferometric radiometer that generates brightness temperature images, from which both geophysical variables (soil moisture and ocean salinity) are computed. The salinity measurement requires demanding performance of the instrument in terms of calibration and stability. The MIRAS instrument's prime contractor was EADS CASA Espacio, which manufactured the payload of SMOS under ESA contract.
LICEF
The LICEF detector is composed of a round patch antenna element, with 2 pairs of probes for orthogonal linear polarisations, feeding two receiver channels in a compact lightweight package behind the antenna. It picks up thermal radiation emitted by the Earth near 1.4 GHz in the microwave L-band, amplifies it 100 dB, and digitises it with 1-bit quantisation.
|
https://en.wikipedia.org/wiki/Ethnocomputing
|
Ethnocomputing is the study of the interactions between computing and culture. It is carried out through theoretical analysis, empirical investigation, and design implementation. It includes research on the impact of computing on society, as well as the reverse: how cultural, historical, personal, and societal origins and surroundings cause and affect the innovation, development, diffusion, maintenance, and appropriation of computational artifacts or ideas. From the ethnocomputing perspective, no computational technology is culturally "neutral," and no cultural practice is a computational void. Instead of considering culture to be a hindrance for software engineering, culture should be seen as a resource for innovation and design.
Subject matter
Social categories for ethnocomputing include:
Indigenous computing: In some cases, ethnocomputing "translates" from indigenous culture to high tech frameworks: for example, analyzing the African board game Owari as a one-dimensional cellular automaton.
Social/historical studies of computing: In other cases ethnocomputing seeks to identify the social, cultural, historical, or personal dimensions of high tech computational ideas and artifacts: for example, the relationship between the Turing Test and Alan Turing's closeted gay identity.
Appropriation in computing: lay persons who did not participate in the original design of a computing system can still affect it by modifying its interpretation, use, or structure. Such "modding" may be as subtle as the keyboard-character "emoticons" created through lay use of email, or as blatant as the stylized customization of computer cases.
Equity tools: a software "Applications Quest" has been developed for generating a "diversity index" that allows consideration of multiple identity characteristics in college admissions.
Technical categories in ethnocomputing include:
Organized structures and models used to represent information (data structures)
Ways of manipulating the organiz
|
https://en.wikipedia.org/wiki/Vector%20notation
|
In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space.
For representing a vector, the common typographic convention is lower case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in v⃗.
In advanced mathematics, vectors are often represented in a simple italic type, like any variable.
History
In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments which resulted in the concept of a vector as an equivalence class of such segments.
The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis.
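In modern terms that identity is easy to check numerically; the small numpy sketch below multiplies two pure quaternions and confirms that the scalar part equals −p.q and the vector part equals p×q (the example vectors are arbitrary).

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions given as (scalar, 3-vector) pairs."""
    a1, v1 = q1
    a2, v2 = q2
    return (a1 * a2 - np.dot(v1, v2),
            a1 * v2 + a2 * v1 + np.cross(v1, v2))

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, -1.0, 0.5])

# Pure quaternions have zero scalar part.
scalar, vector = quat_mul((0.0, p), (0.0, q))
print(scalar, -np.dot(p, q))    # both equal -3.5
print(vector, np.cross(p, q))   # both equal [ 4.  11.5 -9. ]
```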
In 1891, Oliver Heaviside argued for Clarendon (boldface) type to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell.
In 1912, J.B. Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society. Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication.
Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians wer
|
https://en.wikipedia.org/wiki/Physical%20object
|
In common usage and classical mechanics, a physical object or physical body (or simply an object or body) is a collection of matter within a defined contiguous boundary in three-dimensional space. The boundary surface must be defined and identified by the properties of the material, although it may change over time. The boundary is usually the visible or tangible surface of the object. The matter in the object is constrained (to a greater or lesser degree) to move as one object. The boundary may move in space relative to other objects that it is not attached to (through translation and rotation). An object's boundary may also deform and change over time in other ways.
Also in common usage, an object is not constrained to consist of the same collection of matter. Atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However the laws of physics only apply directly to objects that consist of the same collection of matter.
In physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space.
Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted.
Examples of models of physical bodies include, but are not limited to a particle, several interacting smaller bodies (particulate or otherwise), and continuous media.
The common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. In modern physics, "extension" is understood in terms of the spacetime: roughly s
|
https://en.wikipedia.org/wiki/Monoclonality
|
In biology, monoclonality refers to the state of a line of cells that have been derived from a single clonal origin. Thus, "monoclonal cells" can be said to form a single clone. The term monoclonal comes from the Greek monos ('single, alone') and klōn ('twig, shoot'), the root of the word clone.
The process of replication can occur in vivo, or may be stimulated in vitro for laboratory manipulations. The use of the term typically implies that there is some method to distinguish between the cells of the original population from which the single ancestral cell is derived, such as a random genetic alteration, which is inherited by the progeny.
Common usages of this term include:
Monoclonal antibody: a single hybridoma cell, which by chance includes the appropriate V(D)J recombination to produce the desired antibody, is cloned to produce a large population of identical cells. In informal laboratory jargon, the monoclonal antibodies isolated from cell culture supernatants of these hybridoma clones (hybridoma lines) are simply called monoclonals.
Monoclonal neoplasm (tumor): A single aberrant cell which has undergone carcinogenesis reproduces itself into a cancerous mass.
Monoclonal plasma cell (also called plasma cell dyscrasia): A single aberrant plasma cell which has undergone carcinogenesis reproduces itself, which in some cases is cancerous.
|
https://en.wikipedia.org/wiki/Vector%20potential
|
In vector calculus, a vector potential is a vector field whose curl is a given vector field. This is analogous to a scalar potential, which is a scalar field whose gradient is a given vector field.
Formally, given a vector field v, a vector potential is a vector field A such that ∇ × A = v.
Consequence
If a vector field v admits a vector potential A, then from the equality ∇ · (∇ × A) = 0 (the divergence of the curl is zero) one obtains
∇ · v = ∇ · (∇ × A) = 0,
which implies that v must be a solenoidal vector field.
Theorem
Let v be a solenoidal vector field on R3 which is twice continuously differentiable. Assume that v(y) decreases at least as fast as 1/‖y‖ for ‖y‖ → ∞. Define
A(x) = (1/4π) ∫ (∇y × v(y)) / ‖x − y‖ d3y,
where the integral is taken over all of R3. Then A is a vector potential for v, that is, ∇ × A = v. Here, ∇y × denotes the curl with respect to the variable y.
Substituting ∇ × v for the current density j of the retarded potential gives this formula; in other words, v corresponds to the H-field.
The integration domain can be restricted to any simply connected region Ω; that is, the field A′ below is also a vector potential of v:
A generalization of this theorem is the Helmholtz decomposition which states that any vector field can be decomposed as a sum of a solenoidal vector field and an irrotational vector field.
By analogy with the Biot–Savart law, the following also qualifies as a vector potential for v.
Substituting j (the current density) for v and H (the H-field) for A recovers the Biot–Savart law.
Let Ω be a star domain centered on the point p; then, translating Poincaré's lemma for differential forms into the language of vector fields, the following is also a vector potential for v:
Nonuniqueness
The vector potential admitted by a solenoidal field is not unique. If A is a vector potential for v, then so is A + ∇f,
where f is any continuously differentiable scalar function. This follows from the fact that the curl of the gradient is zero.
This nonuniqueness leads to a degree of freedom in the formulation of electrodynamics, or gauge freedom, and requires choosing a gauge.
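This gauge freedom is easy to verify symbolically. The short sympy sketch below uses an arbitrary example potential and an arbitrary gauge function (both invented for illustration) and checks that A and A + ∇f have the same curl.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Arbitrary example potential A and arbitrary smooth gauge function f.
A = (y * z) * N.i + (x * z**2) * N.j + (sp.sin(x) * y) * N.k
f = x**2 * y + sp.cos(z)

difference = curl(A + gradient(f)) - curl(A)   # should be the zero vector

print(sp.simplify(difference.dot(N.i)),
      sp.simplify(difference.dot(N.j)),
      sp.simplify(difference.dot(N.k)))        # 0 0 0
```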
See also
Fundamental theorem of vector calculus
Magnetic vector potentia
|
https://en.wikipedia.org/wiki/Network-Integrated%20Multimedia%20Middleware
|
The Network-Integrated Multimedia Middleware (NMM) is a flow graph based multimedia framework. NMM allows creating distributed multimedia applications: local and remote multimedia devices or software components can be controlled transparently and integrated into a common multimedia processing flow graph. NMM is implemented in C++, a programming language, and NMM-IDL, an interface description language (IDL). NMM is a set of cross-platform libraries and applications for the operating systems Linux, OS X, Windows, and others. A software development kit (SDK) is also provided.
NMM is released under dual-licensing. The Linux, OS X, and PS3 versions are distributed for free as open-source software under the terms and conditions of the GNU General Public License (GPL). The Windows version is distributed for free as binary version under the terms and conditions of the NMM Non-Commercial License (NMM-NCL). All NMM versions (i.e., for all supported operating systems) are also distributed under a commercial license with full warranty, which allows developing closed-source proprietary software atop NMM.
See also
Java Media Framework
DirectShow
QuickTime
Helix DNA
MPlayer
VLC media player (VLC)
Video wall
Sources
Linux gains open source multimedia middleware
KDE to gain cutting-edge multimedia technology
Multimedia barriers drop at CeBIT in March
A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing
External links
NMM homepage
Computer networking
|
https://en.wikipedia.org/wiki/Optical%20properties%20of%20water%20and%20ice
|
The refractive index of water at 20 °C for visible light is 1.33. The refractive index of normal ice is 1.31 (from List of refractive indices). In general, an index of refraction is a complex number with real and imaginary parts, where the latter indicates the strength of absorption loss at a particular wavelength. In the visible part of the electromagnetic spectrum, the imaginary part of the refractive index is very small. However, water and ice absorb in infrared and close the infrared atmospheric window thereby contributing to the greenhouse effect ...
The absorption spectrum of pure water is used in numerous applications, including light scattering and absorption by ice crystals and cloud water droplets, theories of the rainbow, determination of the single-scattering albedo, ocean color, and many others.
Quantitative description of the refraction index
Over the wavelengths from 0.2 μm to 1.2 μm, and over temperatures from −12 °C to 500 °C, the real part of the index of refraction of water can be calculated by the following empirical expression:
(n² − 1)/(n² + 2) · (1/ρ̄) = a0 + a1 ρ̄ + a2 T̄ + a3 λ̄² T̄ + a4/λ̄² + a5/(λ̄² − λ̄UV²) + a6/(λ̄² − λ̄IR²) + a7 ρ̄²
Where:
T̄ = T / T*,
ρ̄ = ρ / ρ*, and
λ̄ = λ / λ*,
and the appropriate constants are
a0 = 0.244257733, a1 = 0.00974634476, a2 = −0.00373234996, a3 = 0.000268678472, a4 = 0.0015892057, a5 = 0.00245934259, a6 = 0.90070492, a7 = −0.0166626219, T* = 273.15 K, ρ* = 1000 kg/m3, λ* = 589 nm, λ̄IR = 5.432937, and λ̄UV = 0.229202.
In the above expression, T is the absolute temperature of water (in K), λ is the wavelength of light in nm, ρ is the density of the water in kg/m3, and n is the real part of the index of refraction of water.
Density of water
In the above formula, the density of water also varies with temperature (t, in °C) and is defined by:
$$\rho(t) = e\left[1 - \frac{(t+a)^2\,(t+b)}{c\,(t+d)}\right]$$
with:
$a$ = −3.983035 °C
$b$ = 301.797 °C
$c$ = 522528.9 °C²
$d$ = 69.34881 °C
$e$ = 999.974950 kg/m³
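As a worked illustration of the two formulas above, the Python sketch below evaluates the density and then the real refractive index of water for a given wavelength and temperature. The constant names (A, T_STAR, LAM_UV and so on) are introduced here only for readability; they are not part of the original formulation.

import math

# Constants of the empirical dispersion formula quoted above
A = [0.244257733, 0.00974634476, -0.00373234996, 0.000268678472,
     0.0015892057, 0.00245934259, 0.90070492, -0.0166626219]
T_STAR, RHO_STAR, LAM_STAR = 273.15, 1000.0, 589.0   # K, kg/m^3, nm
LAM_UV, LAM_IR = 0.229202, 5.432937

def water_density(t_celsius):
    """Density of water in kg/m^3 from the temperature-only formula above."""
    a, b, c, d, e = -3.983035, 301.797, 522528.9, 69.34881, 999.974950
    return e * (1.0 - (t_celsius + a) ** 2 * (t_celsius + b) / (c * (t_celsius + d)))

def water_refractive_index(wavelength_nm, t_celsius):
    """Real part of the refractive index of water."""
    t_bar = (t_celsius + 273.15) / T_STAR
    rho_bar = water_density(t_celsius) / RHO_STAR
    lam_bar = wavelength_nm / LAM_STAR
    rhs = (A[0] + A[1] * rho_bar + A[2] * t_bar + A[3] * lam_bar ** 2 * t_bar
           + A[4] / lam_bar ** 2
           + A[5] / (lam_bar ** 2 - LAM_UV ** 2)
           + A[6] / (lam_bar ** 2 - LAM_IR ** 2)
           + A[7] * rho_bar ** 2)
    lhs = rho_bar * rhs                      # equals (n^2 - 1) / (n^2 + 2)
    return math.sqrt((1.0 + 2.0 * lhs) / (1.0 - lhs))

print(round(water_refractive_index(589.0, 20.0), 4))   # close to the 1.33 quoted above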
Refractive index (real and imaginary parts) for liquid water
The total refractive index of water is given as m = n + ik. The absorption coefficient α' is used in the Beer–Lambert law with the prime here signifying base e convention. Values are for water
|
https://en.wikipedia.org/wiki/Analogue%20electronics
|
Analogue electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word ανάλογος (analogos), meaning "proportional".
Analogue signals
An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle on top of a contracting and expanding box as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone).
The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system, 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees.
Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, while frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation (altering the phase of the carrier signal), are also used.
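To make the modulation idea concrete, the short Python sketch below builds an AM and an FM version of the same low-frequency message signal; the carrier frequency, deviation and sample rate are arbitrary values chosen only for illustration.

import numpy as np

fs = 44_100                                  # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)               # 50 ms of time
message = np.sin(2 * np.pi * 50 * t)         # 50 Hz "information" signal
fc = 2_000                                   # 2 kHz carrier

# Amplitude modulation: the carrier amplitude follows the message
am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: the instantaneous carrier frequency follows the message
deviation = 200                              # Hz of frequency deviation per unit message
phase = 2 * np.pi * np.cumsum(fc + deviation * message) / fs
fm = np.sin(phase)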
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in t
|
https://en.wikipedia.org/wiki/List%20of%20partial%20differential%20equation%20topics
|
This is a list of partial differential equation topics.
General topics
Partial differential equation
Nonlinear partial differential equation
list of nonlinear partial differential equations
Boundary condition
Boundary value problem
Dirichlet problem, Dirichlet boundary condition
Neumann boundary condition
Stefan problem
Wiener–Hopf problem
Separation of variables
Green's function
Elliptic partial differential equation
Singular perturbation
Cauchy–Kovalevskaya theorem
H-principle
Atiyah–Singer index theorem
Bäcklund transform
Viscosity solution
Weak solution
Loewy decomposition of linear differential equations
Specific partial differential equations
Broer–Kaup equations
Burgers' equation
Euler equations
Fokker–Planck equation
Hamilton–Jacobi equation, Hamilton–Jacobi–Bellman equation
Heat equation
Laplace's equation
Laplace operator
Harmonic function
Spherical harmonic
Poisson integral formula
Klein–Gordon equation
Korteweg–de Vries equation
Modified KdV–Burgers equation
Maxwell's equations
Navier–Stokes equations
Poisson's equation
Primitive equations (hydrodynamics)
Schrödinger equation
Wave equation
Numerical methods for PDEs
Finite difference
Finite element method
Finite volume method
Boundary element method
Multigrid
Spectral method
Computational fluid dynamics
Alternating direction implicit
Related areas of mathematics
Calculus of variations
Harmonic analysis
Ordinary differential equation
Sobolev space
Partial differential equations
|
https://en.wikipedia.org/wiki/Address%20space
|
In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity.
For software programs to save and retrieve stored data, each datum must have an address where it can be located. The number of address spaces available depends on the underlying address structure, which is usually limited by the computer architecture being used. Often an address space in a system with virtual memory corresponds to a highest level translation table, e.g., a segment table in IBM System/370.
Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous within the address space. For a person's physical address, the address space would be a combination of locations, such as a neighborhood, town, city, or country. Some elements of a data address space may be the same, but if any element in the address is different, addresses in said space will reference different entities. For example, there could be multiple buildings at the same address of "32 Main Street" but in different towns, demonstrating that different towns have different, although similarly arranged, street address spaces.
An address space usually provides (or allows) a partitioning to several regions according to the mathematical structure it has. In the case of total order, as for memory addresses, these are simply chunks. Like the hierarchical design of postal addresses, some nested domain hierarchies appear as a directed ordered tree, such as with the Domain Name System or a directory structure. In the Internet, the Internet Assigned Numbers Authority (IANA) allocates ranges of IP addresses to various registries so each can manage their parts of the global Internet address space.
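As a toy illustration of such partitioning, the sketch below splits a flat 32-bit memory address into a page number and an offset; the 4 KiB page size is an assumption made for the example, not something prescribed above.

PAGE_BITS = 12                         # 4 KiB pages (assumed for illustration)
PAGE_SIZE = 1 << PAGE_BITS

def split_address(vaddr: int) -> tuple[int, int]:
    """Partition a 32-bit virtual address into (page number, offset within page)."""
    page_number = vaddr >> PAGE_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    return page_number, offset

print(split_address(0x004012AB))       # (1025, 683), i.e. page 0x401, offset 0x2AB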
Examples
Uses of addresses include, but are not limited to the following:
Memory addresses for main memory, memory-mapped I/O, as well as for virtual memory;
Device
|
https://en.wikipedia.org/wiki/Bilinear%20time%E2%80%93frequency%20distribution
|
Bilinear time–frequency distributions, or quadratic time–frequency distributions, arise in a sub-field of signal analysis and signal processing called time–frequency signal processing, and in the statistical analysis of time series data. Such methods are used where one needs to deal with a situation where the frequency composition of a signal may be changing over time; this sub-field used to be called time–frequency signal analysis, and is now more often called time–frequency signal processing due to the progress in applying these methods to a wide range of signal-processing problems.
Background
Methods for analysing time series, in both signal analysis and time series analysis, have been developed as essentially separate methodologies applicable to, and based in, either the time or the frequency domain. A mixed approach is required in time–frequency analysis techniques, which are especially effective in analyzing non-stationary signals, whose frequency distribution and magnitude vary with time; examples of these are acoustic signals. Classes of "quadratic time–frequency distributions" (or "bilinear time–frequency distributions") are used for time–frequency signal analysis. This class is similar in formulation to Cohen's class distribution function, which was used in 1966 in the context of quantum mechanics. This distribution function is mathematically similar to a generalized time–frequency representation which utilizes bilinear transformations. Compared with other time–frequency analysis techniques, such as the short-time Fourier transform (STFT), the bilinear transformation (or quadratic time–frequency distribution) may not have higher clarity for most practical signals, but it provides an alternative framework to investigate new definitions and new methods. While it does suffer from an inherent cross-term contamination when analyzing multi-component signals, by using carefully chosen window functions the interference can be significantly mitigated, at the expens
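For concreteness, here is a minimal Python sketch of one member of this quadratic class, the discrete Wigner–Ville distribution, applied to a chirp. It is a bare-bones illustration (no smoothing window, input assumed analytic), not an implementation taken from this article.

import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a (preferably analytic) signal x.

    Returns an (N, N) array: rows are time instants, columns span 0..fs/2,
    with frequency bin k corresponding to k * fs / (2 N).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)                 # largest admissible lag at time n
        m = np.arange(-mmax, mmax + 1)
        r = np.zeros(N, dtype=complex)
        r[m % N] = x[n + m] * np.conj(x[n - m])  # instantaneous autocorrelation
        W[n] = np.fft.fft(r).real
    return W

# A linear chirp concentrates along its instantaneous frequency in the distribution.
t = np.arange(256) / 256.0
chirp = np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))
tfd = wigner_ville(chirp)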
|
https://en.wikipedia.org/wiki/Shearlet
|
In applied mathematical analysis, shearlets are a multiscale framework which allows efficient encoding of anisotropic features in multivariate problem classes. Originally, shearlets were introduced in 2006 for the analysis and sparse approximation of functions in $L^2(\mathbb{R}^2)$. They are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena.
Shearlets are constructed by parabolic scaling, shearing, and translation applied to a few generating functions. At fine scales, they are essentially supported within skinny and directional ridges following the parabolic scaling law, which reads length² ≈ width. Similar to wavelets, shearlets arise from the affine group and allow a unified treatment of the continuum and digital situation leading to faithful implementations. Although they do not constitute an orthonormal basis for $L^2(\mathbb{R}^2)$, they still form a frame allowing stable expansions of arbitrary functions in $L^2(\mathbb{R}^2)$.
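The construction just described (parabolic scaling, shearing and translation of a generator) can be sketched in a few lines of Python. The generating function psi below is a toy stand-in chosen only for illustration; it is not one of the generators used in the shearlet literature.

import numpy as np

def psi(x, y):
    """Toy generating function: oscillatory in x, decaying in both directions."""
    return np.cos(5 * x) * np.exp(-(x ** 2 + y ** 2))

def shearlet_atom(x, y, a, s, t1, t2):
    """psi_(a,s,t)(x, y) = a^(-3/4) * psi(A_a^-1 S_s^-1 ((x, y) - t)).

    A_a = diag(a, sqrt(a)) is the parabolic scaling matrix and
    S_s = [[1, s], [0, 1]] is the shear matrix.
    """
    u, v = x - t1, y - t2          # translation
    u = u - s * v                  # apply S_s^{-1}
    return a ** (-0.75) * psi(u / a, v / np.sqrt(a))   # apply A_a^{-1} and normalize

# One atom at a fine scale with a moderate shear, centred at the origin
xx, yy = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(-2, 2, 256))
atom = shearlet_atom(xx, yy, a=0.25, s=0.5, t1=0.0, t2=0.0)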
One of the most important properties of shearlets is their ability to provide optimally sparse approximations (in a precise sense of optimality) for cartoon-like functions $f$. In imaging sciences, cartoon-like functions serve as a model for anisotropic features: they are compactly supported in $[0,1]^2$ and are $C^2$-smooth apart from a closed piecewise $C^2$ singularity curve with bounded curvature. The decay rate of the $L^2$-error of the $N$-term shearlet approximation $f_N$, obtained by taking the $N$ largest coefficients from the shearlet expansion, is in fact optimal up to a log-factor:
$$\|f - f_N\|_{L^2}^2 \le C\, N^{-2} (\log N)^3 \quad\text{as } N \to \infty,$$
where the constant $C$ depends only on the maximum curvature of the singularity curve and the maximum magnitudes of $f$ and its first and second derivatives. This approximation rate significantly improves the best $N$-term approximation rate of wavelets, which provide only $O(N^{-1})$ for this class of functions.
Shearlets are to date the only directional representation system that provides sparse approximation of ani
|
https://en.wikipedia.org/wiki/RIPAC%20%28microprocessor%29
|
RIPAC was a VLSI single-chip microprocessor designed for automatic recognition of the connected speech, one of the first of this use.
The RIPAC project started in 1984. RIPAC was aimed at providing efficient real-time speech recognition services for the Italian telephone network operated by SIP. The microprocessor was presented in September 1986 at The Hague (Netherlands) at the EUSIPCO conference. It was composed of 70,000 transistors and structured as a Harvard architecture.
The name RIPAC is the acronym of "Riconoscimento del PArlato Connesso", which means "recognition of connected speech" in Italian. The microprocessor was designed by the Italian companies CSELT and ELSAG and was produced by SGS: a combination of hidden Markov model and dynamic time warping algorithms was used for processing speech signals. It was able to perform real-time speech recognition of Italian and many other languages with good reliability. The chip, covered by U.S. Patent No. 4,907,278, worked on its first run.
|
https://en.wikipedia.org/wiki/Biological%20pacemaker
|
A biological pacemaker is one or more types of cellular components that, when "implanted or injected into certain regions of the heart," produce specific electrical stimuli that mimic that of the body's natural pacemaker cells. Biological pacemakers are indicated for issues such as heart block, slow heart rate, and asynchronous heart ventricle contractions.
The biological pacemaker is intended as an alternative to the artificial cardiac pacemaker that has been in human use since the late 1950s. Despite their success, several limitations and problems with artificial pacemakers have emerged during the past decades such as electrode fracture or damage to insulation, infection, re-operations for battery exchange, and venous thrombosis. The need for an alternative is most obvious in children, including premature newborn babies, where size mismatch and the fact that pacemaker leads do not grow with children are a problem. A more biological approach has been taken in order to mitigate many of these issues. However, the implanted biological pacemaker cells still typically need to be supplemented with an artificial pacemaker while the cells form the necessary electrical connections with cardiac tissue.
History
The first successful experiment with biological pacemakers was carried out by Arjang Ruhparwar's group at Hannover Medical School in Germany using transplanted fetal heart muscle cells. The process was first introduced at the scientific sessions of the American Heart Association in Anaheim in 2001, and the results were published in 2002. A few months later, Eduardo Marban's group from Johns Hopkins University published the first successful gene-therapeutic approach towards the generation of pacemaking activity in otherwise non-pacemaking adult cardiomyocytes using a guinea pig model. The investigators postulated latent pacemaker capability in normal heart muscle cells. This potential ability is suppressed by the inward-rectifier potassium current Ik1 encoded by the
|
https://en.wikipedia.org/wiki/Alberto%20Diaspro
|
Alberto Diaspro (born April 7, 1959, in Genoa, Italy) is an Italian scientist. He received his doctoral degree in electronic engineering from the University of Genoa, Italy, in 1983. He is a full professor of applied physics at the University of Genoa and research director of Nanoscopy at the Italian Institute of Technology. Alberto Diaspro is President of the Italian biophysical society SIBPA. In 2022 he received the Gregorio Weber Award for excellence in fluorescence.
|
https://en.wikipedia.org/wiki/Electronic%20speed%20control
|
An electronic speed control (ESC) is an electronic circuit that controls and regulates the speed of an electric motor. It may also provide reversing of the motor and dynamic braking.
Miniature electronic speed controls are used in electrically powered radio controlled models. Full-size electric vehicles also have systems to control the speed of their drive motors.
Function
An electronic speed control follows a speed reference signal (derived from a throttle lever, joystick, or other manual input) and varies the switching rate of a network of field effect transistors (FETs). By adjusting the duty cycle or switching frequency of the transistors, the speed of the motor is changed. The rapid switching of the current flowing through the motor is what causes the motor itself to emit its characteristic high-pitched whine, especially noticeable at lower speeds.
Different types of speed controls are required for brushed DC motors and brushless DC motors. A brushed motor can have its speed controlled by varying the voltage on its armature. (Industrially, motors with electromagnet field windings instead of permanent magnets can also have their speed controlled by adjusting the strength of the motor field current.) A brushless motor requires a different operating principle. The speed of the motor is varied by adjusting the timing of pulses of current delivered to the several windings of the motor.
Brushless ESC systems basically create three-phase AC power, like a variable frequency drive, to run brushless motors. Brushless motors are popular with radio controlled airplane hobbyists because of their efficiency, power, longevity and light weight in comparison to traditional brushed motors. Brushless DC motor controllers are much more complicated than brushed motor controllers.
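As a rough illustration of how such a controller times current pulses to the three windings, the sketch below encodes a conventional six-step (trapezoidal) commutation table; the exact step order and the use of the floating phase for back-EMF sensing vary between real ESC designs.

# Phase states: +1 = switched to supply, -1 = switched to ground, 0 = floating.
# The floating (undriven) phase is the one commonly monitored for back EMF.
COMMUTATION_TABLE = [
    # (phase A, phase B, phase C)
    (+1, -1,  0),
    (+1,  0, -1),
    ( 0, +1, -1),
    (-1, +1,  0),
    (-1,  0, +1),
    ( 0, -1, +1),
]

def phase_outputs(step: int):
    """Drive state of phases A, B and C for a given commutation step."""
    return COMMUTATION_TABLE[step % 6]

# One electrical revolution advances through all six steps in order.
for step in range(6):
    print(step, phase_outputs(step))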
The correct phase of the current fed to the motor varies with the motor rotation, which is to be taken into account by the ESC: Usually, back EMF from the motor windings is used to detect this
|
https://en.wikipedia.org/wiki/Pathological%20%28mathematics%29
|
In mathematics, when a mathematical phenomenon runs counter to some intuition, the phenomenon is sometimes called pathological. On the other hand, if a phenomenon does not run counter to intuition, it is sometimes called well-behaved. These terms are sometimes useful in mathematical research and teaching, but there is no strict mathematical definition of pathological or well-behaved.
In analysis
A classic example of a pathology is the Weierstrass function, a function that is continuous everywhere but differentiable nowhere. The sum of a differentiable function and the Weierstrass function is again continuous but nowhere differentiable; so there are at least as many such functions as differentiable functions. In fact, using the Baire category theorem, one can show that continuous functions are generically nowhere differentiable.
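A truncated version of the Weierstrass function is easy to compute; the Python sketch below evaluates a partial sum with one standard choice of parameters (a = 0.5, b = 13, so that ab > 1 + 3π/2).

import numpy as np

def weierstrass(x, a=0.5, b=13.0, n_terms=40):
    """Partial sum of W(x) = sum_n a^n * cos(b^n * pi * x).

    With 0 < a < 1, b an odd integer and ab > 1 + 3*pi/2, the infinite sum is
    continuous everywhere but nowhere differentiable; this truncation only
    approximates that behaviour.
    """
    x = np.asarray(x, dtype=float)
    n = np.arange(n_terms, dtype=float)
    return np.sum((a ** n)[:, None] * np.cos((b ** n)[:, None] * np.pi * x), axis=0)

xs = np.linspace(0, 2, 4001)
ys = weierstrass(xs)          # plotting ys against xs shows roughness at every scale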
Such examples were deemed pathological when they were first discovered:
To quote Henri Poincaré:
Since Poincaré, nowhere differentiable functions have been shown to appear in basic physical and biological processes such as Brownian motion and in applications such as the Black-Scholes model in finance.
Counterexamples in Analysis is a whole book of such counterexamples.
In topology
One famous counterexample in topology is the Alexander horned sphere, showing that topologically embedding the sphere S2 in R3 may fail to separate the space cleanly. As a counterexample, it motivated mathematicians to define the tameness property, which suppresses the kind of wild behavior exhibited by the horned sphere, wild knot, and other similar examples.
Like many other pathologies, the horned sphere in a sense plays on infinitely fine, recursively generated structure, which in the limit violates ordinary intuition. In this case, the topology of an ever-descending chain of interlocking loops of continuous pieces of the sphere in the limit fully reflects that of the common sphere, and one would expect the outside of it, after an embedding, to work
|
https://en.wikipedia.org/wiki/Kappa%20organism
|
In biology, Kappa organism or Kappa particle refers to inheritable cytoplasmic symbionts occurring in some strains of the ciliate Paramecium. Paramecium strains possessing the particles are known as "killer paramecia". They liberate a substance, known as paramecin, into the culture medium that is lethal to Paramecium that do not contain kappa particles. Kappa particles are found in genotypes of Paramecium aurelia syngen 2 that carry the dominant gene K.
Kappa particles are Feulgen-positive and stain with Giemsa after acid hydrolysis. The length of the particles is 0.2–0.5 μm.
While there was initial confusion over the status of kappa particles as viruses, bacteria, organelles, or mere nucleoprotein, the particles are intracellular bacterial symbionts called Caedibacter taeniospiralis. Caedibacter taeniospiralis contains cytoplasmic protein inclusions called R bodies which act as a toxin delivery system.
|
https://en.wikipedia.org/wiki/Embedded%20C
|
Embedded C is a set of language extensions for the C programming language by the C Standards Committee to address commonality issues that exist between C extensions for different embedded systems.
Embedded C programming typically requires nonstandard extensions to the C language in order to support enhanced microprocessor features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations. The C Standards Committee produced a Technical Report, most recently revised in 2008 and reviewed in 2013, providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces and basic I/O hardware addressing. Embedded C uses most of the syntax and semantics of standard C, e.g., main() function, variable definition, datatype declaration, conditional statements (if, switch case), loops (while, for), functions, arrays and strings, structures and union, bit operations, macros, etc.
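To illustrate the kind of fixed-point arithmetic these extensions expose, here is a rough Python model of saturating Q15 multiplication; the helper names are invented for this example, and in actual Embedded C the fixed-point types of the Technical Report would be used directly.

Q15_ONE = 1 << 15                       # 1.0 in Q15 scaling (32768)
Q15_MAX, Q15_MIN = 32767, -32768        # representable 16-bit range

def to_q15(x: float) -> int:
    """Convert a real number in [-1, 1) to a Q15 integer, saturating at the limits."""
    return max(Q15_MIN, min(Q15_MAX, int(round(x * Q15_ONE))))

def q15_mul(a: int, b: int) -> int:
    """Saturating Q15 multiply: (a * b) >> 15, clamped to the 16-bit range."""
    prod = (a * b) >> 15
    return max(Q15_MIN, min(Q15_MAX, prod))

x, y = to_q15(0.75), to_q15(-0.5)
print(q15_mul(x, y) / Q15_ONE)          # approximately -0.375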
|
https://en.wikipedia.org/wiki/Pulse-width%20modulation
|
Pulse-width modulation (PWM), also known as pulse-duration modulation (PDM) or pulse-length modulation (PLM), is a method of controlling the average power or amplitude delivered by an electrical signal. The average value of voltage (and current) fed to the load is controlled by switching the supply between 0 and 100% at a rate faster than it takes the load to change significantly. The longer the switch is on, the higher the total power supplied to the load. Along with maximum power point tracking (MPPT), it is one of the primary methods of controlling the output of solar panels to that which can be utilized by a battery. PWM is particularly suited for running inertial loads such as motors, which are not as easily affected by this discrete switching. The goal of PWM is to control a load; however, the PWM switching frequency must be selected carefully in order to smoothly do so.
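The following short numerical check illustrates the central relationship implied above: the average voltage delivered by an ideal PWM waveform equals the duty cycle times the supply voltage. The 12 V supply and 1 kHz switching frequency are arbitrary example values.

import numpy as np

def pwm_waveform(duty_cycle, supply=12.0, frequency=1_000.0, fs=1_000_000, periods=50):
    """Ideal PWM voltage waveform: `supply` volts while on, 0 V while off."""
    t = np.arange(int(fs * periods / frequency)) / fs
    phase = (t * frequency) % 1.0              # position within each period, 0..1
    return np.where(phase < duty_cycle, supply, 0.0)

for duty in (0.25, 0.5, 0.75):
    v = pwm_waveform(duty)
    print(duty, round(v.mean(), 3))            # about 3.0, 6.0 and 9.0 volts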
The PWM switching frequency can vary greatly depending on load and application. For example, switching only has to be done several times a minute in an electric stove; 100 or 120 Hz (double of the utility frequency) in a lamp dimmer; between a few kilohertz (kHz) and tens of kHz for a motor drive; and well into the tens or hundreds of kHz in audio amplifiers and computer power supplies. Choosing a switching frequency that is too high for the application results in smooth control of the load, but may cause premature failure of the mechanical control components. Selecting a switching frequency that is too low for the application causes oscillations in the load. The main advantage of PWM is that power loss in the switching devices is very low. When a switch is off there is practically no current, and when it is on and power is being transferred to the load, there is almost no voltage drop across the switch. Power loss, being the product of voltage and current, is thus in both cases close to zero. PWM also works well with digital controls, which, because of their on/off nature, can easily set the
|
https://en.wikipedia.org/wiki/Cospeciation
|
Cospeciation is a form of coevolution in which the speciation of one species dictates speciation of another species and is most commonly studied in host-parasite relationships. In the case of a host-parasite relationship, if two hosts of the same species come into close proximity of each other, parasites of the same species from each host are able to move between individuals and mate with the parasites on the other host. However, if a speciation event occurs in the host species, the parasites will no longer be able to "cross over" because the two new host species no longer mate and, if the speciation event is due to a geographic separation, it is very unlikely the two hosts will interact at all with each other. The lack of proximity between the hosts prevents the populations of parasites from interacting and mating, which can ultimately lead to speciation within the parasite.
According to Fahrenholz's rule, first proposed by Heinrich Fahrenholz in 1913, when host-parasite cospeciation has occurred, the phylogenies of the host and parasite come to mirror each other. In host-parasite phylogenies, and all species phylogenies for that matter, perfect mirroring is rare. Host-parasite phylogenies can be altered by host switching, extinction, independent speciation, and other ecological events, making cospeciation harder to detect. However, cospeciation is not limited to parasitism, but has been documented in symbiotic relationships like those of gut microbes in primates.
Fahrenholz's rule
In 1913, Heinrich Fahrenholz proposed that the phylogenies of both the host and parasite will eventually become congruent, or mirror each other when cospeciation occurs. More specifically, more closely related parasite species will be found on closely related species of host. Thus, to determine if cospeciation has occurred within a host-parasite relationship, scientists have used comparative analyses on the host and parasite phylogenies.
In 1968, Daniel Janzen proposed an
|
https://en.wikipedia.org/wiki/Super-resolution%20imaging
|
Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced.
In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g. SAMV) are employed to achieve SR over the standard periodogram algorithm.
Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy.
Basic concepts
Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles:
Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electro-magnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations.
Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred in
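To attach numbers to the diffraction limit discussed above, the small sketch below evaluates the standard Abbe and Rayleigh resolution formulas; these textbook expressions are quoted only for illustration and are not taken from this article.

def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit d = lambda / (2 NA), in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

def rayleigh_criterion(wavelength_nm: float, numerical_aperture: float) -> float:
    """Rayleigh two-point resolution distance d = 0.61 lambda / NA, in nanometres."""
    return 0.61 * wavelength_nm / numerical_aperture

print(abbe_limit(550, 1.4))           # ~196 nm for green light and an NA 1.4 objective
print(rayleigh_criterion(550, 1.4))   # ~240 nm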
|
https://en.wikipedia.org/wiki/Packard%20Bell%20Statesman
|
The Packard Bell Statesman was an economy line of notebook computers introduced in 1993 by Packard Bell. They were slower in performance and lacked features compared to most competitor products, but they were lower in price. It was created in a collaboration between Packard Bell and Zenith Data Systems. The Statesman series was essentially a rebrand of the Zenith Data Systems Z-Star 433 series, the only notable differences being the logo in the middle and the text on the front bezel.
History
In June 1993 Zenith Data Systems announced an alliance with Packard Bell. Zenith acquired about 20% of Packard Bell and they would both now work together to design and build PCs. Zenith would also provide Packard Bell with private-label versions of their portable PCs. The Packard Bell Statesman was a rebrand of the Zenith Z-Star notebook computer series. While the Statesman was being advertised by Packard Bell, the Z-Star series was also still being sold by Zenith.
The Statesman was first introduced on October 4, 1993. Prices started at $1,500 for a monochrome or color DSTN model with a 33 MHz Cyrix Cx486SLC, 4 MB of RAM, 200 MB hard disk drive, internal 1.44 MB floppy disk drive, and MS-DOS 6.0 with Windows 3.1 for the included software. A "J mouse" pointing device was included, similar to the TrackPoint. The Statesman was expected to begin shipping within the next few weeks.
Specifications
Hardware
CPU
The first two models, the 200M and 200C, used the Cyrix Cx486SLC. This was Cyrix's first processor, which was actually a 386SX with on-board L1 cache and 486 instructions, being known as a "hybrid chip". The processor was clocked at 33 MHz and had 1 KB of L1 cache. It was a 16-bit processor and was pin compatible with the Intel 80386SX. On the bottom of the unit, the motherboard had an empty socket for a Cyrix FasMath co-processor, which could improve floating-point math performance.
The 200M and 200C plus models had a Cyrix Cx486SLC2 clocked at 50 MHz, which was 50% faster t
|
https://en.wikipedia.org/wiki/Natural%20logarithm%20of%202
|
The decimal value of the natural logarithm of 2
is approximately
$$\ln 2 \approx 0.693147180559945.$$
The logarithm of 2 in other bases is obtained with the formula
$$\log_b 2 = \frac{\ln 2}{\ln b}.$$
The common logarithm in particular is
$$\log_{10} 2 \approx 0.3010299957.$$
The inverse of this number is the binary logarithm of 10:
$$\log_2 10 = \frac{1}{\log_{10} 2} \approx 3.3219280949.$$
By the Lindemann–Weierstrass theorem, the natural logarithm of any natural number other than 0 and 1 (more generally, of any positive algebraic number other than 1) is a transcendental number.
Series representations
Rising alternate factorial
$$\ln 2 = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$$
This is the well-known "alternating harmonic series".
Binary rising constant factorial
Other series representations
using
(sums of the reciprocals of decagonal numbers)
Involving the Riemann Zeta function
($\gamma$ is the Euler–Mascheroni constant and $\zeta$ Riemann's zeta function.)
BBP-type representations
(See more about Bailey–Borwein–Plouffe (BBP)-type representations.)
Applying the three general series for natural logarithm to 2 directly gives:
Applying them to gives:
Applying them to gives:
Applying them to gives:
Representation as integrals
The natural logarithm of 2 occurs frequently as the result of integration. Some explicit formulas for it include:
$$\ln 2 = \int_0^1 \frac{dx}{1+x} = \int_1^2 \frac{dx}{x}.$$
Other representations
The Pierce expansion is
The Engel expansion is
The cotangent expansion is
The simple continued fraction expansion is
$[0; 1, 2, 3, 1, 6, \ldots]$,
which yields rational approximations, the first few of which are 0, 1, 2/3, 7/10, 9/13 and 61/88.
This generalized continued fraction:
,
also expressible as
Bootstrapping other logarithms
Given a value of ln 2, a scheme of computing the logarithms of other integers is to tabulate the logarithms of the prime numbers and in the next layer the logarithms of the composite numbers based on their factorizations.
This employs the identity
$$\ln(2^i 3^j 5^k \cdots) = i \ln 2 + j \ln 3 + k \ln 5 + \cdots.$$
In a third layer, the logarithms of rational numbers $r = p/q$ are computed with $\ln r = \ln p - \ln q$, and logarithms of roots via $\ln\sqrt[n]{c} = \tfrac{1}{n}\ln c$.
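A small Python sketch of this bootstrapping scheme is given below. It assumes a pre-computed table of prime logarithms (seeded here from math.log purely for demonstration) and derives logarithms of composites and rationals from it.

import math

PRIME_LOGS = {p: math.log(p) for p in (2, 3, 5, 7, 11, 13)}   # the "first layer" table

def log_of_integer(n: int) -> float:
    """ln n computed only from the tabulated logarithms of its prime factors."""
    result, p = 0.0, 2
    while n > 1:
        while n % p == 0:
            result += PRIME_LOGS[p]        # ln(a*b) = ln a + ln b
            n //= p
        p += 1
    return result

def log_of_rational(p: int, q: int) -> float:
    """ln(p/q) = ln p - ln q."""
    return log_of_integer(p) - log_of_integer(q)

print(log_of_integer(72), math.log(72))          # 72 = 2^3 * 3^2
print(log_of_rational(10, 7), math.log(10 / 7))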
The logarithm of 2 is useful in the sense that the powers of 2 are rather densely distributed; finding powers close to powers of other numbers is comparatively eas
|
https://en.wikipedia.org/wiki/Software%20design
|
Software design is the process by which an agent creates a specification of a software artifact intended to accomplish goals, using a set of primitive components and subject to constraints. The term is sometimes used broadly to refer to "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying" the software, or more specifically "the activity following requirements specification and before programming, as ... [in] a stylized software engineering process."
Software design usually involves problem-solving and planning a software solution. This includes both low-level component and algorithm design and high-level architecture design.
Overview
Software design is the process of envisioning and defining software solutions to one or more sets of problems. One of the main components of software design is the software requirements analysis (SRA). SRA is a part of the software development process that lists specifications used in software engineering.
If the software is "semi-automated" or user centered, software design may involve user experience design yielding a storyboard to help determine those specifications. If the software is completely automated (meaning no user or user interface), a software design may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-standard methods like Unified Modeling Language and Fundamental modeling concepts. In either case, some documentation of the plan is usually the product of the design. Furthermore, a software design may be platform-independent or platform-specific, depending upon the availability of the technology used for the design.
The main difference between software analysis and design is that the output of a software analysis consists of smaller problems to solve. Additionally, the analysis should not vary greatly even when it is carried out by different team members or groups. In contrast, the design focuses on capabilities, a
|
https://en.wikipedia.org/wiki/Wiles%27s%20proof%20of%20Fermat%27s%20Last%20Theorem
|
Wiles's proof of Fermat's Last Theorem is a proof by British mathematician Andrew Wiles of a special case of the modularity theorem for elliptic curves. Together with Ribet's theorem, it provides a proof for Fermat's Last Theorem. Both Fermat's Last Theorem and the modularity theorem were almost universally considered inaccessible to proof by contemporaneous mathematicians, meaning that they were believed to be impossible to prove using current knowledge.
Wiles first announced his proof on 23 June 1993 at a lecture in Cambridge entitled "Modular Forms, Elliptic Curves and Galois Representations". However, in September 1993 the proof was found to contain an error. One year later on 19 September 1994, in what he would call "the most important moment of [his] working life", Wiles stumbled upon a revelation that allowed him to correct the proof to the satisfaction of the mathematical community. The corrected proof was published in 1995.
Wiles's proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques which were not available to Fermat. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an R=T theorem) to prove modularity lifting theorems has been an influential development in algebraic number theory.
Together, the two papers which contain the proof are 129 pages long, and consumed over seven years of Wiles's research time. John Coates described the proof as one of the highest achievements of number theory, and John Conway called it "the proof of the [20th] century." Wiles's path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves, established powerful modularity lifting techniques and opened up entire new approaches to numer
|
https://en.wikipedia.org/wiki/Vedic%20Mathematics
|
Vedic Mathematics is a book written by the Indian monk Bharati Krishna Tirtha, and first published in 1965. It contains a list of mathematical techniques, which were falsely claimed to have been retrieved from the Vedas and to contain advanced mathematical knowledge.
Krishna Tirtha failed to produce the sources, and scholars unanimously note it to be a mere compendium of tricks for increasing the speed of elementary mathematical calculations sharing no overlap with historical mathematical developments during the Vedic period. However, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education by right-wing Hindu nationalist governments.
Contents
The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. The range of their asserted applications spans topics as diverse as statics and pneumatics, astronomy, and financial domains. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics.
STS scholar S. G. Dani in 'Vedic Mathematics': Myth and Reality states that the book is primarily a compendium of tricks that can be applied in elementary, middle and high school arithmetic and algebra, to gain faster results. The sutras and sub-sutras are abstract literary expressions (for example, "as much less" or "one less than previous one") prone to creative interpretations; Krishna Tirtha exploited this to the extent of manipulating the same shloka to generate widely different mathematical equivalencies across a multitude of contexts.
Source and relation with The Vedas
According to Krishna Tirtha, the sutras and other accessory content were found after
|
https://en.wikipedia.org/wiki/Wireless%20onion%20router
|
A wireless onion router is a router that uses Tor to connect securely to a network, allowing the user to connect to the internet anonymously. Tor works over an overlay network that is free to use and spans the world; this overlay network is built from numerous volunteer-run relay points, which help the user hide personal information behind layers of encrypted data, like the layers of an onion. Such routers can be built using a Raspberry Pi, either by adding a wireless module or, in later versions, by using the board's own built-in wireless module.
This router provides encryption at the seventh layer (application layer) of the OSI model, which makes the encryption transparent: the user does not have to think about how the data will be sent or received. The encrypted data includes the destination and origin IP addresses of the data, and the current relay point knows only the previous and the next hop of the encrypted packet. These relay points are selected in random order, and each can decrypt only a single layer before forwarding the packet to the next hop, where the procedure is repeated unless that hop is the destination point.
Applications
A wireless router which can use the onion router network can help keep the user safe from hackers or network sniffers: the data they capture won't make any sense, as it will only look like garbled text. These routers are small and handy, giving the user the freedom to carry the tool and connect to the network from anywhere. This setup does not require installation of the Tor browser on the workstation. Whistleblowers and NGO workers use this network to pass information or to talk to their families without disclosing any information. The applications of a wireless onion router are the same as those of a normal router: it provides access that allows it to be placed at a site so that users can get connected.
Tor can be used in security-focused operating systems, messengers, and browsers, which can be anonymised using the Tor network.
|
https://en.wikipedia.org/wiki/Product-family%20engineering
|
Product-family engineering (PFE), also known as product-line engineering, is based on the ideas of "domain engineering" created by the Software Engineering Institute, a term coined by James Neighbors in his 1980 dissertation at University of California, Irvine. Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product-family engineering.
Product-family engineering can be defined as a method that creates an underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variabilities. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate on products in the family. Product-family engineering is conceptually similar to the widespread use of vehicle platforms in the automotive industry.
Product-family engineering is a relatively new approach to the creation of new products. It focuses on the process of engineering new products in such a way that it is possible to reuse product components and apply variability with decreased costs and time. Product-family engineering is all about reusing components and structures as much as possible.
Several studies have proven that using a product-family engineering approach for product development can have several benefits. Here is a list of some of them:
Higher productivity
Higher quality
Faster time-to-market
Lower labor needs
The Nokia case mentioned below also illustrates these benefits.
Overall process
The product family engineering process consists of several phases. The three main phases are:
Phase 1: Product management
Phase 2: Domain engineering
Phase 3: Product engineering
The process has been modeled on a higher abstraction level. This has the advantage that it can be applied to all kinds of product lines and families, not on
|
https://en.wikipedia.org/wiki/OSIAN
|
OSIAN, or Open Source IPv6 Automation Network, is a free and open-source implementation of IPv6 networking for wireless sensor networks (WSNs). OSIAN extends TinyOS, which started as a collaboration between the University of California, Berkeley in co-operation with Intel Research and Crossbow Technology, and has since grown to be an international consortium, the TinyOS Alliance. OSIAN brings direct Internet-connectivity to smartdust technology.
Design
Architecturally, OSIAN treats TinyOS as the underlying operating system providing hardware drivers, while OSIAN itself adds Internet networking capabilities. Users are able to download and install OSIAN-enabled firmware to their embedded hardware, form a PPP connection with their computer, and communicate raw IPv6 UDP to other wireless sensors from their favorite programming language on their computer.
OSIAN is developed using a style very much like the development of Linux, which requires peer reviews and unit testing before any code moves into core repositories.
Platforms
OSIAN is designed for deeply embedded systems with very small amounts of memory. One primary platform contains a TI MSP430-based CC430 system-on-a-chip, which contains 32 kB ROM and 4 kB RAM.
See also
TinyOS
Contiki
6LoWPAN
External links
SuRF Developer Kit supporting OSIAN
Wireless sensor network
Embedded systems
|
https://en.wikipedia.org/wiki/Kostant%27s%20convexity%20theorem
|
In mathematics, Kostant's convexity theorem, introduced by Bertram Kostant (1973), states that the projection of every coadjoint orbit of a connected compact Lie group into the dual of a Cartan subalgebra is a convex set. It is a special case of a more general result for symmetric spaces. Kostant's theorem is a generalization of a result of Schur (1923) and Horn (1954) for Hermitian matrices. They proved that the projection onto the diagonal matrices of the space of all n by n complex self-adjoint matrices with given eigenvalues Λ = (λ1, ..., λn) is the convex polytope with vertices all permutations of the coordinates of Λ.
Kostant used this to generalize the Golden–Thompson inequality to all compact groups.
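The Schur–Horn special case described above is easy to probe numerically: the diagonal of any matrix U diag(Λ) U* must lie in the convex hull of the permutations of Λ, which is equivalent to being majorized by Λ. The Python sketch below, with an arbitrarily chosen Λ, checks this for random unitaries.

import numpy as np

rng = np.random.default_rng(0)
eigs = np.array([4.0, 2.0, 1.0, -3.0])            # prescribed eigenvalues (Lambda)

def random_unitary(n):
    """A random unitary from the QR factorization of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def majorized_by(d, lam, tol=1e-9):
    """True if vector d is majorized by lam (equal sums, dominated partial sums)."""
    d, lam = np.sort(d)[::-1], np.sort(lam)[::-1]
    return (np.all(np.cumsum(d) <= np.cumsum(lam) + tol)
            and abs(d.sum() - lam.sum()) < tol)

for _ in range(1000):
    U = random_unitary(len(eigs))
    diag = np.real(np.diagonal(U @ np.diag(eigs) @ U.conj().T))
    assert majorized_by(diag, eigs)
print("every sampled diagonal is majorized by the prescribed eigenvalues")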
Compact Lie groups
Let K be a connected compact Lie group with maximal torus T and Weyl group W = NK(T)/T. Let their Lie algebras be $\mathfrak{k}$ and $\mathfrak{t}$. Let P be the orthogonal projection of $\mathfrak{k}$ onto $\mathfrak{t}$ for some Ad-invariant inner product on $\mathfrak{k}$. Then for X in $\mathfrak{t}$, P(Ad(K)⋅X) is the convex polytope with vertices w(X) where w runs over the Weyl group.
Symmetric spaces
Let G be a compact Lie group and σ an involution with K a compact subgroup fixed by σ and containing the identity component of the fixed point subgroup of σ. Thus G/K is a symmetric space of compact type. Let $\mathfrak{g}$ and $\mathfrak{k}$ be their Lie algebras and let σ also denote the corresponding involution of $\mathfrak{g}$. Let $\mathfrak{p}$ be the −1 eigenspace of σ and let $\mathfrak{a}$ be a maximal Abelian subspace of $\mathfrak{p}$. Let Q be the orthogonal projection of $\mathfrak{p}$ onto $\mathfrak{a}$ for some Ad(K)-invariant inner product on $\mathfrak{p}$. Then for X in $\mathfrak{a}$, Q(Ad(K)⋅X) is the convex polytope with vertices the w(X) where w runs over the restricted Weyl group (the normalizer of $\mathfrak{a}$ in K modulo its centralizer).
The case of a compact Lie group is the special case where G = K × K, K is embedded diagonally and σ is the automorphism of G interchanging the two factors.
Proof for a compact Lie group
Kostant's proof for symmetric spaces is given in . There is an elementary proof just for compact Lie groups using similar ideas, due to : it is based on a generaliza
|
https://en.wikipedia.org/wiki/Infix%20notation
|
Infix notation is the notation commonly used in arithmetical and logical formulae and statements. It is characterized by the placement of operators between operands—"infixed operators"—such as the plus sign in 2 + 2.
Usage
Binary relations are often denoted by an infix symbol such as set membership a ∈ A when the set A has a for an element. In geometry, perpendicular lines a and b are denoted and in projective geometry two points b and c are in perspective when while they are connected by a projectivity when
Infix notation is more difficult for computers to parse than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +). However, many programming languages use it due to its familiarity. It is also the notation commonly used in arithmetic, e.g. 5 × 6.
Further notations
Infix notation may also be distinguished from function notation, where the name of a function suggests a particular operation, and its arguments are the operands. An example of such a function notation would be S(1, 3) in which the function S denotes addition ("sum"): S(1, 3) = 1 + 3 = 4.
Order of operations
In infix notation, unlike in prefix or postfix notations, parentheses surrounding groups of operands and operators are necessary to indicate the intended order in which operations are to be performed. In the absence of parentheses, certain precedence rules determine the order of operations.
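Resolving precedence and parentheses mechanically is what the shunting yard algorithm (listed under See also below) does when converting infix to postfix form. The compact Python sketch below handles only the four left-associative operators +, -, *, / and space-separated tokens; it is an illustration, not a full parser.

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def infix_to_postfix(tokens):
    """Convert a list of infix tokens to postfix (Reverse Polish) order."""
    output, stack = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # pop operators of greater or equal precedence (left-associative case)
            while stack and stack[-1] in PRECEDENCE and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":          # unwind until the matching parenthesis
                output.append(stack.pop())
            stack.pop()                      # discard the "("
        else:                                # operand
            output.append(tok)
    return output + stack[::-1]

print(infix_to_postfix("3 + 4 * ( 2 - 1 )".split()))
# ['3', '4', '2', '1', '-', '*', '+']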
See also
Tree traversal: Infix (In-order) is also a tree traversal order. It is described in a more detailed manner on this page.
Calculator input methods: comparison of notations as used by pocket calculators
Postfix notation, also called Reverse Polish notation
Prefix notation, also called Polish notation
Shunting yard algorithm, used to convert infix notation to postfix notation or to a tree
Operator (computer programming)
Subject Verb Object
|
https://en.wikipedia.org/wiki/Catabiosis
|
Catabiosis is the process of growing older, aging and physical degradation.
The word comes from Greek "kata"—down, against, reverse and "biosis"—way of life and is generally used to describe senescence and degeneration in living organisms and biophysics of aging in general.
One of the popular catabiotic theories is the entropy theory of aging, where aging is characterized by thermodynamically favourable increase in structural disorder. Living organisms are open systems that take free energy from the environment and offload their entropy as waste. However, basic components of living systems—DNA, proteins, lipids and sugars—tend towards the state of maximum entropy while continuously accumulating damages causing catabiosis of the living structure.
Catabiotic force on the contrary is the influence exerted by living structures on adjoining cells, by which the latter are developed in harmony with the primary structures.
See also
Onpedia definition of catabiosis
Catabiotic force
Dictionary.com - Catabiosis
DNA damage theory of aging
Medical aspects of death
Biology terminology
Senescence
|
https://en.wikipedia.org/wiki/Photon%20noise
|
Photon noise is the randomness in signal associated with photons arriving at a detector. For a simple black body emitting on an absorber, the noise-equivalent power is given by
where h is the Planck constant, ν is the central frequency, Δν is the bandwidth, n is the photon occupation number and η is the optical efficiency.
The first term is essentially shot noise whereas the second term is related to the bosonic character of photons, variously known as "Bose noise" or "wave noise". At low occupation number, such as in the visible spectrum, the shot noise term dominates. At high occupation number, however, typical of the radio spectrum, the Bose term dominates.
See also
Hanbury Brown and Twiss effect
Phonon noise
|
https://en.wikipedia.org/wiki/Signal%20averaging
|
Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. By averaging a set of replicate measurements, the signal-to-noise ratio (SNR) will be increased, ideally in proportion to the square root of the number of measurements.
Deriving the SNR for averaged signals
Assume that:
The signal is uncorrelated with the noise, and the noise in different measurements is uncorrelated: $E[s\,n_i] = 0$ and $E[n_i n_j] = 0$ for $i \neq j$.
Signal power is constant in the replicate measurements.
Noise is random, with a mean of zero and constant variance in the replicate measurements: $E[n_i] = 0$ and $\operatorname{Var}(n_i) = \sigma^2$.
We (canonically) define the signal-to-noise ratio as $\mathrm{SNR} = P_{\mathrm{signal}}/P_{\mathrm{noise}}$.
Noise power for sampled signals
Assuming we sample the noise, we get a per-sample variance of
$\operatorname{Var}(n_i) = \sigma^2$.
Averaging $M$ realizations of a random variable leads to the following variance:
$\operatorname{Var}\!\left(\frac{1}{M}\sum_{i=1}^{M} n_i\right) = \frac{1}{M^2}\sum_{i=1}^{M}\operatorname{Var}(n_i)$.
Since the noise variance is constant, $\operatorname{Var}(n_i) = \sigma^2$:
$\operatorname{Var}\!\left(\frac{1}{M}\sum_{i=1}^{M} n_i\right) = \frac{M\sigma^2}{M^2} = \frac{\sigma^2}{M}$,
demonstrating that averaging $M$ realizations of the same, uncorrelated noise reduces noise power by a factor of $M$, and reduces the noise level by a factor of $\sqrt{M}$.
Signal power for sampled signals
Considering $M$ vectors $V_k$ of signal samples of length $T$:
$V_k = [v_k(1), v_k(2), \ldots, v_k(T)]$, for $k = 1, \ldots, M$,
the power of such a vector simply is
$P_k = \frac{1}{T}\sum_{t=1}^{T} v_k(t)^2$.
Again, averaging the $M$ vectors $V_k$ yields the following averaged vector
$\bar V = \frac{1}{M}\sum_{k=1}^{M} V_k$.
In the case where $V_1 = V_2 = \cdots = V_M = V$ (every replicate contains the same signal samples), we see that the power of $\bar V$ reaches a maximum of
$\bar P = \frac{1}{T}\sum_{t=1}^{T} v(t)^2 = P_k$.
In this case, the ratio of signal to noise also reaches a maximum,
$\mathrm{SNR}_{\mathrm{avg}} = \frac{\bar P}{\sigma^2 / M} = M \cdot \mathrm{SNR}$.
This is the oversampling case, where the observed signal is correlated (because oversampling implies that the signal observations are strongly correlated).
Time-locked signals
Averaging is applied to enhance a time-locked signal component in noisy measurements; time-locking implies that the signal is observation-periodic, so we end up in the maximum case above.
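The expected improvement is easy to verify numerically. The Python sketch below averages M noisy replicates of a time-locked sinusoid (an arbitrary test signal) and shows the measured SNR growing roughly in proportion to M.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 7 * t)          # the time-locked component
sigma = 2.0                                 # noise standard deviation

def snr_after_averaging(trials):
    """Signal power divided by residual noise power of the trial average."""
    avg = trials.mean(axis=0)
    noise_power = np.mean((avg - signal) ** 2)
    return np.mean(signal ** 2) / noise_power

for M in (1, 4, 16, 64):
    trials = signal + rng.normal(0.0, sigma, size=(M, t.size))
    print(M, round(snr_after_averaging(trials), 2))   # grows roughly like M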
Averaging odd and even trials
A specific way of obtaining replicates is to average all the odd and even trials in separate buffers. This has the advantage of allowing for comparison of even and odd results from interleaved trials. An average of odd and even averages generates
|
https://en.wikipedia.org/wiki/Teknomo%E2%80%93Fernandez%20algorithm
|
The Teknomo–Fernandez algorithm (TF algorithm), is an efficient algorithm for generating the background image of a given video sequence.
By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in O(R) time (where R is the image resolution) using only a small number of binary and Boolean bit operations, which require a small amount of memory and have built-in operators in many programming languages such as C, C++, and Java.
History
People tracking from videos usually involves some form of background subtraction to segment foreground from background. Once foreground images are extracted, then desired algorithms (such as those for motion tracking, object tracking, and facial recognition) may be executed using these images.
However, background subtraction requires that the background image is already available and, unfortunately, this is not always the case. Traditionally, the background image is searched for manually or automatically from the video images when there are no objects. More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric models, Kalman filtering, and adaptive smoothing has been suggested; however, most of these methods have high computational complexity and are resource-intensive.
The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only O(R) time, depending on the resolution R of an image, and the accuracy gained within a manageable number of frames. At least three frames from a video are needed to produce the background image, assuming that for every pixel position the background occurs in the majority of the frames. Furthermore, it can be performed for both grayscale and colored videos.
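A minimal sketch of the Boolean majority voting that the algorithm builds on is given below, under the assumption that 3^L frames are combined hierarchically in L rounds with a bitwise majority at every pixel; the details of the actual Teknomo–Fernandez level structure are omitted.

import numpy as np

def majority3(a, b, c):
    """Bitwise majority of three equally shaped uint8 arrays:
    each bit of the result is set iff at least two of the inputs have it set."""
    return (a & b) | (a & c) | (b & c)

def background_estimate(frames, levels):
    """Reduce 3**levels frames to one image by repeated majority-of-three voting."""
    current = [np.asarray(f, dtype=np.uint8) for f in frames]
    for _ in range(levels):
        current = [majority3(current[i], current[i + 1], current[i + 2])
                   for i in range(0, len(current) - 2, 3)]
    return current[0]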
Assumptions
The camera is stationary.
The light of the environment changes only slowly
|
https://en.wikipedia.org/wiki/Local%20Management%20Interface
|
Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet.
Frame Relay
LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI.
There are three standards for LMI:
Using DLCI 0:
ANSI's T1.617 Annex D standard
ITU-T's Q.933 Annex A standard
Using DLCI 1023:
The "Gang of Four" standard, developed by Cisco, DEC, StrataCom and Nortel
Carrier Ethernet
Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices.
|
https://en.wikipedia.org/wiki/City%20Nature%20Challenge
|
The City Nature Challenge is an annual, global, community science competition to document urban biodiversity. The challenge is a bioblitz that engages residents and visitors to find and document plants, animals, and other organisms living in urban areas. The goals are to engage the public in the collection of biodiversity data, with three awards each year for the cities that make the most observations, find the most species, and engage the most people.
Participants primarily use the iNaturalist app and website to document their observations, though some areas use other platforms, such as Natusfera in Spain. The observation period is followed by several days of identification and the final announcement of winners. Participants need not know how to identify the species; help is provided through iNaturalist's automated species identification feature as well as the community of users on iNaturalist, including professional scientists and expert naturalists.
History
The City Nature Challenge was founded by Alison Young and Rebecca Johnson of the California Academy of Sciences and Lila Higgins of the Natural History Museum of Los Angeles County. The first challenge was in the spring of 2016 between Los Angeles and San Francisco. Participants documented over 20,000 observations with the iNaturalist platform. In 2017, the challenge expanded to 16 cities across the United States and participants collected over 125,000 observations of wildlife in 5 days. In 2018, the challenge expanded to 68 cities across the world. In four days, over 441,000 observations of more than 18,000 species were observed, and over 17,000 people participated. The 2019 challenge more than doubled in scale, with almost a million observations of over 31,000 species observed by around 35,000 people.
Taking the competition beyond its US roots, the 2019 event was a much more international affair, with the winning city for observations and species coming from Africa (Cape Town), and three South American
|
https://en.wikipedia.org/wiki/Process%20control%20monitoring
|
In the manufacture of integrated circuits, process control monitoring (PCM) is the procedure followed to obtain detailed information about the fabrication process used.
PCM involves designing and fabricating special test structures that can monitor technology-specific parameters such as Vth in CMOS and Vbe in bipolar devices. These structures are placed across the wafer at specific locations, alongside the chips produced, so that a closer look into the process variation is possible.
Integrated circuits
|
https://en.wikipedia.org/wiki/List%20of%20atmospheric%20optical%20phenomena
|
Atmospheric optical phenomena include:
Afterglow
Airglow
Alexander's band, the dark region between the two bows of a double rainbow.
Alpenglow
Anthelion
Anticrepuscular rays
Aurora
Auroral light (northern and southern lights, aurora borealis and aurora australis)
Belt of Venus
Brocken Spectre
Circumhorizontal arc
Circumzenithal arc
Cloud iridescence
Crepuscular rays
Earth's shadow
Earthquake lights
Glories
Green flash
Halos, of Sun or Moon, including sun dogs
Haze
Heiligenschein or halo effect, partly caused by the opposition effect
Ice blink
Light pillar
Lightning
Mirages (including Fata Morgana)
Monochrome Rainbow
Moon dog
Moonbow
Nacreous cloud/Polar stratospheric cloud
Rainbow
Subsun
Sun dog
Tangent arc
Tyndall effect
Upper-atmospheric lightning, including red sprites, Blue jets, and ELVES
Water sky
See also
|
https://en.wikipedia.org/wiki/List%20of%20aperiodic%20sets%20of%20tiles
|
In geometry, a tiling is a partition of the plane (or any other geometric setting) into closed sets (called tiles), without gaps or overlaps (other than the boundaries of the tiles). A tiling is considered periodic if there exist translations in two independent directions which map the tiling onto itself. Such a tiling is composed of a single fundamental unit or primitive cell which repeats endlessly and regularly in two independent directions. An example of such a tiling is shown in the adjacent diagram (see the image description for more information). A tiling that cannot be constructed from a single primitive cell is called nonperiodic. If a given set of tiles allows only nonperiodic tilings, then this set of tiles is called aperiodic. The tilings obtained from an aperiodic set of tiles are often called aperiodic tilings, though strictly speaking it is the tiles themselves that are aperiodic. (The tiling itself is said to be "nonperiodic".)
The first table explains the abbreviations used in the second table. The second table contains all known aperiodic sets of tiles and gives some additional basic information about each set. This list of tiles is still incomplete.
Explanations
List
|
https://en.wikipedia.org/wiki/Proof%20of%20impossibility
|
In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility are often the resolution of decades or centuries of work spent attempting to find a solution, eventually proving that none exists. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general rather than just to exhibit a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.
The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers. Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved because the number π is transcendental (i.e., non-algebraic), and only a subset of the algebraic numbers can be constructed by compass and straightedge. Two other classical problems, trisecting the general angle and doubling the cube, were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures.
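For the first example above, the classic argument (a standard textbook sketch, not quoted from this article) runs as follows:

```latex
% Standard proof by contradiction that sqrt(2) is irrational.
Suppose $\sqrt{2} = \tfrac{p}{q}$ with $p, q$ integers, $q \neq 0$, and the
fraction in lowest terms. Then
\[
  2 = \frac{p^2}{q^2} \quad\Longrightarrow\quad p^2 = 2q^2,
\]
so $p^2$ is even, hence $p$ is even; write $p = 2r$. Substituting gives
\[
  4r^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2r^2,
\]
so $q$ is also even. Both $p$ and $q$ being even contradicts the assumption
that $\tfrac{p}{q}$ was in lowest terms, so no such fraction exists.
```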
A problem that arose in the 16th century was creating a general formula using radicals to express the solution of any polynomial equation of fixed degree k, where k ≥ 5. In the 1820s, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) showed this to be impossible, using concepts such as solvable groups from Galois theory—a new sub-field of abstract algebra.
Some of the most important proofs of impossibility found in the 20th century were those related to undecidability
|
https://en.wikipedia.org/wiki/List%20of%20numerical-analysis%20software
|
Listed here are notable end-user computer applications intended for use with numerical or data analysis:
Numerical-software packages
General-purpose computer algebra systems
Interface-oriented
Language-oriented
Historically significant
Expensive Desk Calculator, written for the TX-0 and PDP-1 in the late 1950s or early 1960s.
S is an (array-based) programming language with strong numerical support. R is an implementation of the S language.
See also
|
https://en.wikipedia.org/wiki/Eb/N0
|
In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
As the description implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb is in units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or joules.
These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of "bandwidth" to apply to the signal.
But when the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type.
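As a rough numerical illustration of the relationships above (a sketch of my own: the function names and example figures are assumptions, and an ideal additive-white-noise channel is implied), converting between SNR, C/N, and Eb/N0 in decibels reduces to a few logarithms:

```python
# Minimal sketch: dB-domain conversions implied by the text above.
import math

def ebn0_from_snr_db(snr_db: float, spectral_efficiency: float) -> float:
    """Eb/N0 (dB) from SNR (dB) and gross link spectral efficiency in (bit/s)/Hz."""
    return snr_db - 10 * math.log10(spectral_efficiency)

def ebn0_from_cn_db(cn_db: float, bandwidth_hz: float, bit_rate_bps: float) -> float:
    """Eb/N0 (dB) from carrier-to-noise ratio (dB), noise bandwidth, and user bit rate.

    This is the same relation as above, since C/N is the SNR measured in
    `bandwidth_hz` and the spectral efficiency is bit_rate_bps / bandwidth_hz.
    """
    return cn_db + 10 * math.log10(bandwidth_hz / bit_rate_bps)

# Example (made-up numbers): a 1 Mbit/s link in 500 kHz (spectral efficiency 2)
# with 13 dB SNR has Eb/N0 = 13 - 10*log10(2) ≈ 10 dB.
print(ebn0_from_snr_db(13.0, 2.0))
print(ebn0_from_cn_db(13.0, 500e3, 1e6))
```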
Eb/N0 must be used with care on interference-limited channels since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), the interference is sufficiently noise-like that it can be represented as I0 and added to the thermal noise N0 to produce the overall ratio Eb/(N0 + I0).
Relation to carrier-to-noise ratio
Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection
|
https://en.wikipedia.org/wiki/Square-law%20detector
|
In electronic signal processing, a square-law detector is a device that produces an output proportional to the square of its input. For example, in demodulating radio signals, a semiconductor diode can be used as a square-law detector, providing an output current proportional to the square of the amplitude of the input voltage over some range of input amplitudes. A square-law detector therefore provides an output directly proportional to the power of the input electrical signal.
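A minimal sketch of this idea (my own illustration, assuming NumPy, an ideal squarer, a crude moving-average low-pass filter, and made-up signal parameters): squaring an amplitude-modulated input produces a baseband term proportional to the squared envelope, which filtering and a square root then recover.

```python
import numpy as np

fs = 100_000                     # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
fc, fm = 10_000, 200             # carrier and message frequencies, Hz
message = 1 + 0.5 * np.sin(2 * np.pi * fm * t)   # AM envelope (always positive)
v_in = message * np.cos(2 * np.pi * fc * t)      # modulated carrier

i_out = v_in ** 2                # ideal square-law detector: output ∝ (input voltage)^2

# Moving-average low-pass filter spanning two carrier periods removes the 2*fc term.
window = int(fs / fc) * 2
envelope_sq = np.convolve(i_out, np.ones(window) / window, mode="same")

# envelope_sq ≈ message**2 / 2, so scaling and a square root recover the envelope.
recovered = np.sqrt(2 * envelope_sq)
```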
|
https://en.wikipedia.org/wiki/Thomas%20Baxter%20%28mathematician%29
|
Thomas Baxter (fl. 1732–1740) was a schoolmaster and mathematician who published an erroneous method of squaring the circle. He was derided as a "pseudo-mathematician" by F. Y. Edgeworth, writing for the Dictionary of National Biography.
When he was master of a private school at Crathorne, North Yorkshire, Baxter composed a book entitled The Circle squared (London: 1732), published in octavo. The book begins with the untrue assertion that "if the diameter of a circle be unity or one, the circumference of that circle will be 3.0625", where the value should correctly be π (approximately 3.14159). From this incorrect assumption, Baxter proves fourteen geometric theorems on circles, alongside some others on cones and ellipses, which Edgeworth describes as being of "equal absurdity" to Baxter's other assertions. Thomas Gent, who published the work, wrote in his reminiscences, The Life of Mr. Thomas Gent, that "as it never proved of any effect, it was converted to waste paper, to the great mortification of the author".
This book has received harsh reviews from modern mathematicians and scholars. Antiquary Edward Peacock referred to it as "no doubt, great rubbish". Mathematician Augustus De Morgan included Baxter's proof among his Budget of Paradoxes (1872), dismissing it as an absurd work. The work was the reason Edgeworth gave Baxter the epithet, "pseudo-mathematician".
Baxter published another work, Matho, or the Principles of Astronomy and Natural Philosophy accommodated to the Use of Younger Persons (London: 1740). Unlike Baxter's other work, this volume enjoyed considerable popularity in its time.
|