Vibrational spectroscopy is the branch of spectroscopy that studies vibrational spectra, such as infrared and Raman spectra. However, the latest developments in spectroscopy can sometimes dispense with the dispersion technique. In biochemical spectroscopy, information can be gathered about biological tissue by absorption and light scattering techniques. Light scattering spectroscopy is a type of reflectance spectroscopy that determines tissue structures by examining elastic scattering. In such a case, it is the tissue that acts as a diffraction or dispersion mechanism. Spectroscopic studies were central to the development of quantum mechanics because the first useful atomic models described the spectra of hydrogen. These models include the Bohr model, the Schrödinger equation, and matrix mechanics, all of which can produce the spectral lines of hydrogen and therefore provide the basis for discrete quantum jumps to match the discrete hydrogen spectrum. Max Planck's explanation of blackbody radiation also involved spectroscopy: he compared the wavelength distribution of the emitted light, measured with a photometer, to the temperature of a black body. Spectroscopy is used in physical and analytical chemistry because atoms and molecules have unique spectra. As a result, these spectra can be used to detect, identify and quantify information about the atoms and molecules. Spectroscopy is also used in astronomy and remote sensing on Earth. Most research telescopes have spectrographs.
https://en.wikipedia.org/wiki/Spectroscopy
Spectroscopy is also used in astronomy and remote sensing on Earth. Most research telescopes have spectrographs. The measured spectra are used to determine the chemical composition and physical properties of astronomical objects (such as their temperature, the density of elements in a star, velocity, and the presence of black holes). An important use for spectroscopy is in biochemistry. Molecular samples may be analyzed for species identification and energy content. ## Theory The underlying premise of spectroscopy is that light is made of different wavelengths and that each wavelength corresponds to a different frequency. The importance of spectroscopy rests on the fact that every element in the periodic table has a unique light spectrum: the frequencies of light it emits or absorbs consistently appear in the same part of the electromagnetic spectrum when that light is diffracted. This opened up an entire field of study of anything that contains atoms. Spectroscopy is the key to understanding the atomic properties of all matter. As such, spectroscopy opened up many new sub-fields of science. The idea that each atomic element has its unique spectral signature enabled spectroscopy to be used in a broad number of fields, each with a specific goal achieved by different spectroscopic procedures.
https://en.wikipedia.org/wiki/Spectroscopy
As such, spectroscopy opened up many new sub-fields of science. The idea that each atomic element has its unique spectral signature enabled spectroscopy to be used in a broad number of fields, each with a specific goal achieved by different spectroscopic procedures. The National Institute of Standards and Technology maintains a public Atomic Spectra Database that is continually updated with precise measurements. The field has broadened because any part of the electromagnetic spectrum, from the infrared to the ultraviolet, may be used to analyze a sample, with each region telling scientists different properties of the very same sample. For instance, in chemical analysis, the most common types of spectroscopy include atomic spectroscopy, infrared spectroscopy, ultraviolet and visible spectroscopy, Raman spectroscopy and nuclear magnetic resonance. The theory behind nuclear magnetic resonance (NMR), for example, rests on the concept of resonance and its corresponding resonant frequency. Resonances were first characterized in mechanical systems such as pendulums, whose frequency of motion was famously noted by Galileo. ## Classification of methods Spectroscopy is a sufficiently broad field that many sub-disciplines exist, each with numerous implementations of specific spectroscopic techniques. The various implementations and techniques can be classified in several ways.
https://en.wikipedia.org/wiki/Spectroscopy
## Classification of methods Spectroscopy is a sufficiently broad field that many sub-disciplines exist, each with numerous implementations of specific spectroscopic techniques. The various implementations and techniques can be classified in several ways. ### Type of radiative energy The types of spectroscopy are distinguished by the type of radiative energy involved in the interaction. In many applications, the spectrum is determined by measuring changes in the intensity or frequency of this energy. The types of radiative energy studied include: - Electromagnetic radiation was the first source of energy used for spectroscopic studies. Techniques that employ electromagnetic radiation are typically classified by the wavelength region of the spectrum and include microwave, terahertz, infrared, near-infrared, ultraviolet-visible, x-ray, and gamma spectroscopy. - Particles, because of their de Broglie waves, can also be a source of radiative energy. Both electron and neutron spectroscopy are commonly used. For a particle, its kinetic energy determines its wavelength. - Acoustic spectroscopy involves radiated pressure waves. - Dynamic mechanical analysis can be employed to impart radiating energy, similar to acoustic waves, to solid materials. ### Nature of the interaction The types of spectroscopy also can be distinguished by the nature of the interaction between the energy and the material.
https://en.wikipedia.org/wiki/Spectroscopy
- Dynamic mechanical analysis can be employed to impart radiating energy, similar to acoustic waves, to solid materials. ### Nature of the interaction The types of spectroscopy also can be distinguished by the nature of the interaction between the energy and the material. These interactions include: - Absorption spectroscopy: Absorption occurs when energy from the radiative source is absorbed by the material. Absorption is often determined by measuring the fraction of energy transmitted through the material, with absorption decreasing the transmitted portion. - Emission spectroscopy: Emission indicates that radiative energy is released by the material. A material's blackbody spectrum is a spontaneous emission spectrum determined by its temperature. This feature can be measured in the infrared by instruments such as the atmospheric emitted radiance interferometer. Emission can also be induced by other sources of energy such as flames, sparks, electric arcs or electromagnetic radiation in the case of fluorescence. - Elastic scattering and reflection spectroscopy determine how incident radiation is reflected or scattered by a material. Crystallography employs the scattering of high energy radiation, such as x-rays and electrons, to examine the arrangement of atoms in proteins and solid crystals.
https://en.wikipedia.org/wiki/Spectroscopy
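To make the transmission measurement described above concrete, here is a minimal sketch (not from the article) that converts a measured transmitted fraction into absorbance using the common base-10 convention; the function and variable names are our own.

```python
import math

def absorbance(transmitted_intensity: float, incident_intensity: float) -> float:
    """Base-10 absorbance A = -log10(T), where T = I / I0 is the transmitted fraction."""
    transmittance = transmitted_intensity / incident_intensity
    return -math.log10(transmittance)

# Example: 25% of the incident light makes it through the sample.
print(absorbance(0.25, 1.0))  # ~0.602
```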
Emission can also be induced by other sources of energy such as flames, sparks, electric arcs or electromagnetic radiation in the case of fluorescence. - Elastic scattering and reflection spectroscopy determine how incident radiation is reflected or scattered by a material. Crystallography employs the scattering of high energy radiation, such as x-rays and electrons, to examine the arrangement of atoms in proteins and solid crystals. - Impedance spectroscopy: Impedance is the ability of a medium to impede or slow the transmittance of energy. For optical applications, this is characterized by the index of refraction. - Inelastic scattering phenomena involve an exchange of energy between the radiation and the matter that shifts the wavelength of the scattered radiation. These include Raman and Compton scattering. - Coherent or resonance spectroscopy comprises techniques in which the radiative energy couples two quantum states of the material in a coherent interaction that is sustained by the radiating field. The coherence can be disrupted by other interactions, such as particle collisions and energy transfer, so these techniques often require high-intensity radiation to be sustained.
https://en.wikipedia.org/wiki/Spectroscopy
These include Raman and Compton scattering. - Coherent or resonance spectroscopy comprises techniques in which the radiative energy couples two quantum states of the material in a coherent interaction that is sustained by the radiating field. The coherence can be disrupted by other interactions, such as particle collisions and energy transfer, so these techniques often require high-intensity radiation to be sustained. Nuclear magnetic resonance (NMR) spectroscopy is a widely used resonance method, and ultrafast laser spectroscopy is also possible in the infrared and visible spectral regions. - Nuclear spectroscopy comprises methods that use the properties of specific nuclei to probe the local structure in matter, mainly condensed matter, molecules in liquids or frozen liquids and bio-molecules. - Quantum logic spectroscopy is a general technique used in ion traps that enables precision spectroscopy of ions with internal structures that preclude laser cooling, state manipulation, and detection. Quantum logic operations enable a controllable ion to exchange information with a co-trapped ion that has a complex or unknown electronic structure. ### Type of material Spectroscopic studies are designed so that the radiant energy interacts with specific types of matter. #### Atoms Atomic spectroscopy was the first application of spectroscopy. Atomic absorption spectroscopy and atomic emission spectroscopy involve visible and ultraviolet light.
https://en.wikipedia.org/wiki/Spectroscopy
#### Atoms Atomic spectroscopy was the first application of spectroscopy. Atomic absorption spectroscopy and atomic emission spectroscopy involve visible and ultraviolet light. These absorptions and emissions, often referred to as atomic spectral lines, are due to electronic transitions of outer shell electrons as they rise and fall from one electron orbit to another. Atoms also have distinct x-ray spectra that are attributable to the excitation of inner shell electrons to excited states. Atoms of different elements have distinct spectra and therefore atomic spectroscopy allows for the identification and quantitation of a sample's elemental composition. After inventing the spectroscope, Robert Bunsen and Gustav Kirchhoff discovered new elements by observing their emission spectra. Atomic absorption lines are observed in the solar spectrum and referred to as Fraunhofer lines after their discoverer. A comprehensive explanation of the hydrogen spectrum was an early success of quantum mechanics and explained the Lamb shift observed in the hydrogen spectrum, which further led to the development of quantum electrodynamics. Modern implementations of atomic spectroscopy for studying visible and ultraviolet transitions include flame emission spectroscopy, inductively coupled plasma atomic emission spectroscopy, glow discharge spectroscopy, microwave induced plasma spectroscopy, and spark or arc emission spectroscopy.
https://en.wikipedia.org/wiki/Spectroscopy
A comprehensive explanation of the hydrogen spectrum was an early success of quantum mechanics and explained the Lamb shift observed in the hydrogen spectrum, which further led to the development of quantum electrodynamics. Modern implementations of atomic spectroscopy for studying visible and ultraviolet transitions include flame emission spectroscopy, inductively coupled plasma atomic emission spectroscopy, glow discharge spectroscopy, microwave induced plasma spectroscopy, and spark or arc emission spectroscopy. Techniques for studying x-ray spectra include X-ray spectroscopy and X-ray fluorescence. #### Molecules The combination of atoms into molecules leads to the creation of unique types of energetic states and therefore unique spectra of the transitions between these states. Molecular spectra can be obtained due to electron spin states (electron paramagnetic resonance), molecular rotations, molecular vibration, and electronic states. Rotations are collective motions of the atomic nuclei and typically lead to spectra in the microwave and millimetre-wave spectral regions. Rotational spectroscopy and microwave spectroscopy are synonymous. Vibrations are relative motions of the atomic nuclei and are studied by both infrared and Raman spectroscopy. Electronic excitations are studied using visible and ultraviolet spectroscopy as well as fluorescence spectroscopy.
https://en.wikipedia.org/wiki/Spectroscopy
Vibrations are relative motions of the atomic nuclei and are studied by both infrared and Raman spectroscopy. Electronic excitations are studied using visible and ultraviolet spectroscopy as well as fluorescence spectroscopy. Studies in molecular spectroscopy led to the development of the first maser and contributed to the subsequent development of the laser. #### Crystals and extended materials The combination of atoms or molecules into crystals or other extended forms leads to the creation of additional energetic states. These states are numerous and therefore have a high density of states. This high density often makes the spectra weaker and less distinct, i.e., broader. For instance, blackbody radiation is due to the thermal motions of atoms and molecules within a material. Acoustic and mechanical responses are due to collective motions as well. Pure crystals, though, can have distinct spectral transitions, and the crystal arrangement also has an effect on the observed molecular spectra. The regular lattice structure of crystals also scatters x-rays, electrons or neutrons allowing for crystallographic studies. #### Nuclei Nuclei also have distinct energy states that are widely separated and lead to gamma ray spectra. Distinct nuclear spin states can have their energy separated by a magnetic field, and this allows for nuclear magnetic resonance spectroscopy.
https://en.wikipedia.org/wiki/Spectroscopy
#### Nuclei Nuclei also have distinct energy states that are widely separated and lead to gamma ray spectra. Distinct nuclear spin states can have their energy separated by a magnetic field, and this allows for nuclear magnetic resonance spectroscopy. ## Other types Other types of spectroscopy are distinguished by specific applications or implementations: - Acoustic resonance spectroscopy is based on sound waves primarily in the audible and ultrasonic regions. - Auger electron spectroscopy is a method used to study surfaces of materials on a micro-scale. It is often used in connection with electron microscopy. - Cavity ring-down spectroscopy - Circular dichroism spectroscopy - Coherent anti-Stokes Raman spectroscopy is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging. - Cold vapour atomic fluorescence spectroscopy - Correlation spectroscopy encompasses several types of two-dimensional NMR spectroscopy. - Deep-level transient spectroscopy measures concentration and analyzes parameters of electrically active defects in semiconducting materials. - Dielectric spectroscopy - Dual-polarization interferometry measures the real and imaginary components of the complex refractive index. - Electron energy loss spectroscopy in transmission electron microscopy.
https://en.wikipedia.org/wiki/Spectroscopy
It is often used in connection with electron microscopy. - Cavity ring-down spectroscopy - Circular dichroism spectroscopy - Coherent anti-Stokes Raman spectroscopy is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging. - Cold vapour atomic fluorescence spectroscopy - Correlation spectroscopy encompasses several types of two-dimensional NMR spectroscopy. - Deep-level transient spectroscopy measures concentration and analyzes parameters of electrically active defects in semiconducting materials. - Dielectric spectroscopy - Dual-polarization interferometry measures the real and imaginary components of the complex refractive index. - Electron energy loss spectroscopy in transmission electron microscopy. - Electron phenomenological spectroscopy measures the physicochemical properties and characteristics of the electronic structure of multicomponent and complex molecular systems. - Electron paramagnetic resonance spectroscopy - Force spectroscopy - Fourier-transform spectroscopy is an efficient method for processing spectra data obtained using interferometers. Fourier-transform infrared spectroscopy is a common implementation of infrared spectroscopy. NMR also employs Fourier transforms. - Gamma spectroscopy - Hadron spectroscopy studies the energy/mass spectrum of hadrons according to spin, parity, and other particle properties.
https://en.wikipedia.org/wiki/Spectroscopy
Fourier-transform infrared spectroscopy is a common implementation of infrared spectroscopy. NMR also employs Fourier transforms. - Gamma spectroscopy - Hadron spectroscopy studies the energy/mass spectrum of hadrons according to spin, parity, and other particle properties. Baryon spectroscopy and meson spectroscopy are types of hadron spectroscopy. - Multispectral imaging and hyperspectral imaging are methods that create a complete picture of the environment or various objects, each pixel containing a full visible, visible near infrared, near infrared, or infrared spectrum. - Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic electron-vibration interactions at specific energies, and can also measure optically forbidden transitions. - Inelastic neutron scattering is similar to Raman spectroscopy, but uses neutrons instead of photons. - Laser-induced breakdown spectroscopy, also called laser-induced plasma spectrometry - Laser spectroscopy uses tunable lasers and other types of coherent emission sources, such as optical parametric oscillators, for selective excitation of atomic or molecular species. - Light scattering spectroscopy (LSS) is a spectroscopic technique typically used to evaluate morphological changes in epithelial cells in order to study mucosal tissue and detect early cancer and precancer.
https://en.wikipedia.org/wiki/Spectroscopy
- Multispectral imaging and hyperspectral imaging are methods that create a complete picture of the environment or various objects, each pixel containing a full visible, visible near infrared, near infrared, or infrared spectrum. - Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic electron-vibration interactions at specific energies, and can also measure optically forbidden transitions. - Inelastic neutron scattering is similar to Raman spectroscopy, but uses neutrons instead of photons. - Laser-induced breakdown spectroscopy, also called laser-induced plasma spectrometry - Laser spectroscopy uses tunable lasers and other types of coherent emission sources, such as optical parametric oscillators, for selective excitation of atomic or molecular species. - Light scattering spectroscopy (LSS) is a spectroscopic technique typically used to evaluate morphological changes in epithelial cells in order to study mucosal tissue and detect early cancer and precancer. - Mass spectroscopy is a historical term used to refer to mass spectrometry. The current recommendation is to use the latter term. The term "mass spectroscopy" originated in the use of phosphor screens to detect ions. - Mössbauer spectroscopy probes the properties of specific isotopic nuclei in different atomic environments by analyzing the resonant absorption of gamma rays.
https://en.wikipedia.org/wiki/Spectroscopy
The current recommendation is to use the latter term. The term "mass spectroscopy" originated in the use of phosphor screens to detect ions. - Mössbauer spectroscopy probes the properties of specific isotopic nuclei in different atomic environments by analyzing the resonant absorption of gamma rays. See also Mössbauer effect. - Multivariate optical computing is an all optical compressed sensing technique, generally used in harsh environments, that directly calculates chemical information from a spectrum as analogue output. - Neutron spin echo spectroscopy measures internal dynamics in proteins and other soft matter systems. - Nuclear quadrupole resonance is a chemical spectroscopy method mediated by NMR of the electric field gradient (EFG) in the absence of magnetic field - Perturbed angular correlation (PAC) uses radioactive nuclei as probe to study electric and magnetic fields (hyperfine interactions) in crystals (condensed matter) and bio-molecules. - Photoacoustic spectroscopy measures the sound waves produced upon the absorption of radiation. - Photoemission spectroscopy - Photothermal spectroscopy measures heat evolved upon absorption of radiation.
https://en.wikipedia.org/wiki/Spectroscopy
- Photoacoustic spectroscopy measures the sound waves produced upon the absorption of radiation. - Photoemission spectroscopy - Photothermal spectroscopy measures heat evolved upon absorption of radiation. - Pump-probe spectroscopy can use ultrafast laser pulses to measure reaction intermediates in the femtosecond timescale. - Raman optical activity spectroscopy exploits Raman scattering and optical activity effects to reveal detailed information on chiral centers in molecules. - Raman spectroscopy - Saturated spectroscopy - Scanning tunneling spectroscopy - Spectrophotometry - Spin noise spectroscopy traces spontaneous fluctuations of electronic and nuclear spins. - Time-resolved spectroscopy measures the decay rates of excited states using various spectroscopic methods. - Time-stretch spectroscopy - Thermal infrared spectroscopy measures thermal radiation emitted from materials and surfaces and is used to determine the type of bonds present in a sample as well as their lattice environment. The techniques are widely used by organic chemists, mineralogists, and planetary scientists. - Transient grating spectroscopy measures quasiparticle propagation.
https://en.wikipedia.org/wiki/Spectroscopy
The techniques are widely used by organic chemists, mineralogists, and planetary scientists. - Transient grating spectroscopy measures quasiparticle propagation. It can track changes in metallic materials as they are irradiated. - Ultraviolet photoelectron spectroscopy - Ultraviolet–visible spectroscopy - Vibrational circular dichroism spectroscopy - Video spectroscopy - X-ray photoelectron spectroscopy ## Applications There are several applications of spectroscopy in the fields of medicine, physics, chemistry, and astronomy. Taking advantage of the properties of absorbance and, in astronomy, emission, spectroscopy can be used to identify certain states of nature. The use of spectroscopy in so many different fields and for so many different applications has given rise to specialty scientific subfields. Such examples include: - Determining the atomic structure of a sample - Studying spectral emission lines of the sun and distant galaxies - Space exploration - Cure monitoring of composites using optical fibers. - Estimating weathered wood exposure times using near infrared spectroscopy. - Measurement of different compounds in food samples by absorption spectroscopy in both the visible and infrared spectrum. - Measurement of toxic compounds in blood samples - Non-destructive elemental analysis by X-ray fluorescence. - Electronic structure research with various spectroscopes.
https://en.wikipedia.org/wiki/Spectroscopy
- Measurement of different compounds in food samples by absorption spectroscopy in both the visible and infrared spectrum. - Measurement of toxic compounds in blood samples - Non-destructive elemental analysis by X-ray fluorescence. - Electronic structure research with various spectroscopes. - Redshift measurements to determine the velocity of a distant object - Determining the metabolic structure of a muscle - Monitoring dissolved oxygen content in freshwater and marine ecosystems - Altering the structure of drugs to improve effectiveness - Characterization of proteins - Respiratory gas analysis in hospitals - Finding the physical properties of a distant star or nearby exoplanet using the relativistic Doppler effect. - In-ovo sexing: spectroscopy makes it possible to determine the sex of a chick embryo inside the egg before it hatches. The technique was developed by French and German companies; both countries decided to ban chick culling, mostly done through a macerator, in 2022. - Process monitoring in industrial process control ## History The history of spectroscopy began with Isaac Newton's optics experiments (1666–1672). According to Andrew Fraknoi and David Morrison, "In 1672, in the first paper that he submitted to the Royal Society, Isaac Newton described an experiment in which he permitted sunlight to pass through a small hole and then through a prism. Newton found that sunlight, which looks white to us, is actually made up of a mixture of all the colors of the rainbow."
https://en.wikipedia.org/wiki/Spectroscopy
According to Andrew Fraknoi and David Morrison, "In 1672, in the first paper that he submitted to the Royal Society, Isaac Newton described an experiment in which he permitted sunlight to pass through a small hole and then through a prism. Newton found that sunlight, which looks white to us, is actually made up of a mixture of all the colors of the rainbow." Newton applied the word "spectrum" to describe the rainbow of colors that combine to form white light and that are revealed when the white light is passed through a prism. Fraknoi and Morrison state that "In 1802, William Hyde Wollaston built an improved spectrometer that included a lens to focus the Sun's spectrum on a screen. Upon use, Wollaston realized that the colors were not spread uniformly, but instead had missing patches of colors, which appeared as dark bands in the spectrum." During the early 1800s, Joseph von Fraunhofer made experimental advances with dispersive spectrometers that enabled spectroscopy to become a more precise and quantitative scientific technique. Since then, spectroscopy has played and continues to play a significant role in chemistry, physics, and astronomy. Per Fraknoi and Morrison, "Later, in 1815, German physicist Joseph Fraunhofer also examined the solar spectrum, and found about 600 such dark lines (missing colors), which are now known as Fraunhofer lines, or absorption lines."
https://en.wikipedia.org/wiki/Spectroscopy
Since then, spectroscopy has played and continues to play a significant role in chemistry, physics, and astronomy. Per Fraknoi and Morrison, "Later, in 1815, German physicist Joseph Fraunhofer also examined the solar spectrum, and found about 600 such dark lines (missing colors), which are now known as Fraunhofer lines, or absorption lines." In quantum mechanical systems, the analogous resonance is a coupling of two quantum mechanical stationary states of one system, such as an atom, via an oscillatory source of energy such as a photon. The coupling of the two states is strongest when the energy of the source matches the energy difference between the two states. The energy of a photon is related to its frequency by $$ E = h\nu $$, where $$ h $$ is the Planck constant, and so a spectrum of the system response vs. photon frequency will peak at the resonant frequency or energy. Particles such as electrons and neutrons have a comparable relationship, the de Broglie relations, between their kinetic energy and their wavelength and frequency and therefore can also excite resonant interactions. Spectra of atoms and molecules often consist of a series of spectral lines, each one representing a resonance between two different quantum states. The explanation of these series, and the spectral patterns associated with them, was one of the experimental enigmas that drove the development and acceptance of quantum mechanics.
https://en.wikipedia.org/wiki/Spectroscopy
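A small numerical sketch of the relations just quoted — the photon energy $$ E = h\nu = hc/\lambda $$ and the de Broglie wavelength $$ \lambda = h/p $$ of a massive particle. The chosen wavelength and electron energy are arbitrary illustrative values, not taken from the article.

```python
import math

h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
e = 1.602176634e-19    # elementary charge, C (also the J-per-eV conversion)
m_e = 9.1093837e-31    # electron mass, kg

# Photon energy for green light at 500 nm: E = h*c / wavelength
wavelength = 500e-9
E_photon = h * c / wavelength
print(f"500 nm photon: {E_photon / e:.2f} eV")   # ~2.48 eV

# de Broglie wavelength of an electron with 100 eV kinetic energy:
# p = sqrt(2*m*E_k), lambda = h / p
E_k = 100 * e
p = math.sqrt(2 * m_e * E_k)
print(f"100 eV electron: {h / p * 1e9:.3f} nm")  # ~0.123 nm
```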
Spectra of atoms and molecules often consist of a series of spectral lines, each one representing a resonance between two different quantum states. The explanation of these series, and the spectral patterns associated with them, was one of the experimental enigmas that drove the development and acceptance of quantum mechanics. The hydrogen spectral series in particular was first successfully explained by the Rutherford–Bohr quantum model of the hydrogen atom. In some cases spectral lines are well separated and distinguishable, but spectral lines can also overlap and appear to be a single transition if the density of energy states is high enough. Named series of lines include the principal, sharp, diffuse and fundamental series. ## DIY Spectroscopy Spectroscopy has emerged as a growing practice within the maker movement, enabling hobbyists and educators to construct functional spectrometers using readily available materials. Utilizing components like CD/DVD diffraction gratings, smartphones, and 3D-printed parts, these instruments offer a hands-on approach to understanding light and matter interactions. Smartphone applications, along with open-source tools, facilitate integration and greatly simplify the capture and analysis of spectral data. While limitations in resolution, calibration accuracy, and stray light management exist compared to professional equipment, DIY spectroscopy provides valuable educational experiences and contributes to citizen science initiatives, fostering accessibility to spectroscopic techniques.
https://en.wikipedia.org/wiki/Spectroscopy
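In the DIY setting described above, a common first step is wavelength calibration: mapping detector pixel positions of a few known reference lines to their wavelengths with a low-order polynomial fit. The sketch below assumes made-up pixel positions for three mercury lines from a fluorescent lamp; it is an illustration of the general idea, not a procedure prescribed by the article.

```python
import numpy as np

# Known reference wavelengths (nm) of common Hg lines, and the pixel
# columns where their peaks were (hypothetically) observed on the sensor.
reference_nm = np.array([435.8, 546.1, 578.0])
observed_px = np.array([312.0, 701.0, 813.0])

# Fit pixel -> wavelength; a 1st- or 2nd-order polynomial is typical
# for a simple grating spectrometer.
coeffs = np.polyfit(observed_px, reference_nm, deg=1)
pixel_to_nm = np.poly1d(coeffs)

# Convert a whole captured spectrum's pixel axis to wavelength.
pixels = np.arange(0, 1280)
wavelengths = pixel_to_nm(pixels)
print(f"dispersion ~ {coeffs[0]:.3f} nm/pixel")
print(f"pixel 500 ~ {pixel_to_nm(500):.1f} nm")
```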
Multi-level caches can be designed in various ways depending on whether the content of one cache is present in other levels of caches. If all blocks in the higher level cache are also present in the lower level cache, then the lower level cache is said to be inclusive of the higher level cache. If the lower level cache contains only blocks that are not present in the higher level cache, then the lower level cache is said to be exclusive of the higher level cache. If the contents of the lower level cache are neither strictly inclusive nor exclusive of the higher level cache, then it is called non-inclusive non-exclusive (NINE) cache. ## Inclusive Policy Consider an example of a two level cache hierarchy where L2 can be inclusive, exclusive or NINE of L1. Consider the case when L2 is inclusive of L1. Suppose there is a processor read request for block X. If the block is found in L1 cache, then the data is read from L1 cache and returned to the processor. If the block is not found in the L1 cache, but present in the L2 cache, then the cache block is fetched from the L2 cache and placed in L1. If this causes a block to be evicted from L1, there is no involvement of L2. If the block is not found in either L1 or L2, then it is fetched from the main memory and placed in both L1 and L2.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
If this causes a block to be evicted from L1, there is no involvement of L2. If the block is not found in either L1 or L2, then it is fetched from the main memory and placed in both L1 and L2. Now, if there is an eviction from L2, the L2 cache sends a back invalidation to the L1 cache, so that inclusion is not violated. As illustrated in Figure 1, initially consider both L1 and L2 caches to be empty (a). Assume that the processor sends a read X request. It will be a miss in both L1 and L2 and hence the block is brought into both L1 and L2 from the main memory as shown in (b). Now, assume the processor issues a read Y request which is a miss in both L1 and L2. So, block Y is placed in both L1 and L2 as shown in (c). If block X has to be evicted from L1, then it is removed from L1 only as shown in (d). If block Y has to be evicted from L2, it sends a back invalidation request to L1 and hence block Y is evicted from L1 as shown in (e). In order for inclusion to hold, certain conditions need to be satisfied. L2 associativity must be greater than or equal to L1 associativity irrespective of the number of sets.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
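The inclusive-policy walk-through above (read hits, fills into both levels, and back-invalidation on an L2 eviction) can be sketched in a few lines of Python. This is a toy model with fully associative levels and LRU replacement, chosen for brevity; it is not a description of any particular processor.

```python
from collections import OrderedDict

class InclusiveTwoLevelCache:
    """Toy two-level cache where L2 is inclusive of L1 (fully associative, LRU)."""

    def __init__(self, l1_blocks=2, l2_blocks=4):
        self.l1 = OrderedDict()   # maps block address -> None, kept in LRU order
        self.l2 = OrderedDict()
        self.l1_blocks, self.l2_blocks = l1_blocks, l2_blocks

    def read(self, block):
        if block in self.l1:                 # L1 hit
            self.l1.move_to_end(block)
            return "L1 hit"
        if block in self.l2:                 # L2 hit: fill L1 only
            self.l2.move_to_end(block)
            self._fill_l1(block)
            return "L2 hit"
        # Miss in both levels: fetch from memory and place in both L1 and L2.
        self._fill_l2(block)
        self._fill_l1(block)
        return "miss"

    def _fill_l1(self, block):
        if len(self.l1) >= self.l1_blocks:
            self.l1.popitem(last=False)      # L1 eviction: L2 is not involved
        self.l1[block] = None

    def _fill_l2(self, block):
        if len(self.l2) >= self.l2_blocks:
            victim, _ = self.l2.popitem(last=False)
            self.l1.pop(victim, None)        # back-invalidation preserves inclusion
        self.l2[block] = None

cache = InclusiveTwoLevelCache()
for addr in ["X", "Y", "Z", "W", "V"]:       # enough misses to force an L2 eviction
    print(addr, cache.read(addr))
print("L1:", list(cache.l1), "L2:", list(cache.l2))  # every L1 block is also in L2
```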
In order for inclusion to hold, certain conditions need to be satisfied. L2 associativity must be greater than or equal to L1 associativity irrespective of the number of sets. The number of L2 sets must be greater than or equal to the number of L1 sets irrespective of L2 associativity. All reference information from L1 is passed to L2 so that it can update its replacement bits. One example of inclusive cache is Intel quad core processor with 4x256KB L2 caches and 8MB (inclusive) L3 cache. ## Exclusive Policy Consider the case when L2 is exclusive of L1. Suppose there is a processor read request for block X. If the block is found in L1 cache, then the data is read from L1 cache and returned to the processor. If the block is not found in the L1 cache, but present in the L2 cache, then the cache block is moved from the L2 cache to the L1 cache. If this causes a block to be evicted from L1, the evicted block is then placed into L2. This is the only way L2 gets populated. Here, L2 behaves like a victim cache. If the block is not found in either L1 or L2, then it is fetched from main memory and placed just in L1 and not in L2.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
Here, L2 behaves like a victim cache. If the block is not found in either L1 or L2, then it is fetched from main memory and placed just in L1 and not in L2. As illustrated in Figure 2, initially consider both L1 and L2 caches to be empty (a). Assume that the processor sends a read X request. It will be a miss in both L1 and L2 and hence the block is brought into L1 from the main memory as shown in (b). Now, again the processor issues a read Y request which is a miss in both L1 and L2. So, block Y is placed in L1 as shown in (c). If block X has to be evicted from L1, then it is removed from L1 and placed in L2 as shown in (d). An example of exclusive cache is AMD Opteron with 512 KB (per core) L2 cache, exclusive of L1. ## NINE Policy Consider the case when L2 is non-inclusive non-exclusive of L1. Suppose there is a processor read request for block X. If the block is found in L1 cache, then the data is read from L1 cache and returned to the processor. If the block is not found in the L1 cache, but present in the L2 cache, then the cache block is fetched from the L2 cache and placed in L1.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
If the block is found in L1 cache, then the data is read from L1 cache and returned to the processor. If the block is not found in the L1 cache, but present in the L2 cache, then the cache block is fetched from the L2 cache and placed in L1. If this causes a block to be evicted from L1, there is no involvement of L2, which is the same as in the case of inclusive policy. If the block is not found in both L1 and L2, then it is fetched from main memory and placed in both L1 and L2. Now, if there is an eviction from L2, unlike inclusive policy, there is no back invalidation. As illustrated in Figure 3, initially consider both L1 and L2 caches to be empty (a). Assume that the processor sends a read X request. It will be a miss in both L1 and L2 and hence the block is brought into both L1 and L2 from the main memory as shown in (b). Now, again the processor issues a read Y request which is a miss in both L1 and L2. So, block Y is placed in both L1 and L2 as shown in (c). If block X has to be evicted from L1, then it is removed from L1 only as shown in (d).
https://en.wikipedia.org/wiki/Cache_inclusion_policy
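For contrast with the inclusive sketch above, the fill and eviction paths that distinguish the exclusive and NINE policies can be expressed the same way. This is again a toy, fully associative LRU model that captures only the policy-specific steps described in the text.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy L1/L2 model illustrating the 'exclusive' and 'nine' fill/evict paths."""

    def __init__(self, policy, l1_blocks=2, l2_blocks=4):
        assert policy in ("exclusive", "nine")
        self.policy = policy
        self.l1, self.l2 = OrderedDict(), OrderedDict()
        self.l1_blocks, self.l2_blocks = l1_blocks, l2_blocks

    def read(self, block):
        if block in self.l1:
            self.l1.move_to_end(block)
            return "L1 hit"
        if block in self.l2:
            if self.policy == "exclusive":
                del self.l2[block]           # block moves from L2 to L1
            self._fill_l1(block)
            return "L2 hit"
        # Miss in both levels: exclusive fills only L1, NINE fills both.
        if self.policy == "nine":
            self._fill(self.l2, self.l2_blocks, block)   # no back-invalidation on L2 eviction
        self._fill_l1(block)
        return "miss"

    def _fill_l1(self, block):
        if len(self.l1) >= self.l1_blocks:
            victim, _ = self.l1.popitem(last=False)
            if self.policy == "exclusive":
                self._fill(self.l2, self.l2_blocks, victim)  # L2 acts as a victim cache
        self.l1[block] = None

    @staticmethod
    def _fill(level, capacity, block):
        if len(level) >= capacity:
            level.popitem(last=False)
        level[block] = None

for policy in ("exclusive", "nine"):
    c = TwoLevelCache(policy)
    for addr in ["X", "Y", "Z", "X"]:
        c.read(addr)
    print(policy, "L1:", list(c.l1), "L2:", list(c.l2))
```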
So, block Y is placed in both L1 and L2 as shown in (c). If block X has to be evicted from L1, then it is removed from L1 only as shown in (d). If block Y has to be evicted from L2, it is evicted from L2 only as shown in (e). An example of non-inclusive non-exclusive cache is AMD Opteron with non-inclusive L3 cache of 6 MB (shared). ## Comparison The merit of the inclusive policy is that, in parallel systems with per-processor private caches, if there is a cache miss the other peer caches are checked for the block. If the lower level cache is inclusive of the higher level cache and it is a miss in the lower level cache, then the higher level cache need not be searched. This implies a shorter miss latency for an inclusive cache compared to exclusive and NINE. A drawback of an inclusive policy is that the unique memory capacity of the cache is determined by the lower level cache, unlike the exclusive cache, where the unique memory capacity is the combined capacity of all caches in the hierarchy. If the size of the lower level cache is small and comparable with the size of the higher level cache, there is more wasted cache capacity in inclusive caches.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
This is unlike the exclusive cache, where the unique memory capacity is the combined capacity of all caches in the hierarchy. If the size of the lower level cache is small and comparable with the size of the higher level cache, there is more wasted cache capacity in inclusive caches. Although the exclusive cache has more unique memory capacity, it uses more bandwidth since it suffers from a higher rate of filling of new blocks (equal to the rate of the higher level cache's misses), compared to a NINE cache, which is filled with a new block only when it suffers a miss. Therefore, cost must be weighed against benefit when choosing between inclusive, exclusive and NINE caches. Value Inclusion: It is not necessary for a block to have the same data values when it is cached in both higher and lower level caches even though inclusion is maintained. But, if the data values are the same, value inclusion is maintained. This depends on the write policy in use, as a write-back policy does not notify the lower level cache of the changes made to the block in the higher level cache. However, in the case of a write-through cache there is no such concern.
https://en.wikipedia.org/wiki/Cache_inclusion_policy
In solid-state physics of semiconductors, a band diagram is a diagram plotting various key electron energy levels (Fermi level and nearby energy band edges) as a function of some spatial dimension, which is often denoted x. These diagrams help to explain the operation of many kinds of semiconductor devices and to visualize how bands change with position (band bending). The bands may be coloured to distinguish level filling. A band diagram should not be confused with a band structure plot. In both a band diagram and a band structure plot, the vertical axis corresponds to the energy of an electron. The difference is that in a band structure plot the horizontal axis represents the wave vector of an electron in an infinitely large, homogeneous material (usually a crystal), whereas in a band diagram the horizontal axis represents position in space, usually passing through multiple materials. Because a band diagram shows the changes in the band structure from place to place, the resolution of a band diagram is limited by the Heisenberg uncertainty principle: the band structure relies on momentum, which is only precisely defined for large length scales. For this reason, the band diagram can only accurately depict evolution of band structures over long length scales, and has difficulty in showing the microscopic picture of sharp, atomic scale interfaces between different materials (or between a material and vacuum).
https://en.wikipedia.org/wiki/Band_diagram
Because a band diagram shows the changes in the band structure from place to place, the resolution of a band diagram is limited by the Heisenberg uncertainty principle: the band structure relies on momentum, which is only precisely defined for large length scales. For this reason, the band diagram can only accurately depict evolution of band structures over long length scales, and has difficulty in showing the microscopic picture of sharp, atomic scale interfaces between different materials (or between a material and vacuum). Typically, an interface must be depicted as a "black box", though its long-distance effects can be shown in the band diagram as asymptotic band bending. ## Anatomy The vertical axis of the band diagram represents the energy of an electron, which includes both kinetic and potential energy. The horizontal axis represents position, often not being drawn to scale. Note that the Heisenberg uncertainty principle prevents the band diagram from being drawn with a high positional resolution, since the band diagram shows energy bands (as resulting from a momentum-dependent band structure). While a basic band diagram only shows electron energy levels, often a band diagram will be decorated with further features. It is common to see cartoon depictions of the motion in energy and position of an electron (or electron hole) as it drifts, is excited by a light source, or relaxes from an excited state.
https://en.wikipedia.org/wiki/Band_diagram
While a basic band diagram only shows electron energy levels, often a band diagram will be decorated with further features. It is common to see cartoon depictions of the motion in energy and position of an electron (or electron hole) as it drifts, is excited by a light source, or relaxes from an excited state. The band diagram may be shown connected to a circuit diagram showing how bias voltages are applied, how charges flow, etc. The bands may be colored to indicate filling of energy levels, or sometimes the band gaps will be colored instead. ### Energy levels Depending on the material and the degree of detail desired, a variety of energy levels will be plotted against position: - EF or μ: Although it is not a band quantity, the Fermi level (total chemical potential of electrons) is a crucial level in the band diagram. The Fermi level is set by the device's electrodes. For a device at equilibrium, the Fermi level is a constant and thus will be shown in the band diagram as a flat line. Out of equilibrium (e.g., when voltage differences are applied), the Fermi level will not be flat.
https://en.wikipedia.org/wiki/Band_diagram
For a device at equilibrium, the Fermi level is a constant and thus will be shown in the band diagram as a flat line. Out of equilibrium (e.g., when voltage differences are applied), the Fermi level will not be flat. Furthermore, in semiconductors out of equilibrium it may be necessary to indicate multiple quasi-Fermi levels for different energy bands, whereas in an out-of-equilibrium insulator or vacuum it may not be possible to give a quasi-equilibrium description, and no Fermi level can be defined. - EC: The conduction band edge should be indicated in situations where electrons might be transported at the bottom of the conduction band, such as in an n-type semiconductor. The conduction band edge may also be indicated in an insulator, simply to demonstrate band bending effects. - EV: The valence band edge likewise should be indicated in situations where electrons (or holes) are transported through the top of the valence band such as in a p-type semiconductor. - Ei: The intrinsic Fermi level may be included in a semiconductor, to show where the Fermi level would have to be for the material to be neutrally doped (i.e., an equal number of mobile electrons and holes). - Eimp: Impurity energy level. Many defects and dopants add states inside the band gap of a semiconductor or insulator.
https://en.wikipedia.org/wiki/Band_diagram
Ei: The intrinsic Fermi level may be included in a semiconductor, to show where the Fermi level would have to be for the material to be neutrally doped (i.e., an equal number of mobile electrons and holes). - Eimp: Impurity energy level. Many defects and dopants add states inside the band gap of a semiconductor or insulator. It can be useful to plot their energy level to see whether they are ionized or not. - Evac: In a vacuum, the vacuum level shows the energy $$ -e\phi $$ , where $$ \phi $$ is the electrostatic potential. The vacuum can be considered as a sort of insulator, with Evac playing the role of the conduction band edge. At a vacuum-material interface, the vacuum energy level is fixed by the sum of work function and Fermi level of the material. - Electron affinity level: Occasionally, a "vacuum level" is plotted even inside materials, at a fixed height above the conduction band, determined by the electron affinity. This "vacuum level" does not correspond to any actual energy band and is poorly defined (electron affinity strictly speaking is a surface, not bulk, property); however, it may be a helpful guide in the use of approximations such as Anderson's rule or the Schottky–Mott rule.
https://en.wikipedia.org/wiki/Band_diagram
At a vacuum-material interface, the vacuum energy level is fixed by the sum of work function and Fermi level of the material. - Electron affinity level: Occasionally, a "vacuum level" is plotted even inside materials, at a fixed height above the conduction band, determined by the electron affinity. This "vacuum level" does not correspond to any actual energy band and is poorly defined (electron affinity strictly speaking is a surface, not bulk, property); however, it may be a helpful guide in the use of approximations such as Anderson's rule or the Schottky–Mott rule. ## Band bending When looking at a band diagram, the electron energy states (bands) in a material can curve up or down near a junction. This effect is known as band bending. It does not correspond to any physical (spatial) bending. Rather, band bending refers to the local changes in electronic structure, in the energy offset of a semiconductor's band structure near a junction, due to space charge effects. The primary principle underlying band bending inside a semiconductor is space charge: a local imbalance in charge neutrality. Poisson's equation gives a curvature to the bands wherever there is an imbalance in charge neutrality.
https://en.wikipedia.org/wiki/Band_diagram
The primary principle underlying band bending inside a semiconductor is space charge: a local imbalance in charge neutrality. Poisson's equation gives a curvature to the bands wherever there is an imbalance in charge neutrality. The reason for the charge imbalance is that, although a homogeneous material is charge neutral everywhere (since it must be charge neutral on average), there is no such requirement for interfaces. Practically all types of interface develop a charge imbalance, though for different reasons: - At the junction of two different types of the same semiconductor (e.g., p-n junction) the bands vary continuously since the dopants are sparsely distributed and only perturb the system. - At the junction of two different semiconductors there is a sharp shift in band energies from one material to the other; the band alignment at the junction (e.g., the difference in conduction band energies) is fixed. - At the junction of a semiconductor and metal, the bands of the semiconductor are pinned to the metal's Fermi level. - At the junction of a conductor and vacuum, the vacuum level (from vacuum electrostatic potential) is set by the material's work function and Fermi level. This also (usually) applies for the junction of a conductor to an insulator.
https://en.wikipedia.org/wiki/Band_diagram
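As a numerical illustration of how Poisson's equation produces band bending, the sketch below uses the standard depletion approximation for an n-type region next to a junction (uniform donor density, fully depleted region of width W). The material numbers are generic silicon-like values chosen for illustration; none of them come from the article.

```python
import numpy as np

q = 1.602e-19            # elementary charge, C
eps0 = 8.854e-12         # vacuum permittivity, F/m
eps_s = 11.7 * eps0      # silicon-like permittivity (illustrative)
N_d = 1e22               # donor density, m^-3 (1e16 cm^-3, illustrative)
V_bi = 0.7               # built-in potential dropped across the region, V (illustrative)

# Depletion width obtained by integrating Poisson's equation twice:
W = np.sqrt(2 * eps_s * V_bi / (q * N_d))

# Conduction-band edge relative to the neutral bulk, in eV, for 0 <= x <= W:
#   E_C(x) - E_C(bulk) = (q^2 * N_d / (2 * eps)) * (W - x)^2
x = np.linspace(0, W, 6)
dE_C = (q**2 * N_d / (2 * eps_s)) * (W - x) ** 2 / q   # divide by q to convert J -> eV

for xi, dE in zip(x, dE_C):
    print(f"x = {xi*1e9:6.1f} nm   band bending = {dE:.3f} eV")
# The bending equals V_bi (in eV) at the interface and decays quadratically to zero at x = W.
```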
At the junction of a conductor and vacuum, the vacuum level (from vacuum electrostatic potential) is set by the material's work function and Fermi level. This also (usually) applies for the junction of a conductor to an insulator. Knowing how bands will bend when two different types of materials are brought in contact is key to understanding whether the junction will be rectifying (Schottky) or ohmic. The degree of band bending depends on the relative Fermi levels and carrier concentrations of the materials forming the junction. In an n-type semiconductor the band bends upward, while in p-type the band bends downward. Note that band bending is due neither to magnetic field nor temperature gradient. Rather, it only arises in conjunction with the force of the electric field.
https://en.wikipedia.org/wiki/Band_diagram
In statistics, a confidence interval (CI) is a range of values used to estimate an unknown statistical parameter, such as a population mean. Rather than reporting a single point estimate (e.g. "the average screen time is 3 hours per day"), a confidence interval provides a range, such as 2 to 4 hours, along with a specified confidence level, typically 95%. This indicates that if the same sampling procedure were repeated 100 times, approximately 95 of the resulting intervals would be expected to contain the true population mean. A 95% confidence level does not imply a 95% probability that the true parameter lies within a particular calculated interval. The confidence level instead reflects the long-run reliability of the method used to generate the interval. ## History Methods for calculating confidence intervals for the binomial proportion appeared from the 1920s (C. J. Clopper, E. S. Pearson, "The use of confidence or fiducial limits illustrated in the case of the binomial", Biometrika 26(4), 1934, pages 404–413, https://doi.org/10.1093/biomet/26.4.404). The main ideas of confidence intervals in general were developed in the early 1930s (J. Neyman (1935), Ann. Math. Statist. 6(3): 111–116, September 1935).
https://en.wikipedia.org/wiki/Confidence_interval
(J. Neyman (1935), Ann. Math. Statist. 6(3): 111–116, September 1935, https://doi.org/10.1214/aoms/1177732585), and the first thorough and general account was given by Jerzy Neyman in 1937. Neyman described the development of the ideas as follows (reference numbers have been changed): [My work on confidence intervals] originated about 1930 from a simple question of Waclaw Pytkowski, then my student in Warsaw, engaged in an empirical study in farm economics. The question was: how to characterize non-dogmatically the precision of an estimated regression coefficient? ... Pytkowski's monograph ... appeared in print in 1932 (Pytkowski, W., The dependence of the income in small farms upon their area, the outlay and the capital invested in cows. Polish, English summary. Bibliotaka Palawska, 1932). It so happened that, somewhat earlier, Fisher published his first paper concerned with fiducial distributions and fiducial argument. Quite unexpectedly, while the conceptual framework of fiducial argument is entirely different from that of confidence intervals, the specific solutions of several particular problems coincided. Thus, in the first paper in which I presented the theory of confidence intervals, published in 1934, I recognized Fisher's priority for the idea that interval estimation is possible without any reference to Bayes' theorem and with the solution being independent from probabilities a priori.
https://en.wikipedia.org/wiki/Confidence_interval
Quite unexpectedly, while the conceptual framework of fiducial argument is entirely different from that of confidence intervals, the specific solutions of several particular problems coincided. Thus, in the first paper in which I presented the theory of confidence intervals, published in 1934, I recognized Fisher's priority for the idea that interval estimation is possible without any reference to Bayes' theorem and with the solution being independent from probabilities a priori. At the same time I mildly suggested that Fisher's approach to the problem involved a minor misunderstanding. In medical journals, confidence intervals were promoted in the 1970s but only became widely used in the 1980s. By 1988, medical journals were requiring the reporting of confidence intervals. ## Definition Let $$ X $$ be a random sample from a probability distribution with statistical parameter $$ (\theta, \varphi) $$ . Here, $$ \theta $$ is the quantity to be estimated, while $$ \varphi $$ includes other parameters (if any) that determine the distribution.
https://en.wikipedia.org/wiki/Confidence_interval
## Definition Let $$ X $$ be a random sample from a probability distribution with statistical parameter $$ (\theta, \varphi) $$ . Here, $$ \theta $$ is the quantity to be estimated, while $$ \varphi $$ includes other parameters (if any) that determine the distribution. A confidence interval for the parameter $$ \theta $$ , with confidence level or coefficient $$ \gamma $$ , is an interval $$ (u(X), v(X)) $$ determined by random variables $$ u(X) $$ and $$ v(X) $$ with the property: $$ P(u(X) < \theta < v(X)) = \gamma \quad \text{for all }(\theta, \varphi). $$ The number $$ \gamma $$ , whose typical value is close to but not greater than 1, is sometimes given in the form $$ 1 - \alpha $$ (or as a percentage $$ 100\%\cdot(1 - \alpha) $$ ), where $$ \alpha $$ is a small positive number, often 0.05. It means that the interval $$ (u(X), v(X)) $$ has a probability $$ \gamma $$ of covering the value of $$ \theta $$ in repeated sampling.
https://en.wikipedia.org/wiki/Confidence_interval
A confidence interval for the parameter $$ \theta $$ , with confidence level or coefficient $$ \gamma $$ , is an interval $$ (u(X), v(X)) $$ determined by random variables $$ u(X) $$ and $$ v(X) $$ with the property: $$ P(u(X) < \theta < v(X)) = \gamma \quad \text{for all }(\theta, \varphi). $$ The number $$ \gamma $$ , whose typical value is close to but not greater than 1, is sometimes given in the form $$ 1 - \alpha $$ (or as a percentage $$ 100\%\cdot(1 - \alpha) $$ ), where $$ \alpha $$ is a small positive number, often 0.05. It means that the interval $$ (u(X), v(X)) $$ has a probability $$ \gamma $$ of covering the value of $$ \theta $$ in repeated sampling. In many applications, confidence intervals that have exactly the required confidence level are hard to construct, but approximate intervals can be computed. The rule for constructing the interval may be accepted if $$ P(u(X) < \theta<v(X)) \approx\ \gamma $$ to an acceptable level of approximation.
https://en.wikipedia.org/wiki/Confidence_interval
In many applications, confidence intervals that have exactly the required confidence level are hard to construct, but approximate intervals can be computed. The rule for constructing the interval may be accepted if $$ P(u(X) < \theta<v(X)) \approx\ \gamma $$ to an acceptable level of approximation. Alternatively, some authors simply require that $$ P(u(X) < \theta < v(X)) \ge\ \gamma $$ When it is known that the coverage probability can be strictly larger than $$ \gamma $$ for some parameter values, the confidence interval is called conservative, i.e., it errs on the safe side; which also means that the interval can be wider than need be. ### Methods of derivation There are many ways of calculating confidence intervals, and the best method depends on the situation. Two widely applicable methods are bootstrapping and the central limit theorem.
https://en.wikipedia.org/wiki/Confidence_interval
### Methods of derivation There are many ways of calculating confidence intervals, and the best method depends on the situation. Two widely applicable methods are bootstrapping and the central limit theorem. The latter method works only if the sample is large, since it entails calculating the sample mean $$ \bar{X}_n $$ and sample standard deviation $$ S_n $$ and assuming that the quantity $$ \frac{\bar{X}_n - \mu}{S_n / \sqrt{n}} $$ is normally distributed, where $$ \mu $$ and $$ n $$ are the population mean and the sample size, respectively. ## Example Suppose $$ X_1, \ldots, X_n $$ is an independent sample from a normally distributed population with unknown parameters mean $$ \mu $$ and variance $$ \sigma^2. $$
https://en.wikipedia.org/wiki/Confidence_interval
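As a sketch of the first of the two methods just mentioned, the code below computes a bootstrap percentile confidence interval for a mean with NumPy. The data are synthetic, and the percentile method shown here is only one of several bootstrap constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=3.0, size=40)   # synthetic skewed data, true mean = 3

def bootstrap_ci_mean(data, level=0.95, n_resamples=10_000, rng=rng):
    """Percentile bootstrap CI for the mean: resample with replacement, take quantiles."""
    means = np.array([
        rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(n_resamples)
    ])
    alpha = 1 - level
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci_mean(sample)
print(f"sample mean = {sample.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```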
The latter method works only if the sample is large, since it entails calculating the sample mean $$ \bar{X}_n $$ and sample standard deviation $$ S_n $$ and assuming that the quantity $$ \frac{\bar{X}_n - \mu}{S_n / \sqrt{n}} $$ is normally distributed, where $$ \mu $$ and $$ n $$ are the population mean and the sample size, respectively. ## Example Suppose $$ X_1, \ldots, X_n $$ is an independent sample from a normally distributed population with unknown parameters mean $$ \mu $$ and variance $$ \sigma^2. $$ Define the sample mean $$ \bar{X} $$ and unbiased sample variance $$ S^2 $$ as $$ \bar{X} = \frac{X_1 + \cdots + X_n}{n}, $$ $$ S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2. $$ Then the value $$ T = \frac{\bar{X} - \mu}{S/\sqrt{n}} $$ has a Student's t distribution with $$ n - 1 $$ degrees of freedom.
https://en.wikipedia.org/wiki/Confidence_interval
## Example Suppose $$ X_1, \ldots, X_n $$ is an independent sample from a normally distributed population with unknown parameters mean $$ \mu $$ and variance $$ \sigma^2. $$ Define the sample mean $$ \bar{X} $$ and unbiased sample variance $$ S^2 $$ as $$ \bar{X} = \frac{X_1 + \cdots + X_n}{n}, $$ $$ S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2. $$ Then the value $$ T = \frac{\bar{X} - \mu}{S/\sqrt{n}} $$ has a Student's t distribution with $$ n - 1 $$ degrees of freedom. This value is useful because its distribution does not depend on the values of the unobservable parameters $$ \mu $$ and $$ \sigma^2 $$ ; i.e., it is a pivotal quantity. Suppose we wanted to calculate a 95% confidence interval for $$ \mu. $$ First, let $$ c $$ be the 97.5th percentile of the distribution of $$ T $$ .
https://en.wikipedia.org/wiki/Confidence_interval
Then there is a 2.5% chance that $$ T $$ will be less than $$ -c $$ and a 2.5% chance that it will be larger than $$ +c $$ (as the t distribution is symmetric about 0). In other words, $$ P_T(-c \leq T \leq c) = 0.95. $$ Consequently, by replacing $$ T $$ with $$ \frac{\bar{X} - \mu}{S/\sqrt{n}} $$ and re-arranging terms, $$ P_X\left(\bar{X} - \frac{cS}{\sqrt{n}} \leq \mu \leq \bar{X} + \frac{cS}{\sqrt{n}}\right) = 0.95, $$ where $$ P_X $$ is the probability measure for the sample $$ X_1, \ldots, X_n $$ .
https://en.wikipedia.org/wiki/Confidence_interval
It means that, in repeated sampling, the condition $$ \bar{X} - \frac{cS}{\sqrt{n}} \leq \mu \leq \bar{X} + \frac{cS}{\sqrt{n}} $$ holds with probability 95%.
https://en.wikipedia.org/wiki/Confidence_interval
After observing a sample, we find values $$ \bar{x} $$ for $$ \bar{X} $$ and $$ s $$ for $$ S, $$ from which we compute the interval below, and we say it is a 95% confidence interval for the mean. $$ \left[\bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}}\right]. $$
https://en.wikipedia.org/wiki/Confidence_interval
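A minimal sketch of the worked example above, assuming SciPy for the Student's t quantile; the 25 simulated measurements are hypothetical stand-ins for real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=37.0, scale=1.5, size=25)   # hypothetical sample of n = 25 measurements

n = x.size
x_bar = x.mean()
s = x.std(ddof=1)                              # unbiased sample standard deviation
c = stats.t.ppf(0.975, df=n - 1)               # 97.5th percentile of Student's t with n-1 df

half_width = c * s / np.sqrt(n)
print(f"95% CI for the mean: ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
```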
## Interpretation Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following).
https://en.wikipedia.org/wiki/Confidence_interval
- The confidence interval can be expressed in terms of a long-run frequency in repeated samples (or in resampling): "Were this procedure to be repeated on numerous samples, the proportion of calculated 95% confidence intervals that encompassed the true value of the population parameter would tend toward 95%." - The confidence interval can be expressed in terms of probability with respect to a single theoretical (yet to be realized) sample: "There is a 95% probability that the 95% confidence interval calculated from a given future sample will cover the true value of the population parameter." This essentially reframes the "repeated samples" interpretation as a probability rather than a frequency.
https://en.wikipedia.org/wiki/Confidence_interval
- The confidence interval can be expressed in terms of statistical significance, e.g.: "The 95% confidence interval represents values that are not statistically significantly different from the point estimate at the .05 level." ### Common misunderstandings Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them. - A 95% confidence level does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval. - A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval. - A 95% confidence level does not mean that there is a 95% probability of the parameter estimate from a repeat of the experiment falling within the confidence interval computed from a given experiment.
https://en.wikipedia.org/wiki/Confidence_interval
For example, suppose a factory produces metal rods. A random sample of 25 rods gives a 95% confidence interval for the population mean length of 36.8 to 39.0 mm. - It is incorrect to say that there is a 95% probability that the true population mean lies within this interval, because the true mean is fixed, not random. For example, it might be 37 mm, which is within the confidence interval, or 40 mm, which is not; in any case, whether it falls between 36.8 and 39.0 mm is a matter of fact, not probability. - It is not necessarily true that the lengths of 95% of the sampled rods lie within this interval. In this case, it cannot be true: 95% of 25 is not an integer. - It is incorrect to say that if we took a second sample, there is a 95% probability that the sample mean length (an estimate of the population mean length) would fall within this interval. In fact, if the true mean length is far from this specific confidence interval, it could be very unlikely that the next sample mean falls within the interval.
https://en.wikipedia.org/wiki/Confidence_interval
Instead, the 95% confidence level means that if we took 100 such samples, we would expect the true population mean to lie within approximately 95 of the calculated intervals. ### Comparison with prediction intervals A confidence interval is used to estimate a population parameter, such as the mean. For example, the expected value of a fair six-sided die is 3.5. Based on repeated sampling, after computing many 95% confidence intervals, roughly 95% of them will contain 3.5. A prediction interval, on the other hand, provides a range within which a future individual observation is expected to fall with a certain probability. In the case of a single roll of a fair six-sided die, the outcome will always lie between 1 and 6. Thus, a 95% prediction interval for a future roll is approximately [1, 6], since this range captures the inherent variability of individual outcomes. The key distinction is that confidence intervals quantify uncertainty in estimating parameters, while prediction intervals quantify uncertainty in forecasting future observations.
https://en.wikipedia.org/wiki/Confidence_interval
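A minimal simulation sketch of this long-run coverage claim, assuming NumPy and SciPy; the true mean, standard deviation and sample size below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Simulate the long-run coverage of a 95% t-interval for a known true mean.
# (All numbers here -- true mean 37 mm, sd 1.5 mm, n = 25 -- are illustrative.)
rng = np.random.default_rng(3)
true_mean, sd, n, trials = 37.0, 1.5, 25, 10_000
c = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(trials):
    x = rng.normal(true_mean, sd, size=n)
    half = c * x.std(ddof=1) / np.sqrt(n)
    if x.mean() - half <= true_mean <= x.mean() + half:
        covered += 1

print(f"empirical coverage: {covered / trials:.3f}")   # close to 0.95
```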
### Comparison with credible intervals In many common settings, such as estimating the mean of a normal distribution with known variance, confidence intervals coincide with credible intervals under non-informative priors. In such cases, common misconceptions about confidence intervals (e.g. interpreting them as probability statements about the parameter) may yield practically correct conclusions. ### Examples of how naïve interpretation of confidence intervals can be problematic #### Confidence procedure for uniform location Welch presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's fiducial intervals and objective Bayesian intervals). Robinson called this example "[p]ossibly the best known counterexample for Neyman's version of confidence interval theory." To Welch, it showed the superiority of confidence interval theory; to critics of the theory, it shows a deficiency. Here we present a simplified version.
https://en.wikipedia.org/wiki/Confidence_interval
Suppose that $$ X_1, X_2 $$ are independent observations from a uniform $$ (\theta - 1/2, \theta + 1/2) $$ distribution. Then the optimal 50% confidence procedure for $$ \theta $$ is $$ \bar{X} \pm \begin{cases} \dfrac{|X_1-X_2|}{2} & \text{if } |X_1-X_2| < 1/2 \\[8pt] \dfrac{1-|X_1-X_2|}{2} &\text{if } |X_1-X_2| \geq 1/2. \end{cases} $$ A fiducial or objective Bayesian argument can be used to derive the interval estimate $$ \bar{X} \pm \frac{1-|X_1-X_2|}{4}, $$ which is also a 50% confidence procedure. Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory; for every $$ \theta_1\neq\theta $$ , the probability that the first procedure contains $$ \theta_1 $$ is less than or equal to the probability that the second procedure contains $$ \theta_1 $$ . The average width of the intervals from the first procedure is less than that of the second.
https://en.wikipedia.org/wiki/Confidence_interval
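A minimal simulation sketch of the two procedures above, assuming NumPy; the choice of theta = 0 and the trial count are arbitrary, and coverage does not depend on theta.

```python
import numpy as np

# Empirical coverage of the two 50% procedures from the uniform-location example.
rng = np.random.default_rng(4)
theta, trials = 0.0, 200_000
x1 = rng.uniform(theta - 0.5, theta + 0.5, trials)
x2 = rng.uniform(theta - 0.5, theta + 0.5, trials)
xbar, d = (x1 + x2) / 2, np.abs(x1 - x2)

half1 = np.where(d < 0.5, d / 2, (1 - d) / 2)   # first (optimal) procedure
half2 = (1 - d) / 4                             # fiducial / objective Bayesian procedure

cover1 = np.abs(xbar - theta) <= half1
cover2 = np.abs(xbar - theta) <= half2
print(cover1.mean(), cover2.mean())             # both are close to 0.50
print(cover1[d >= 0.5].mean())                  # 1.0: such intervals always contain theta
```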
Hence, the first procedure is preferred under classical confidence interval theory. However, when $$ |X_1-X_2| \geq 1/2 $$ , intervals from the first procedure are guaranteed to contain the true value $$ \theta $$ ; therefore, the nominal 50% confidence coefficient is unrelated to the uncertainty we should have that a specific interval contains the true value. The second procedure does not have this property. Moreover, when the first procedure generates a very short interval, this indicates that $$ X_1,X_2 $$ are very close together and hence offer only the information of a single data point. Yet the first interval will exclude almost all reasonable values of the parameter due to its short width. The second procedure does not have this property.
https://en.wikipedia.org/wiki/Confidence_interval
The two counter-intuitive properties of the first procedure – 100% coverage when $$ X_1,X_2 $$ are far apart and almost 0% coverage when $$ X_1,X_2 $$ are close together – balance out to yield 50% coverage on average. However, despite the first procedure being optimal, its intervals offer neither an assessment of the precision of the estimate nor an assessment of the uncertainty one should have that the interval contains the true value. This example is used to argue against naïve interpretations of confidence intervals. If a confidence procedure is asserted to have properties beyond that of the nominal coverage (such as relation to precision, or a relationship with Bayesian inference), those properties must be proved; they do not follow from the fact that a procedure is a confidence procedure. #### Confidence procedure for ω2 Steiger suggested a number of confidence procedures for common effect size measures in ANOVA.
https://en.wikipedia.org/wiki/Confidence_interval
Morey et al. point out that several of these confidence procedures, including the one for ω2, have the property that as the F statistic becomes increasingly small—indicating misfit with all possible values of ω2—the confidence interval shrinks and can even contain only the single value ω2 = 0; that is, the CI is infinitesimally narrow (this occurs when $$ p\geq1-\alpha/2 $$ for a $$ 100(1-\alpha)\% $$ CI). This behavior is consistent with the relationship between the confidence procedure and significance testing: as F becomes so small that the group means are much closer together than we would expect by chance, a significance test might indicate rejection for most or all values of ω2. Hence the interval will be very narrow or even empty (or, by a convention suggested by Steiger, containing only 0). However, this does not indicate that the estimate of ω2 is very precise. In a sense, it indicates the opposite: that the trustworthiness of the results themselves may be in doubt.
https://en.wikipedia.org/wiki/Confidence_interval
This is contrary to the common interpretation of confidence intervals that they reveal the precision of the estimate. ## Confidence interval for specific distributions - Confidence interval for binomial distribution - Confidence interval for exponent of the power law distribution - Confidence interval for mean of the exponential distribution - Confidence interval for mean of the Poisson distribution - Confidence intervals for mean and variance of the normal distribution - Confidence interval for the parameters of a simple linear regression - Confidence interval for the difference of means (based on data from normal distributions, without assuming equal variances) - Confidence interval for the difference between two proportions
https://en.wikipedia.org/wiki/Confidence_interval
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if $$ X $$ is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of $$ X $$ would take the value 0.5 (1 in 2 or 1/2) for $$ X = \text{heads} $$ , and 0.5 for $$ X = \text{tails} $$ (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values. Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names. ## Introduction A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often represented in notation by $$ \ \Omega\ , $$ is the set of all possible outcomes of a random phenomenon being observed. The sample space may be any set: a set of real numbers, a set of descriptive labels, a set of vectors, a set of arbitrary non-numerical values, etc.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
For example, the sample space of a coin flip could be $$ \Omega = \{\text{heads}, \text{tails}\} $$ . To define probability distributions for the specific case of random variables (so the sample space can be seen as a numeric set), it is common to distinguish between discrete and continuous random variables. In the discrete case, it is sufficient to specify a probability mass function $$ p $$ assigning a probability to each possible outcome (e.g. when throwing a fair die, each of the six digits 1 to 6, corresponding to the number of dots on the die, has probability $$ \tfrac{1}{6} $$ ).
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
The probability of an event is then defined to be the sum of the probabilities of all outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is $$ p(\text{“}2\text{”}) + p(\text{“}4\text{”}) + p(\text{“}6\text{”}) = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2}. $$ In contrast, when a random variable takes values from a continuum then by convention, any individual outcome is assigned probability zero. For such continuous random variables, only events that include infinitely many outcomes such as intervals have probability greater than 0.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
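A minimal sketch of this event-probability-as-a-sum rule for the fair-die example; plain Python fractions are used so the arithmetic stays exact.

```python
from fractions import Fraction

# Probability mass function of a fair six-sided die
pmf = {face: Fraction(1, 6) for face in range(1, 7)}

# The probability of an event is the sum of the probabilities of the outcomes in it
def prob(event):
    return sum(pmf[face] for face in event if face in pmf)

print(prob({2, 4, 6}))    # 1/2 -- "the die rolls an even value"
print(sum(pmf.values()))  # 1   -- the pmf sums to one
```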
For example, consider measuring the weight of a piece of ham in the supermarket, and assume the scale can provide arbitrarily many digits of precision. Then, the probability that it weighs exactly 500 g must be zero, because no matter how many digits of precision are reported, there is no guarantee that all of the omitted digits are zero. However, for the same use case, it is possible to meet quality control requirements such as that a package of "500 g" of ham must weigh between 490 g and 510 g with at least 98% probability.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
This is possible because this measurement does not require as much precision from the underlying equipment. Continuous probability distributions can be described by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., $$ P(X \leq x) $$ for some $$ x $$ ). The cumulative distribution function is the area under the probability density function from $$ -\infty $$ to $$ x $$ . Most continuous probability distributions encountered in practice are not only continuous but also absolutely continuous. Such distributions can be described by their probability density function. Informally, the probability density $$ f $$ of a random variable $$ X $$ describes the infinitesimal probability that $$ X $$ takes any value $$ x $$ — that is $$ P(x \leq X < x + \Delta x) \approx f(x) \, \Delta x $$ as $$ \Delta x $$ becomes arbitrarily small.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
The probability that $$ X $$ lies in a given interval can be computed rigorously by integrating the probability density function over that interval. ## General probability definition Let $$ (\Omega, \mathcal{F}, P) $$ be a probability space, $$ (E, \mathcal{E}) $$ be a measurable space, and $$ X : \Omega \to E $$ be an $$ (E, \mathcal{E}) $$ -valued random variable. Then the probability distribution of $$ X $$ is the pushforward measure of the probability measure $$ P $$ onto $$ (E, \mathcal{E}) $$ induced by $$ X $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
Explicitly, this pushforward measure on $$ (E, \mathcal{E}) $$ is given by $$ X_{*} (P) (B) = P \left( X^{-1} (B) \right) $$ for $$ B \in \mathcal{E}. $$ Any probability distribution is a probability measure on $$ (E, \mathcal{E}) $$ (in general different from $$ P $$ , unless $$ X $$ happens to be the identity map). A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
One of the most general descriptions, which applies for absolutely continuous and discrete variables, is by means of a probability function $$ P \colon \mathcal{A} \to \Reals $$ whose input space $$ \mathcal{A} $$ is a σ-algebra, and which gives a real number probability as its output, specifically a number in $$ [0,1] \subseteq \Reals $$ . The probability function $$ P $$ can take as argument subsets of the sample space itself, as in the coin toss example, where the function $$ P $$ was defined so that $$ P(\text{heads}) = 0.5 $$ and $$ P(\text{tails}) = 0.5 $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
However, because of the widespread use of random variables, which transform the sample space into a set of numbers (e.g., $$ \R $$ , $$ \N $$ ), it is more common to study probability distributions whose arguments are subsets of these particular kinds of sets (number sets), and all probability distributions discussed in this article are of this type. It is common to denote as $$ P(X \in E) $$ the probability that a certain value of the variable $$ X $$ belongs to a certain event $$ E $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is: 1. $$ P(X \in E) \ge 0 \; \forall E \in \mathcal{A} $$ , so the probability is non-negative; 2. $$ P(X \in E) \le 1 \; \forall E \in \mathcal{A} $$ , so no probability exceeds $$ 1 $$ ; 3. $$ P(X \in \bigcup_{i} E_i ) = \sum_i P(X \in E_i) $$ for any countable disjoint family of sets $$ \{ E_i \} $$ . The concept of probability function is made more rigorous by defining it as the element of a probability space $$ (X, \mathcal{A}, P) $$ , where $$ X $$ is the set of possible outcomes, $$ \mathcal{A} $$ is the set of all subsets $$ E \subset X $$ whose probability can be measured, and $$ P $$ is the probability function, or probability measure, that assigns a probability to each of these measurable subsets $$ E \in \mathcal{A} $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
Probability distributions usually belong to one of two classes.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as a probability mass function.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution. Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function. ## Terminology Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
### Basic terms - Random variable: takes values from a sample space; probabilities describe which values and sets of values are more likely taken. - Event: set of possible values (outcomes) of a random variable that occurs with a certain probability. - Probability function or probability measure: describes the probability $$ P(X \in E) $$ that the event $$ E $$ occurs. - Cumulative distribution function: function evaluating the probability that $$ X $$ will take a value less than or equal to $$ x $$ for a random variable (only for real-valued random variables). - Quantile function: the inverse of the cumulative distribution function. Gives $$ x $$ such that, with probability $$ q $$ , $$ X $$ will not exceed $$ x $$ . ### Discrete probability distributions - Discrete probability distribution: for many random variables with finitely or countably infinitely many values.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
- Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value. - Frequency distribution: a table that displays the frequency of various outcomes. - Relative frequency distribution: a frequency distribution where each value has been divided (normalized) by the number of outcomes in a sample (i.e. sample size). - Categorical distribution: for discrete random variables with a finite set of values. ### Absolutely continuous probability distributions - Absolutely continuous probability distribution: for many random variables with uncountably many values. - Probability density function (pdf) or probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. ### Related terms - Support: set of values that can be assumed with non-zero probability (or probability density in the case of a continuous distribution) by the random variable.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
For a random variable $$ X $$ , it is sometimes denoted as $$ R_X $$ . - Tail: the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein. Usually has the form $$ X > a $$ , $$ X < b $$ or a union thereof. - Head: the region where the pmf or pdf is relatively high.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
Usually has the form $$ a < X < b $$ . - Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof. - Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half. - Mode: for a discrete random variable, the value with highest probability; for an absolutely continuous random variable, a location at which the probability density function has a local peak.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
- Quantile: the q-quantile is the value $$ x $$ such that $$ P(X < x) = q $$ . - Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution. - Standard deviation: the square root of the variance, and hence another measure of dispersion. - Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right. - Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
The third standardized moment of the distribution. - Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
## Cumulative distribution function In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function of a random variable $$ X $$ with regard to a probability distribution $$ p $$ is defined as $$ F(x) = P(X \leq x). $$ The cumulative distribution function of any real-valued random variable has the properties: - $$ F $$ is non-decreasing; - $$ F $$ is right-continuous; - $$ 0 \le F(x) \le 1 $$ ; - $$ \lim_{x \to -\infty} F(x) = 0 $$ and $$ \lim_{x \to \infty} F(x) = 1 $$ ; and - $$ P(a < X \le b) = F(b) - F(a) $$ . Conversely, any function $$ F:\mathbb{R}\to\mathbb{R} $$ that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers. Any probability distribution can be decomposed as the mixture of a discrete, an absolutely continuous and a singular continuous distribution, and thus any cumulative distribution function admits a decomposition as the convex sum of the three according cumulative distribution functions.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
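A minimal numerical check of these cdf properties and of the quantile function as the inverse of the cdf, using the standard normal distribution as an example; SciPy is assumed.

```python
import numpy as np
from scipy import stats

F = stats.norm.cdf          # cumulative distribution function of the standard normal
Q = stats.norm.ppf          # its quantile function (inverse of the cdf)

xs = np.linspace(-5, 5, 1001)
assert np.all(np.diff(F(xs)) >= 0)           # F is non-decreasing
assert 0.0 <= F(-10) < 1e-12                 # F(x) tends to 0 as x decreases
assert 1.0 - 1e-12 < F(10) <= 1.0            # F(x) tends to 1 as x increases
assert abs(Q(F(1.3)) - 1.3) < 1e-9           # the quantile function inverts the cdf

# P(a < X <= b) = F(b) - F(a)
a, b = -1.0, 2.0
print(F(b) - F(a))
```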
## Discrete probability distribution A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values (almost surely), which means that the probability of any event $$ E $$ can be expressed as a (finite or countably infinite) sum: $$ P(X\in E) = \sum_{\omega\in A \cap E} P(X = \omega), $$ where $$ A $$ is a countable set with $$ P(X \in A) = 1 $$ . Thus the discrete random variables (i.e. random variables whose probability distribution is discrete) are exactly those with a probability mass function $$ p(x) = P(X=x) $$ . In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
For example, if $$ p(n) = \tfrac{1}{2^n} $$ for $$ n = 1, 2, ... $$ , the sum of probabilities would be $$ 1/2 + 1/4 + 1/8 + \dots = 1 $$ . Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and categorical distribution. When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
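A minimal sketch of such an equal-probability selection with Python's standard library; the list of choices is hypothetical.

```python
import random

# Equal-probability selection from a finite set of choices is a draw from a
# discrete uniform distribution over those choices.
choices = ["rock", "paper", "scissors"]   # hypothetical options
pick = random.choice(choices)             # each option has probability 1/3
roll = random.randint(1, 6)               # fair die: discrete uniform on {1, ..., 6}
print(pick, roll)
```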
### Cumulative distribution function A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take. Thus the cumulative distribution function has the form $$ F(x) = P(X \leq x) = \sum_{\omega \leq x} p(\omega). $$ The points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers. ### Dirac delta representation A discrete probability distribution is often represented with Dirac measures, also called one-point distributions (see below), the probability distributions of deterministic random variables. For any outcome $$ \omega $$ , let $$ \delta_\omega $$ be the Dirac measure concentrated at $$ \omega $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
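A minimal sketch of this step-function cdf for a fair six-sided die (an assumption made only for illustration).

```python
from fractions import Fraction
import bisect

# Step-function cdf of a fair six-sided die: F(x) = sum of p(w) over outcomes w <= x.
outcomes = [1, 2, 3, 4, 5, 6]
pmf = {w: Fraction(1, 6) for w in outcomes}
cum = []
total = Fraction(0)
for w in outcomes:
    total += pmf[w]
    cum.append(total)

def F(x):
    """P(X <= x): constant between outcomes, jumping by p(w) at each outcome w."""
    i = bisect.bisect_right(outcomes, x)
    return cum[i - 1] if i > 0 else Fraction(0)

print(F(0), F(2.5), F(6))   # 0, 1/3, 1
```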
Given a discrete probability distribution, there is a countable set $$ A $$ with $$ P(X \in A) = 1 $$ and a probability mass function $$ p $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
If $$ E $$ is any event, then $$ P(X \in E) = \sum_{\omega \in A} p(\omega) \delta_\omega(E), $$ or in short, $$ P_X = \sum_{\omega \in A} p(\omega) \delta_\omega. $$ Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function $$ f $$ , where $$ f(x) = \sum_{\omega \in A} p(\omega) \delta(x - \omega), $$ which means $$ P(X \in E) = \int_E f(x) \, dx = \sum_{\omega \in A} p(\omega) \int_E \delta(x - \omega) \, dx = \sum_{\omega \in A \cap E} p(\omega) $$ for any event $$ E. $$
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
### Indicator-function representation For a discrete random variable $$ X $$ , let $$ u_0, u_1, \dots $$ be the values it can take with non-zero probability.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
Denote $$ \Omega_i=X^{-1}(u_i)= \{\omega: X(\omega)=u_i\},\, i=0, 1, 2, \dots $$ These are disjoint sets, and for such sets $$ P\left(\bigcup_i \Omega_i\right)=\sum_i P(\Omega_i)=\sum_i P(X=u_i)=1. $$ It follows that the probability that $$ X $$ takes any value except for $$ u_0, u_1, \dots $$ is zero, and thus one can write $$ X $$ as $$ X(\omega)=\sum_i u_i 1_{\Omega_i}(\omega) $$ except on a set of probability zero, where $$ 1_A $$ is the indicator function of $$ A $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
This may serve as an alternative definition of discrete random variables. ### One-point distribution A special case is the discrete distribution of a random variable that can take on only one fixed value, in other words, a Dirac measure.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
Expressed formally, the random variable $$ X $$ has a one-point distribution if it has a possible outcome $$ x $$ such that $$ P(X{=}x)=1. $$ All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 before $$ x $$ to 1 at $$ x $$ . It is closely related to a deterministic distribution, which cannot take on any other value, while a one-point distribution can take other values, though only with probability 0. For most practical purposes the two notions are equivalent. ## Absolutely continuous probability distribution An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable $$ X $$ has an absolutely continuous probability distribution if there is a function $$ f: \Reals \to [0, \infty] $$ such that for each interval $$ I = [a,b] \subset \mathbb{R} $$ the probability of $$ X $$ belonging to $$ I $$ is given by the integral of $$ f $$ over $$ I $$ : $$ P\left(a \le X \le b \right) = \int_a^b f(x) \, dx . $$
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function. In particular, the probability for $$ X $$ to take any single value $$ a $$ (that is, $$ a \le X \le a $$ ) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
If the interval $$ [a,b] $$ is replaced by any measurable set $$ A $$ , the corresponding equality still holds: $$ P(X \in A) = \int_A f(x) \, dx . $$ An absolutely continuous random variable is a random variable whose probability distribution is absolutely continuous. There are many examples of absolutely continuous probability distributions: normal, uniform, chi-squared, and others. ### Cumulative distribution function Absolutely continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
In this case, the cumulative distribution function $$ F $$ has the form $$ F(x) = P(X \leq x) = \int_{-\infty}^x f(t)\,dt $$ where $$ f $$ is a density of the random variable $$ X $$ with regard to the distribution $$ P $$ . Note on terminology: absolutely continuous distributions ought to be distinguished from continuous distributions, which are those having a continuous cumulative distribution function. Every absolutely continuous distribution is a continuous distribution, but the converse is not true: there exist singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. Some authors however use the term "continuous distribution" to denote all distributions whose cumulative distribution function is absolutely continuous, i.e. refer to absolutely continuous distributions as continuous distributions. For a more general definition of density functions and the equivalent absolutely continuous measures see absolutely continuous measure.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
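A minimal numerical sketch of this integral relationship, assuming SciPy and using the standard normal density as an example.

```python
from scipy import stats
from scipy.integrate import quad

# For an absolutely continuous distribution, P(a <= X <= b) is the integral of the
# density over [a, b]; here the standard normal density is used as an example.
a, b = -1.0, 1.0
integral, _ = quad(stats.norm.pdf, a, b)             # numerical integral of the pdf
via_cdf = stats.norm.cdf(b) - stats.norm.cdf(a)      # same probability via F(b) - F(a)
print(integral, via_cdf)                             # both are about 0.6827

# Any single point has probability zero: the integral from a to a vanishes.
print(quad(stats.norm.pdf, a, a)[0])                 # 0.0
```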
## Kolmogorov definition In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function $$ X $$ from a probability space $$ (\Omega, \mathcal{F}, \mathbb{P}) $$ to a measurable space $$ (\mathcal{X},\mathcal{A}) $$ . Given that probabilities of events of the form $$ \{\omega\in\Omega\mid X(\omega)\in A\} $$ satisfy Kolmogorov's probability axioms, the probability distribution of $$ X $$ is the image measure $$ X_*\mathbb{P} $$ of $$ X $$ , which is a probability measure on $$ (\mathcal{X},\mathcal{A}) $$ satisfying $$ X_*\mathbb{P} = \mathbb{P}X^{-1} $$ .
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
## Other kinds of distributions Absolutely continuous and discrete distributions with support on $$ \mathbb{R}^k $$ or $$ \mathbb{N}^k $$ are extremely useful to model a myriad of phenomena, since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls.
https://en.wikipedia.org/wiki/Probability_distribution%23Discrete_probability_distribution
However, this is not always the case, and there exist phenomena with supports that are actually complicated curves $$ \gamma: [a, b] \rightarrow \mathbb{R}^n $$ within some space $$ \mathbb{R}^n $$ or similar. In these cases, the probability distribution is supported on the image of such a curve and is likely to be determined empirically, rather than by finding a closed formula for it.
One example is the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma. When this phenomenon is studied, the observed states lie on a complicated subset of the state space, so one could ask what the probability is of observing a state in a certain region of that subset; if such a probability exists, it is called the probability measure of the system. This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following.
Let $$ t_1 \ll t_2 \ll t_3 $$ be instants in time and $$ O $$ a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside the set $$ O $$ to be equal over the intervals $$ [t_1,t_2] $$ and $$ [t_2,t_3] $$, which might not happen; for example, it could oscillate similarly to a sine, $$ \sin(t) $$, whose limit as $$ t \rightarrow \infty $$ does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future. The branch of dynamical systems that studies the existence of a probability measure is ergodic theory. Note that even in these cases, the probability distribution, if it exists, might still be termed "absolutely continuous" or "discrete" depending on whether the support is uncountable or countable, respectively.
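The relative frequencies discussed above can be estimated numerically. The sketch below (my own illustration; the parameter values, initial condition, and test region are arbitrary choices, not taken from the source) integrates the Rabinovich–Fabrikant equations with SciPy and compares the fraction of time the trajectory spends in a fixed box over two successive time windows; if an invariant probability measure exists and the system is well behaved, the two fractions should be close.

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, GAMMA = 1.1, 0.87  # illustrative parameter choice

def rabinovich_fabrikant(t, state):
    """Right-hand side of the Rabinovich-Fabrikant equations."""
    x, y, z = state
    return [y * (z - 1 + x**2) + GAMMA * x,
            x * (3 * z + 1 - x**2) + GAMMA * y,
            -2 * z * (ALPHA + x * y)]

def visit_fraction(states, region):
    """Fraction of sampled states (columns of `states`) that fall inside `region`."""
    return np.mean([region(s) for s in states.T])

# Integrate over [0, 200] and sample the trajectory on a uniform time grid.
t_eval = np.linspace(0.0, 200.0, 20001)
sol = solve_ivp(rabinovich_fabrikant, (0.0, 200.0), [-1.0, 0.0, 0.5],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)

# Compare the occupancy of a box over the two halves of the observation window.
box = lambda s: abs(s[0]) < 1.0 and abs(s[1]) < 1.0
half = len(t_eval) // 2
print(visit_fraction(sol.y[:, :half], box), visit_fraction(sol.y[:, half:], box))
```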
## Random number generation

Most algorithms are based on a pseudorandom number generator that produces numbers $$ X $$ that are uniformly distributed in the half-open interval $$ [0, 1) $$. These random variates $$ X $$ are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated. For example, suppose $$ U $$ has a uniform distribution between 0 and 1. To construct a random Bernoulli variable $$ X $$ for some $$ 0 < p < 1 $$, define $$ X = \begin{cases} 1& \text{if } U<p\\ 0& \text{if } U\geq p. \end{cases} $$ We thus have $$ P(X=1) = P(U<p) = p, \quad P(X=0) = P(U\geq p) = 1-p. $$ Therefore, the random variable $$ X $$ has a Bernoulli distribution with parameter $$ p $$.
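A direct transcription of this construction into code (a minimal sketch; the function name and the use of Python's standard generator as the uniform source are my choices) looks like this:

```python
import random

def bernoulli(p, rng=random.random):
    """Return 1 with probability p and 0 with probability 1 - p,
    using a single uniform variate U on [0, 1)."""
    u = rng()
    return 1 if u < p else 0

# Empirical check: the sample mean should be close to p.
p = 0.3
samples = [bernoulli(p) for _ in range(100_000)]
print(sum(samples) / len(samples))  # approximately 0.3
```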
This method can be adapted to generate real-valued random variables with any distribution: let $$ F $$ be any cumulative distribution function, and let $$ F^{\mathrm{inv}} $$ be the generalized left inverse of $$ F $$, also known in this context as the quantile function or inverse distribution function: $$ F^{\mathrm{inv}}(p) = \inf \{x \in \mathbb{R} : p \le F(x)\}. $$ Then, $$ F^{\mathrm{inv}}(u) \leq x $$ if and only if $$ u \leq F(x) $$. As a result, if $$ U $$ is uniformly distributed on $$ [0, 1] $$, then the cumulative distribution function of $$ X = F^{\mathrm{inv}}(U) $$ is $$ F $$.
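When $$ F $$ is only available as a black-box function, the generalized inverse can be approximated numerically. The sketch below (an illustration of mine, using bisection as one possible search strategy and assuming the quantile lies in a user-supplied bracket) applies it to uniform variates; the exponential case is worked out analytically in the next paragraph.

```python
import math
import random

def generalized_inverse(F, p, lo, hi, tol=1e-9):
    """Approximate F_inv(p) = inf {x : p <= F(x)} by bisection,
    assuming F is a nondecreasing CDF and the quantile lies in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if F(mid) >= p:
            hi = mid
        else:
            lo = mid
    return hi

def inverse_transform_sample(F, lo, hi, rng=random.random):
    """Draw one variate with CDF F by applying F_inv to a uniform variate on [0, 1)."""
    return generalized_inverse(F, rng(), lo, hi)

# Example: the logistic distribution, F(x) = 1 / (1 + exp(-x)), whose mass
# is (for numerical purposes) contained in [-50, 50].
F_logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
print([inverse_transform_sample(F_logistic, -50.0, 50.0) for _ in range(5)])
```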
For example, suppose we want to generate a random variable having an exponential distribution with parameter $$ \lambda $$, that is, with cumulative distribution function $$ F : x \mapsto 1 - e^{-\lambda x}. $$ Solving $$ F(x) = u $$ for $$ x $$ gives $$ \begin{align} F(x) = u &\Leftrightarrow 1-e^{-\lambda x} = u \\[2pt] &\Leftrightarrow e^{-\lambda x } = 1-u \\[2pt] &\Leftrightarrow -\lambda x = \ln(1-u) \\[2pt] &\Leftrightarrow x = \frac{-1}{\lambda}\ln(1-u) \end{align} $$ so $$ F^{\mathrm{inv}}(u) = \frac{-1}{\lambda}\ln(1-u) $$, and if $$ U $$ is uniformly distributed on $$ [0, 1) $$, then $$ X = \frac{-1}{\lambda}\ln(1-U) $$ has an exponential distribution with parameter $$ \lambda $$.
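Translating the derivation into code (a sketch of mine; the comparison with NumPy's built-in exponential sampler is only a sanity check and is not part of the source) gives:

```python
import numpy as np

def exponential_inverse_transform(lam, size, rng=None):
    """Sample from Exp(lam) via X = -(1/lam) * ln(1 - U), with U uniform on [0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(size)              # uniform variates on [0, 1)
    return -np.log(1.0 - u) / lam

lam = 2.0
samples = exponential_inverse_transform(lam, 100_000)
print(samples.mean())                 # should be close to 1 / lam = 0.5
print(np.random.default_rng().exponential(scale=1.0 / lam, size=100_000).mean())
```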