It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film. The first description of a computer searching for information was given by Holmstrom in 1948, which contained an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s, several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents). Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s.
In 1992, the US Department of Defense along with the National Institute of Standards and Technology (NIST) cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. Its aim was to support research within the information retrieval community by supplying the infrastructure needed for the evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further.
By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm, using the web’s hyperlink structure to assess page importance and improve relevance ranking.
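The core of PageRank can be illustrated with a short power-iteration sketch. The link graph, damping factor, and iteration count below are illustrative choices for exposition, not Google's production values:

```python
# Toy PageRank via power iteration (an illustrative sketch, not Google's
# production implementation). links[p] lists the pages that p links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
N = len(pages)
d = 0.85                       # damping factor from the original paper
ranks = {p: 1.0 / N for p in pages}

for _ in range(50):            # iterate until the ranks stabilize
    new = {}
    for p in pages:
        # Each page q that links to p contributes its rank, split evenly
        # among q's outgoing links.
        incoming = sum(ranks[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / N + d * incoming
    ranks = new

# "C" receives the most incoming links, so it ends up ranked highest.
best = max(ranks, key=ranks.get)
```

Pages that accumulate links from other well-ranked pages accumulate rank, which is the importance signal Google combined with keyword relevance.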
During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses have highlighted Bing's semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing.
A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems. BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.
Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as MS MARCO (Microsoft MAchine Reading COmprehension, 2019) became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.
As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models. Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals. Dense models, such as late-interaction architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap. Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This categorization balances scalability, relevance, and efficiency in retrieval systems.
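The sparse/dense distinction can be illustrated with a toy comparison. The three documents, the hand-written 3-dimensional "embeddings", and the scoring functions below are all illustrative stand-ins for what trained models such as SPLADE or ColBERT would produce:

```python
# Contrast of sparse (exact term match) and dense (embedding similarity)
# retrieval on a toy corpus. The "embeddings" are hand-written stand-ins
# for vectors a trained encoder would produce.
import math

docs = {
    "d1": "buy a used car",
    "d2": "automobile sales and prices",
    "d3": "history of jazz music",
}

def sparse_score(query, doc):
    # Term-overlap scoring, the kind an inverted index accelerates.
    return len(set(query.split()) & set(doc.split()))

# Hypothetical 3-dim embeddings: synonyms ("car", "automobile") are close.
emb = {
    "car":        [0.9, 0.1, 0.0],
    "automobile": [0.8, 0.2, 0.0],
    "jazz":       [0.0, 0.0, 1.0],
}

def embed(text):
    # Sum the embeddings of known tokens; unknown tokens contribute nothing.
    vec = [0.0, 0.0, 0.0]
    for tok in text.split():
        for i, v in enumerate(emb.get(tok, [0.0, 0.0, 0.0])):
            vec[i] += v
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

query = "car"
# Sparse matching misses d2 ("automobile"); dense similarity still finds it.
sparse = {d: sparse_score(query, t) for d, t in docs.items()}
dense = {d: cosine(embed(query), embed(t)) for d, t in docs.items()}
```

The sparse scorer gives d2 a score of zero because no query term appears in it, while the dense scorer ranks d2 highly through the embedding proximity of "car" and "automobile" — the semantic signal the dense class is designed to capture.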
As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come into the picture. Research is now focused not just on relevance and efficiency, but on transparency, accountability, and user trust in retrieval algorithms.
## Applications
Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category):
### General applications
- Digital libraries
- Information filtering
- Recommender systems
- Media search
- Blog search
- Image retrieval
- 3D retrieval
- Music retrieval
- News search
- Speech retrieval
- Video retrieval
- Search engines
- Site search
- Desktop search
- Enterprise search
- Federated search
- Mobile search
- Social search
- Web search
### Domain-specific applications
- Expert search finding
- Genomic information retrieval
- Geographic information retrieval
- Information retrieval for chemical structures
- Information retrieval in software engineering
- Legal information retrieval
- Vertical search
### Other retrieval methods
Methods/Techniques in which information retrieval techniques are employed include:
- Adversarial information retrieval
- Automatic summarization
- Multi-document summarization
- Compound term processing
- Cross-lingual retrieval
- Document classification
- Spam filtering
- Question answering
## Model types
In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation.
Each retrieval strategy incorporates a specific model for its document representation purposes. The picture on the right illustrates the relationship of some common models. In the picture, the models are categorized according to two dimensions: the mathematical basis and the properties of the model.
### First dimension: mathematical basis
- Set-theoretic models represent documents as sets of words or phrases. Similarities are usually derived from set-theoretic operations on those sets. Common models are:
- Standard Boolean model
- Extended Boolean model
- Fuzzy retrieval
- Algebraic models represent documents and queries usually as vectors, matrices, or tuples. The similarity of the query vector and document vector is represented as a scalar value.
- Vector space model
- Generalized vector space model
- (Enhanced) Topic-based Vector Space Model
- Extended Boolean model
- Latent semantic indexing a.k.a. latent semantic analysis
- Probabilistic models treat the process of document retrieval as a probabilistic inference. Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like Bayes' theorem are often used in these models.
- Binary Independence Model
- Probabilistic relevance model, on which the Okapi BM25 relevance function is based
- Uncertain inference
- Language models
- Divergence-from-randomness model
- Latent Dirichlet allocation
- Feature-based retrieval models view documents as vectors of values of feature functions (or just features) and seek the best way to combine these features into a single relevance score, typically by learning to rank methods. Feature functions are arbitrary functions of document and query, and as such can easily incorporate almost any other retrieval model as just another feature.
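As a concrete instance of the probabilistic family listed above, the Okapi BM25 relevance function can be sketched as follows; the toy corpus is invented, and k1/b use common default values rather than tuned ones:

```python
# Okapi BM25 sketch (probabilistic relevance model). Illustrative only:
# real systems precompute statistics in an inverted index.
import math

docs = {
    "d1": "the cat sat on the mat".split(),
    "d2": "the dog chased the cat".split(),
    "d3": "dogs and cats living together".split(),
}
N = len(docs)
avgdl = sum(len(d) for d in docs.values()) / N   # average document length

def df(term):
    # Document frequency: number of documents containing the term.
    return sum(term in d for d in docs.values())

def bm25(query, doc, k1=1.5, b=0.75):
    score = 0.0
    for t in query.split():
        tf = doc.count(t)
        if tf == 0:
            continue
        # Smoothed inverse document frequency.
        idf = math.log((N - df(t) + 0.5) / (df(t) + 0.5) + 1)
        # Term-frequency saturation (k1) and length normalization (b).
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

scores = {name: bm25("cat", d) for name, d in docs.items()}
```

Only documents containing the exact term "cat" score above zero, which is the lexical-matching behavior the later sections contrast with dense retrieval.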
### Second dimension: properties of the model
- Models without term-interdependencies treat different terms/words as independent. This fact is usually represented in vector space models by the orthogonality assumption of term vectors or in probabilistic models by an independency assumption for term variables.
- Models with immanent term interdependencies allow a representation of interdependencies between terms. However the degree of the interdependency between two terms is defined by the model itself. It is usually directly or indirectly derived (e.g. by dimensional reduction) from the co-occurrence of those terms in the whole set of documents.
- Models with transcendent term interdependencies allow a representation of interdependencies between terms, but they do not allege how the interdependency between two terms is defined. They rely on an external source for the degree of interdependency between two terms. (For example, a human or sophisticated algorithms.)
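A minimal sketch of how immanent interdependencies can be derived from co-occurrence, assuming a toy three-document corpus; real models would go on to weight or factorize these counts (e.g., by dimensional reduction):

```python
# Term-term co-occurrence counts over a toy corpus, the raw material
# from which immanent interdependency models derive term relatedness.
from itertools import combinations

docs = [
    "web search engines",
    "web search ranking",
    "music retrieval systems",
]

cooc = {}
for doc in docs:
    terms = sorted(set(doc.split()))         # unique terms, stable order
    for a, b in combinations(terms, 2):      # every unordered term pair
        cooc[(a, b)] = cooc.get((a, b), 0) + 1

# "search" and "web" co-occur in two documents, so such a model would
# treat them as more strongly interdependent than, say, "web"/"music",
# which never co-occur.
```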
### Third Dimension: representational approach-based classification
In addition to the theoretical distinctions, modern information retrieval models are also categorized on how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.
- Sparse models utilize interpretable, term-based representations and typically rely on inverted index structures. Classical methods such as TF-IDF and BM25 fall under this category, along with more recent learned sparse models that integrate neural architectures while retaining sparsity.
- Dense models represent queries and documents as continuous vectors using deep learning models, typically transformer-based encoders. These models enable semantic similarity matching beyond exact term overlap and are used in tasks involving semantic search and question answering.
- Hybrid models aim to combine the strengths of both approaches, integrating lexical (tokens) and semantic signals through score fusion, late interaction, or multi-stage ranking pipelines.
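Score fusion, the simplest of the hybrid strategies above, can be sketched as a weighted blend of normalized scores. The document scores and the weight alpha below are made-up illustrative values, not output of real models:

```python
# Hybrid retrieval via score fusion: min-max normalize a lexical
# (BM25-like) score and a dense cosine score, then blend with alpha.
def minmax(scores):
    # Rescale scores into [0, 1] so the two signals are comparable.
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

sparse = {"d1": 12.0, "d2": 0.0, "d3": 3.5}    # illustrative lexical scores
dense = {"d1": 0.41, "d2": 0.93, "d3": 0.20}   # illustrative similarities

alpha = 0.5   # weight on the lexical side; tuned per collection in practice
s, de = minmax(sparse), minmax(dense)
hybrid = {d: alpha * s[d] + (1 - alpha) * de[d] for d in sparse}
ranking = sorted(hybrid, key=hybrid.get, reverse=True)
```

Here d1 wins because it is strong lexically and moderate semantically, while d2, which has no lexical match at all, is still ranked second on its semantic score — the balance hybrid models aim for. Late interaction and multi-stage pipelines achieve the same goal with more machinery.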
This classification has become increasingly common in both academic research and real-world applications, and is widely used in evaluation benchmarks for information retrieval models.
## Performance and correctness measures
The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall. All measures assume a ground truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
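For a single query with binary relevance judgments, the two traditional metrics reduce to simple set arithmetic (the document identifiers below are invented):

```python
# Precision and recall for one query, given ground-truth relevance
# judgments. A toy example of the Boolean-retrieval metrics above.
retrieved = {"d1", "d2", "d3", "d4"}   # what the system returned
relevant = {"d2", "d4", "d7"}          # what the assessors marked relevant

tp = len(retrieved & relevant)         # relevant documents retrieved
precision = tp / len(retrieved)        # fraction of results that are relevant
recall = tp / len(relevant)            # fraction of relevant docs found
```

Here precision is 2/4 = 0.5 and recall is 2/3: the system missed d7, and half of what it returned was noise.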
## Libraries for searching and indexing
- Lemur
- Lucene
- Solr
- Elasticsearch
- Manatee
- Manticore search
- Sphinx
- Terrier Search Engine
- Xapian
## Timeline
- Before the 1900s
- : 1801: Joseph Marie Jacquard invents the Jacquard loom, the first machine to use punched cards to control a sequence of operations.
- : 1880s: Herman Hollerith invents an electro-mechanical data tabulator using punch cards as a machine readable medium.
- : 1890: Hollerith cards, keypunches and tabulators used to process the 1890 US census data.
- 1920s–1930s
- : Emanuel Goldberg submits patents for his "Statistical Machine", a document search engine that used photoelectric cells and pattern recognition to search the metadata on rolls of microfilmed documents.
- 1940s–1950s
- : late 1940s: The US military confronted problems of indexing and retrieval of wartime scientific research documents captured from the Germans.
- :: 1945: Vannevar Bush's As We May Think appeared in Atlantic Monthly.
- :: 1947: Hans Peter Luhn (research engineer at IBM since 1941) began work on a mechanized punch card-based system for searching chemical compounds.
- : 1950s: Growing concern in the US for a "science gap" with the USSR motivated, encouraged funding and provided a backdrop for mechanized literature searching systems (Allen Kent et al.) and the invention of the citation index by Eugene Garfield.
- : 1950: The term "information retrieval" was coined by Calvin Mooers.
- : 1951: Philip Bagley conducted the earliest experiment in computerized document retrieval in a master thesis at MIT.
- : 1955: Allen Kent joined Case Western Reserve University, and eventually became associate director of the Center for Documentation and Communications Research. That same year, Kent and colleagues published a paper in American Documentation describing the precision and recall measures as well as detailing a proposed "framework" for evaluating an IR system which included statistical sampling methods for determining the number of relevant documents not retrieved.
- : 1958: International Conference on Scientific Information Washington DC included consideration of IR systems as a solution to problems identified. See: Proceedings of the International Conference on Scientific Information, 1958 (National Academy of Sciences, Washington, DC, 1959)
- : 1959: Hans Peter Luhn published "Auto-encoding of documents for information retrieval".
- 1960s:
- : early 1960s: Gerard Salton began work on IR at Harvard, later moved to Cornell.
- : 1960: Melvin Earl Maron and John Lary Kuhns published "On relevance, probabilistic indexing, and information retrieval" in the Journal of the ACM 7(3):216–244, July 1960.
- : 1962:
- :* Cyril W. Cleverdon published early findings of the Cranfield studies, developing a model for IR system evaluation. See: Cyril W. Cleverdon, "Report on the Testing and Analysis of an Investigation into the Comparative Efficiency of Indexing Systems". Cranfield Collection of Aeronautics, Cranfield, England, 1962.
- :* Kent published Information Analysis and Retrieval.
- : 1963:
- :* Weinberg report "Science, Government and Information" gave a full articulation of the idea of a "crisis of scientific information". The report was named after Dr. Alvin Weinberg.
- :* Joseph Becker and Robert M. Hayes published text on information retrieval. Becker, Joseph; Hayes, Robert Mayo. Information storage and retrieval: tools, elements, theories. New York, Wiley (1963).
- : 1964:
- :* Karen Spärck Jones finished her thesis at Cambridge, Synonymy and Semantic Classification, and continued work on computational linguistics as it applies to IR.
- :* The National Bureau of Standards sponsored a symposium titled "Statistical Association Methods for Mechanized Documentation". Several highly significant papers, including G. Salton's first published reference (we believe) to the SMART system.
- :mid-1960s:
- ::* National Library of Medicine developed MEDLARS Medical Literature Analysis and Retrieval System, the first major machine-readable database and batch-retrieval system.
- ::* Project Intrex at MIT.
- :: 1965: J. C. R. Licklider published Libraries of the Future.
- :: 1966: Don Swanson was involved in studies at University of Chicago on Requirements for Future Catalogs.
- : late 1960s: F. Wilfrid Lancaster completed evaluation studies of the MEDLARS system and published the first edition of his text on information retrieval.
- :: 1968:
- :* Gerard Salton published Automatic Information Organization and Retrieval.
- :* John W. Sammon, Jr.'s RADC Tech report "Some Mathematics of Information Storage and Retrieval..." outlined the vector model.
- :: 1969: Sammon's "A nonlinear mapping for data structure analysis" (IEEE Transactions on Computers) was the first proposal for a visualization interface to an IR system.
- 1970s
- : early 1970s:
- ::* First online systems—NLM's AIM-TWX, MEDLINE; Lockheed's Dialog; SDC's ORBIT.
- ::* Theodor Nelson promoting concept of hypertext, published Computer Lib/Dream Machines.
- : 1971: Nicholas Jardine and Cornelis J. van Rijsbergen published "The use of hierarchic clustering in information retrieval", which articulated the "cluster hypothesis".
- : 1975: Three highly influential publications by Salton fully articulated his vector processing framework and term discrimination model:
- ::* A Theory of Indexing (Society for Industrial and Applied Mathematics)
- ::* A Theory of Term Importance in Automatic Text Analysis (JASIS v. 26)
- ::* A Vector Space Model for Automatic Indexing (CACM 18:11)
- : 1978: The First ACM SIGIR conference.
- : 1979: C. J. van Rijsbergen published Information Retrieval (Butterworths). Heavy emphasis on probabilistic models.
- : 1979: Tamas Doszkocs implemented the CITE natural language user interface for MEDLINE at the National Library of Medicine. The CITE system supported free form query input, ranked output and relevance feedback.
- 1980s
- : 1980: First international ACM SIGIR conference, joint with British Computer Society IR group in Cambridge.
- : 1982: Nicholas J. Belkin, Robert N. Oddy, and Helen M. Brooks proposed the ASK (Anomalous State of Knowledge) viewpoint for information retrieval. This was an important concept, though their automated analysis tool proved ultimately disappointing.
- : 1983: Salton (and Michael J. McGill) published Introduction to Modern Information Retrieval (McGraw-Hill), with heavy emphasis on vector models.
- : 1985: David Blair and Bill Maron publish: An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System
- : mid-1980s: Efforts to develop end-user versions of commercial IR systems.
- :: 1985–1993: Key papers on and experimental systems for visualization interfaces.
- :: Work by Donald B. Crouch, Robert R. Korfhage, Matthew Chalmers, Anselm Spoerri and others.
- : 1989: First World Wide Web proposals by Tim Berners-Lee at CERN.
- 1990s
- : 1992: First TREC conference.
- : 1997: Publication of Korfhage's Information Storage and Retrieval with emphasis on visualization and multi-reference point systems.
- : 1998: Google is founded by Larry Page and Sergey Brin. It introduces the PageRank algorithm, which evaluates the importance of web pages based on hyperlink structure.
- : 1999: Publication of Ricardo Baeza-Yates and Berthier Ribeiro-Neto's Modern Information Retrieval by Addison Wesley, the first book that attempts to cover all IR.
- 2000s
- : 2001: Wikipedia launches as a free, collaborative online encyclopedia. It quickly becomes a major resource for information retrieval, particularly for natural language processing and semantic search benchmarks.
- : 2009: Microsoft launches Bing, introducing features such as related searches, semantic suggestions, and later incorporating deep learning techniques into its ranking algorithms.
- 2010s
- : 2013: Google’s Hummingbird algorithm goes live, marking a shift from keyword matching toward understanding query intent and semantic context in search queries.
- : 2018: Google AI researchers release BERT (Bidirectional Encoder Representations from Transformers), enabling deep bidirectional understanding of language and improving document ranking and query understanding in IR.
- : 2019: Microsoft introduces MS MARCO (Microsoft MAchine Reading COmprehension), a large-scale dataset designed for training and evaluating machine reading and passage ranking models.
- 2020s
- : 2020: The ColBERT (Contextualized Late Interaction over BERT) model, designed for efficient passage retrieval using contextualized embeddings, was introduced at SIGIR 2020.
- : 2021: SPLADE is introduced at SIGIR 2021. It’s a sparse neural retrieval model that balances lexical and semantic features using masked language modeling and sparsity regularization.
- : 2022: The BEIR benchmark is released to evaluate zero-shot IR across 18 datasets covering diverse tasks. It standardizes comparisons between dense, sparse, and hybrid IR models.
## Major conferences
- SIGIR:
Nuclear astrophysics studies the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars, supernovae, novae, and violent binary-star interactions.
It is an interdisciplinary part of both nuclear physics and astrophysics, involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution).
Constraints from observations involve multiple messengers, all across the electromagnetic spectrum (nuclear gamma-rays, X-rays, optical, and radio/sub-mm astronomy), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays, and material deposits on the Earth and Moon. Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides into the realm of radioactive/unstable nuclei, almost to the limits of bound nuclei (the drip lines), and under high density (up to neutron star matter) and high temperature (plasma temperatures up to ). Theories and simulations are essential parts herein, as cosmic nuclear reaction environments cannot be realized, but at best partially approximated by experiments.
## History
In the 1940s, geologist Hans Suess speculated that the regularity that was observed in the abundances of elements may be related to structural properties of the atomic nucleus. These considerations were seeded by the discovery of radioactivity by Becquerel in 1896 as an aside of advances in chemistry which aimed at production of gold. This remarkable possibility for transformation of matter created much excitement among physicists for the next decades, culminating in discovery of the atomic nucleus, with milestones in Ernest Rutherford's scattering experiments in 1911, and the discovery of the neutron by James Chadwick (1932). After Aston demonstrated that the mass of helium is less than four times that of the proton, Eddington proposed that, through an unknown process in the Sun's core, hydrogen is transmuted into helium, liberating energy.
Twenty years later, Bethe and von Weizsäcker independently derived the CN cycle, the first known nuclear reaction that accomplishes this transmutation. The interval between Eddington's proposal and derivation of the CN cycle can mainly be attributed to an incomplete understanding of nuclear structure. The basic principles for explaining the origin of elements and energy generation in stars appear in the concepts describing nucleosynthesis, which arose in the 1940s, led by George Gamow and presented in a 2-page paper in 1948 as the Alpher–Bethe–Gamow paper. A complete concept of processes that make up cosmic nucleosynthesis was presented in the late 1950s by Burbidge, Burbidge, Fowler, and Hoyle, and by Cameron. Fowler is largely credited with initiating collaboration between astronomers, astrophysicists, and theoretical and experimental nuclear physicists, in a field that we now know as nuclear astrophysics (for which he won the 1983 Nobel Prize). During these same decades, Arthur Eddington and others were able to link the liberation of nuclear binding energy through such nuclear reactions to the structural equations of stars.
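Aston's measurement can be restated numerically: four hydrogen atoms are about 0.7% heavier than one helium atom, and it is this mass difference that Eddington proposed the Sun liberates as energy via E = mc². A quick check with standard atomic masses:

```python
# Mass defect of hydrogen-to-helium fusion, the energy source Eddington
# proposed. Atomic masses in unified atomic mass units (u).
m_H = 1.007825    # hydrogen-1 atomic mass, u
m_He = 4.002602   # helium-4 atomic mass, u

mass_defect = 4 * m_H - m_He          # ~0.0287 u per helium nucleus formed
fraction = mass_defect / (4 * m_H)    # ~0.007, i.e. roughly 0.7 percent
```

That 0.7% of rest mass, converted to energy, is what sustains the Sun's luminosity over billions of years.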
These developments were not without curious deviations. Many notable physicists of the 19th century such as Mayer, Waterson, von Helmholtz, and Lord Kelvin, postulated that the Sun radiates thermal energy by converting gravitational potential energy into heat.
Its lifetime as calculated from this assumption using the virial theorem, around 19 million years, was found inconsistent with the interpretation of geological records and the (then new) theory of biological evolution. Alternatively, if the Sun consisted entirely of a fossil fuel like coal, considering the rate of its thermal energy emission, its lifetime would be merely four or five thousand years, clearly inconsistent with records of human civilization.
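The gravitational-contraction estimate can be reproduced in a few lines. Below is a minimal sketch of the Kelvin–Helmholtz timescale using present-day solar values; the constants are standard, and the omitted structure factor of order unity is why the ~19 million years quoted above differs somewhat from the raw estimate:

```python
# Kelvin-Helmholtz timescale: how long the Sun could radiate its
# present luminosity on gravitational contraction alone, t ~ G M^2 / (R L).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m
L_SUN = 3.828e26   # solar luminosity, W
YEAR = 3.156e7     # seconds per year

t_kh_years = G * M_SUN**2 / (R_SUN * L_SUN) / YEAR
print(f"Kelvin-Helmholtz timescale: {t_kh_years:.1e} years")
```

The result is tens of millions of years, far short of the geological record, which is exactly the inconsistency that motivated the search for a nuclear energy source.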
## Basic concepts
During cosmic times, nuclear reactions re-arrange the nucleons that were left behind from the Big Bang (in the form of isotopes of hydrogen and helium, and traces of lithium, beryllium, and boron) into the other isotopes and elements we find today (see graph). The driver is the release of nuclear binding energy, which favors nuclei whose nucleons are more tightly bound - such nuclei are lighter than their original components by the mass equivalent of the binding energy. The most tightly bound nucleus formed from symmetric matter of neutrons and protons is 56Ni.
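The mass-to-energy bookkeeping above can be checked directly from tabulated atomic masses. A sketch (the mass values are standard to the digits shown):

```python
# Mass defect of hydrogen burning: four 1H atoms are heavier than one
# 4He atom, and the difference is released as energy (E = dm * c^2).
M_H1 = 1.007825     # atomic mass of 1H, u
M_HE4 = 4.002602    # atomic mass of 4He, u
U_TO_MEV = 931.494  # energy equivalent of 1 u, MeV

delta_m = 4 * M_H1 - M_HE4       # mass defect, u
energy_mev = delta_m * U_TO_MEV  # energy released per 4He formed, MeV
mass_fraction = delta_m / (4 * M_H1)
print(f"{energy_mev:.1f} MeV per helium nucleus, "
      f"{100 * mass_fraction:.2f}% of the rest mass")
```

This gives about 26.7 MeV per helium nucleus, roughly 0.7% of the rest mass: the statement that products are lighter than their components, made quantitative.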
The release of nuclear binding energy is what allows stars to shine for up to billions of years, and may disrupt stars in stellar explosions in the case of violent reactions (such as 12C+12C fusion for thermonuclear supernova explosions). As matter is processed this way within stars and stellar explosions, some of the products are ejected from the nuclear-reaction site and end up in interstellar gas. There, it may form new stars and be processed further through nuclear reactions, in a cycle of matter. This results in compositional evolution of cosmic gas in and between stars and galaxies, enriching such gas with heavier elements. Nuclear astrophysics is the science that describes and seeks to understand the nuclear and astrophysical processes driving this cosmic and galactic chemical evolution, linking them to knowledge from nuclear physics and astrophysics.
Measurements are used to test our understanding: astronomical constraints are obtained from stellar and interstellar abundance data of elements and isotopes, and other multi-messenger astronomical measurements of cosmic object phenomena help to understand and model these. Nuclear properties can be obtained from terrestrial nuclear laboratories such as accelerators with their experiments. Theory and simulations are needed to understand and complement such data, providing models for nuclear reaction rates under the variety of cosmic conditions, and for the structure and dynamics of cosmic objects.
## Findings, current status, and issues
Nuclear astrophysics remains a complex puzzle for science. The current consensus on the origins of elements and isotopes is that only hydrogen and helium (and traces of lithium) can be formed in a homogeneous Big Bang (see Big Bang nucleosynthesis), while all other elements and their isotopes are formed in cosmic objects that formed later, such as stars and their explosions.
The Sun's primary energy source is hydrogen fusion to helium at about 15 million degrees. The proton–proton chain reactions dominate; they occur at much lower energies, although much more slowly, than catalytic hydrogen fusion through the CNO cycle reactions. Nuclear astrophysics gives a picture of the Sun's energy source producing a lifetime consistent with the age of the Solar System derived from meteoritic abundances of lead and uranium isotopes – an age of about 4.5 billion years. The core hydrogen burning of stars, as it now occurs in the Sun, defines the main sequence of stars, illustrated in the Hertzsprung–Russell diagram that classifies stages of stellar evolution. The Sun's lifetime of H burning via pp-chains is about 9 billion years. This primarily is determined by the extremely slow production of deuterium,
p + p → 2H + e+ + νe
which is governed by the weak interaction.
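The ~9 billion year figure can be reproduced to order of magnitude from energy bookkeeping alone. In the sketch below, the assumption that roughly 10% of the Sun's hydrogen is available for core burning is a conventional rule of thumb, not a computed value:

```python
# Main-sequence lifetime estimate: energy available from core hydrogen
# burning divided by the rate at which the Sun radiates it away.
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
L_SUN = 3.828e26  # solar luminosity, W
YEAR = 3.156e7    # seconds per year

CORE_FRACTION = 0.1  # fraction of the Sun's mass burned in the core (assumption)
EFFICIENCY = 0.007   # fraction of rest mass released by 4H -> 4He

lifetime_years = CORE_FRACTION * EFFICIENCY * M_SUN * C**2 / L_SUN / YEAR
print(f"Main-sequence lifetime: {lifetime_years:.1e} years")
```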
Work that led to discovery of neutrino oscillation (implying a non-zero mass for the neutrino absent in the Standard Model of particle physics) was motivated by a solar neutrino flux about three times lower than expected from theories — a long-standing concern in the nuclear astrophysics community colloquially known as the Solar neutrino problem.
The concepts of nuclear astrophysics are supported by observation of the element technetium (the lightest chemical element without stable isotopes) in stars, by galactic gamma-ray line emitters (such as 26Al, 60Fe, and 44Ti), by radioactive-decay gamma-ray lines from the 56Ni decay chain observed from two supernovae (SN1987A and SN2014J) coincident with optical supernova light, and by observation of neutrinos from the Sun and from supernova 1987a. These observations have far-reaching implications. 26Al has a lifetime of a million years, which is very short on a galactic timescale, proving that nucleosynthesis is an ongoing process within our Milky Way Galaxy in the current epoch.
Current descriptions of the cosmic evolution of elemental abundances are broadly consistent with those observed in the Solar System and galaxy.
The roles of specific cosmic objects in producing these elemental abundances are clear for some elements, and heavily debated for others. For example, iron is believed to originate mostly from thermonuclear supernova explosions (also called supernovae of type Ia), while carbon and oxygen are believed to originate mostly from massive stars and their explosions. Lithium, beryllium, and boron are believed to originate from spallation reactions of cosmic-ray nuclei such as carbon and heavier nuclei, breaking these apart. Elements heavier than nickel are produced via the slow and rapid neutron capture processes, each contributing roughly half the abundance of these elements. The s-process is believed to occur in the envelopes of dying stars, whereas some uncertainty exists regarding r-process sites.
The r-process is believed to occur in supernova explosions and compact object mergers, though observational evidence is limited to a single event, GW170817, and the relative yields of proposed r-process sites leading to the observed heavy element abundances are uncertain.
The transport of nuclear reaction products from their sources through the interstellar and intergalactic medium is also unclear. Additionally, many nuclei that are involved in cosmic nuclear reactions are unstable and may only exist temporarily in cosmic sites, and their properties (e.g., binding energy) cannot be investigated in the laboratory due to difficulties in their synthesis. Similarly, stellar structure and dynamics are not satisfactorily described in models and are hard to observe except through asteroseismology, and supernova explosion models lack a consistent description based on physical processes and include heuristic elements. Current research extensively utilizes computation and numerical modeling.
## Future work
Although the foundations of nuclear astrophysics appear clear and plausible, many puzzles remain.
These include understanding helium fusion (specifically the 12C(α,γ)16O reaction(s)), astrophysical sites of the r-process, anomalous lithium abundances in population II stars, the explosion mechanism in core-collapse supernovae, and progenitors of thermonuclear supernovae.
An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, peripherals, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers.
As of September 2024, Android is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems), such as embedded and real-time systems, exist for many applications. Security-focused operating systems also exist. Some operating systems have low system requirements (e.g. light-weight Linux distribution). Others may have higher system requirements.
https://en.wikipedia.org/wiki/Operating_system
Some operating systems require installation or may come pre-installed with purchased computers (OEM-installation), whereas others may run directly from media (i.e. live CD) or flash memory (i.e. a LiveUSB from a USB stick).
## Definition and purpose
An operating system is difficult to define, but has been called "the layer of software that manages a computer's resources for its users and their applications". Operating systems include the software that is always running, called a kernel—but can include other software as well. The two other types of programs that can run on a computer are system programs—which are associated with the operating system, but may not be part of the kernel—and applications—all other software.
There are three main purposes that an operating system fulfills:
- Operating systems allocate resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory. On modern personal computers, users often want to run several applications at once. In order to ensure that one program cannot monopolize the computer's limited hardware resources, the operating system gives each application a share of the resource, either in time (CPU) or space (memory).
The operating system also must isolate applications from each other to protect them from errors and security vulnerabilities in another application's code, but enable communications between different applications.
- Operating systems provide an interface that abstracts the details of accessing hardware details (such as physical memory) to make things easier for programmers. Virtualization also enables the operating system to mask limited hardware resources; for example, virtual memory can provide a program with the illusion of nearly unlimited memory that exceeds the computer's actual memory.
- Operating systems provide common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten. Which services to include in an operating system varies greatly, and this functionality makes up the great majority of code for most operating systems.
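The time-slice sharing described in the first point can be sketched as a toy round-robin scheduler; the job names and burst times here are invented for illustration:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin scheduling.

    jobs: dict mapping job name -> remaining CPU time needed.
    quantum: time slice each job receives per turn.
    Returns the order of (job, time_used) slices as they run.
    """
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)   # a job never exceeds its slice
        timeline.append((name, used))
        if remaining > used:
            queue.append((name, remaining - used))  # not finished: requeue
    return timeline

schedule = round_robin({"editor": 3, "compiler": 5, "player": 2}, quantum=2)
print(schedule)
```

Each job advances only in bounded slices, so no single job can monopolize the CPU, at the cost of extra switches between jobs.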
## Types of operating systems
### Multicomputer operating systems
With multiprocessors, multiple CPUs share memory. A multicomputer or cluster computer has multiple CPUs, each of which has its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive; they are universal in cloud computing because of the size of the machine needed.
The different CPUs often need to send and receive messages to each other; to ensure good performance, the operating systems for these machines need to minimize this copying of packets. Newer systems are often multiqueue—separating groups of users into separate queues—to reduce the need for packet copying and support more concurrent users. Another technique is remote direct memory access, which enables each CPU to access memory belonging to other CPUs. Multicomputer operating systems often support remote procedure calls where a CPU can call a procedure on another CPU, or distributed shared memory, in which the operating system uses virtualization to generate shared memory that does not physically exist.
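The send/receive coordination of a multicomputer can be mimicked in miniature with threads and per-node inboxes standing in for CPUs and the interconnect; the two-node ping/pong protocol below is invented for illustration:

```python
import queue
import threading

# Each "node" has its own private state and an inbox; nodes interact
# only by passing messages, as CPUs in a multicomputer do.
inboxes = {"a": queue.Queue(), "b": queue.Queue()}
results = {}

def node_a():
    inboxes["b"].put(("a", "ping"))      # send a request to node b
    sender, reply = inboxes["a"].get()   # block until the reply arrives
    results["a"] = reply

def node_b():
    sender, msg = inboxes["b"].get()           # receive the request
    inboxes[sender].put(("b", msg + "/pong"))  # answer the requester

threads = [threading.Thread(target=node_a), threading.Thread(target=node_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {'a': 'ping/pong'}
```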
### Distributed systems
A distributed system is a group of distinct, networked computers—each of which might have their own operating system and file system. Unlike multicomputers, they may be dispersed anywhere in the world. Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system.
### Embedded
Embedded operating systems are designed to be used in embedded computer systems, whether they are internet of things objects or not connected to a network.
Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10 kilobytes, and the smallest are for smart cards. Examples include Embedded Linux, QNX, VxWorks, and the extra-small systems RIOT and TinyOS.
### Real-time
A real-time operating system is an operating system that guarantees to process events or data by or at a specific moment in time. Hard real-time systems require exact timing and are common in manufacturing, avionics, military, and other similar uses. With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones. In order for hard real-time systems to be sufficiently exact in their timing, they are often just a library with no protection between applications, such as eCos.
### Hypervisor
A hypervisor is an operating system that runs a virtual machine. The virtual machine is unaware that it is an application and operates as if it had its own hardware.
Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development, and debugging. They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system.
### Library
A library operating system (libOS) is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with a single application and configuration code to construct a unikernel: a specialized, single-address-space machine image (only the absolutely necessary pieces of code are extracted from libraries and bound together) that can be deployed to cloud or embedded environments.
The operating system code and application code are not executed in separate protection domains (there is only a single application running, at least conceptually, so there is no need to prevent interference between applications), and OS services are accessed via simple library calls (potentially inlining them based on compiler thresholds), without the usual overhead of context switches, in a way similar to embedded and real-time OSes.
Note that this overhead is not negligible: to the direct cost of mode switching it is necessary to add the indirect pollution of important processor structures (like CPU caches, the instruction pipeline, and so on), which affects both user-mode and kernel-mode performance.
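That mode-switch cost can be glimpsed from user space with a crude micro-benchmark comparing a plain function call against a real system call. The absolute numbers depend entirely on the machine and are not meaningful on their own; this is a sketch of the comparison, not a rigorous measurement:

```python
import os
import time

def plain():
    # Stays entirely in user mode: no kernel involvement.
    return 42

N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    plain()
t_user = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    os.getppid()  # a real system call: a mode switch into the kernel
t_kernel = time.perf_counter() - t0

print(f"function calls: {t_user:.4f}s  system calls: {t_kernel:.4f}s")
```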
## History
The first computers in the late 1940s and 1950s were directly programmed either with plugboards or with machine code inputted on media such as punch cards, without programming languages or operating systems. After the introduction of the transistor in the mid-1950s, mainframes began to be built.
These still needed professional operators who manually did what a modern operating system would do, such as scheduling programs to run, but mainframes still had rudimentary operating systems such as the Fortran Monitor System (FMS) and IBSYS. In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. The OS/360 also was the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.
Around the same time, teleprinters began to be used as terminals so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing.
The UNIX operating system originated as a development of MULTICS for a single user. Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD). To increase compatibility, the IEEE released the POSIX standard for operating system application programming interfaces (APIs), which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX is used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones.
### Microcomputers
The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980. For around five years, CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers. Later, IBM bought DOS (Disk Operating System) from Microsoft. After modifications requested by IBM, the resulting system was called MS-DOS (MicroSoft Disk Operating System) and was widely used on IBM microcomputers.
Later versions increased their sophistication, in part by borrowing features from UNIX.
Apple's Macintosh was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows later was rewritten as a stand-alone operating system, borrowing so many features from another (VAX VMS) that a large legal settlement was paid. In the twenty-first century, Windows continues to be popular on personal computers but has less market share of servers. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers but are also used on mobile devices and many other computer systems.
On mobile devices, Symbian OS was dominant at first, being usurped by BlackBerry OS (introduced 2002) and iOS for iPhones (from 2007). Later on, the open-source Android operating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular.
## Components
The components of an operating system are designed to ensure that various parts of a computer function cohesively. With the de facto obsolescence of DOS, all user software must interact with the operating system to access hardware.
### Kernel
The kernel is the part of the operating system that provides protection between different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power of malicious software and protecting private data, and ensuring that one program cannot monopolize the computer's resources. Most operating systems have two modes of operation: in user mode, the hardware checks that the software is only executing legal instructions, whereas the kernel has unrestricted powers and is not subject to these checks. The kernel also manages memory for other processes and controls access to input/output devices.
#### Program execution
The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system.
The operating system is also a set of services which simplify development and execution of application programs. Executing an application program typically involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. However, in some systems an application can request that the operating system execute another application within the same process, either as a subroutine or in a separate thread, e.g., the LINK and ATTACH facilities of OS/360 and successors.
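Process creation as a kernel service is visible from ordinary user code. A sketch using Python's standard subprocess module, which asks the kernel to create a child process running a separate program, waits for it, and collects its exit status:

```python
import subprocess
import sys

# Ask the kernel to create a new process running a separate program.
# The parent blocks until the child finishes and is reaped.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the child process')"],
    stdout=subprocess.PIPE,
    text=True,
)
output, _ = child.communicate()  # read the child's stdout, wait for exit
print("child said:", output.strip())
print("child exit status:", child.returncode)
```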
#### Interrupts
An interrupt (also known as an abort, exception, fault, signal, or trap) provides an efficient way for most operating systems to react to the environment. Interrupts cause the central processing unit (CPU) to have a control flow change away from the currently running program to an interrupt handler, also known as an interrupt service routine (ISR). An interrupt service routine may cause the central processing unit (CPU) to have a context switch.
The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system. However, several interrupt functions are common. The architecture and operating system must:
1. transfer control to an interrupt service routine.
1. save the state of the currently running process.
1. restore the state after the interrupt is serviced.
##### Software interrupt
A software interrupt is a message to a process that an event has occurred. This contrasts with a hardware interrupt — which is a message to the central processing unit (CPU) that an event has occurred. Software interrupts are similar to hardware interrupts — there is a change away from the currently running process. Similarly, both hardware and software interrupts execute an interrupt service routine.
Software interrupts may be normally occurring events. It is expected that a time slice will occur, so the kernel will have to perform a context switch. A computer program may set a timer to go off after a few seconds in case too much data causes an algorithm to take too long.
Software interrupts may be error conditions, such as a malformed machine instruction.
However, the most common error conditions are division by zero and accessing an invalid memory address.
Users can send messages to the kernel to modify the behavior of a currently running process. For example, in the command-line environment, pressing the interrupt character (usually Control-C) might terminate the currently running process.
To generate software interrupts for x86 CPUs, the INT assembly language instruction is available. The syntax is `INT X`, where `X` is the offset number (in hexadecimal format) to the interrupt vector table.
##### Signal
To generate software interrupts in Unix-like operating systems, the `kill(pid,signum)` system call will send a signal to another process. `pid` is the process identifier of the receiving process. `signum` is the signal number (in mnemonic format) to be sent. (The abrasive name of `kill` was chosen because early implementations only terminated the process.)
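On a POSIX system, the kill mechanism can be exercised by a process signaling itself. A sketch (SIGUSR1 is one of the user-defined signals; this will not run on Windows):

```python
import os
import signal

received = []

def handler(signum, frame):
    # Runs asynchronously when the signal is delivered to this process.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)  # install the handler
os.kill(os.getpid(), signal.SIGUSR1)    # kill(pid, signum): signal ourselves
print("got signals:", received)
```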
In Unix-like operating systems, signals inform processes of the occurrence of asynchronous events. To communicate asynchronously, interrupts are required.
One reason a process may need to communicate asynchronously with another process arises in a variation of the classic reader/writer problem. The writer receives a pipe from the shell for its output to be sent to the reader's input stream. The command-line syntax is `alpha | bravo`. `alpha` will write to the pipe when its computation is ready and then sleep in the wait queue. `bravo` will then be moved to the ready queue and soon will read from its input stream. The kernel will generate software interrupts to coordinate the piping.
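The pipe the shell creates for `alpha | bravo` is itself a kernel object, and it can be made directly with the kernel's pipe call. A single-process sketch playing both roles:

```python
import os

# Create a unidirectional kernel pipe: bytes written to write_fd
# become readable from read_fd, with the kernel buffering in between.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"output of alpha\n")  # the writer ("alpha") side
os.close(write_fd)                        # closing signals EOF to the reader

data = os.read(read_fd, 1024)             # the reader ("bravo") side
os.close(read_fd)
print(data.decode().strip())
```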
Signals may be classified into 7 categories. The categories are:
1. when a process finishes normally.
1. when a process has an error exception.
1. when a process runs out of a system resource.
1. when a process executes an illegal instruction.
1. when a process sets an alarm event.
1. when a process is aborted from the keyboard.
1. when a process has a tracing alert for debugging.
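For instance, the alarm-event category corresponds to `SIGALRM` on POSIX systems. A sketch of installing a handler for it:

```c
#include <signal.h>
#include <unistd.h>

/* Sketch: the "alarm event" category is delivered as SIGALRM.
 * The handler simply records that the signal arrived. */
static volatile sig_atomic_t alarm_fired = 0;

static void on_alarm(int signum) {
    (void)signum;
    alarm_fired = 1;
}

int wait_for_alarm(void) {
    signal(SIGALRM, on_alarm);   /* install the handler */
    alarm(1);                    /* kernel arms a 1-second timer */
    pause();                     /* sleep until any signal arrives */
    return alarm_fired;
}
```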
##### Hardware interrupt
#### Input/output
Input/output (I/O) devices are slower than the CPU. Therefore, it would slow down the computer if the CPU had to wait for each I/O to finish.
Instead, a computer may implement interrupts for I/O completion, avoiding the need for polling or busy waiting.
Some computers require an interrupt for each character or word, costing a significant amount of CPU time.
##### Direct memory access
(DMA) is an architecture feature to allow devices to bypass the CPU and access main memory directly. (Separate from the architecture, a device may perform direct memory access to and from main memory either directly or via a bus.)
##### Interrupt-driven I/O
When a computer user types a key on the keyboard, typically the character appears immediately on the screen. Likewise, when a user moves a mouse, the cursor immediately moves across the screen. This responsiveness is possible because each keystroke and mouse movement generates an interrupt. In interrupt-driven I/O, the device raises an interrupt for every character or word transmitted.
##### Direct memory access
Devices such as hard disk drives, solid-state drives, and magnetic tape drives can transfer data at a rate high enough that interrupting the CPU for every byte or word transferred, and having the CPU transfer the byte or word between the device and memory, would require too much CPU time.
Instead, data is transferred between the device and memory independently of the CPU by hardware such as a channel or a direct memory access controller; an interrupt is delivered only when all the data is transferred.
If a computer program executes a system call to perform a block I/O write operation, then the system call might execute the following instructions:
- Set the contents of the CPU's registers (including the program counter) into the process control block.
- Create an entry in the device-status table. The operating system maintains this table to keep track of which processes are waiting for which devices. One field in the table is the memory address of the process control block.
- Place all the characters to be sent to the device into a memory buffer.
- Set the memory address of the memory buffer to a predetermined device register.
- Set the buffer size (an integer) to another predetermined register.
- Execute the machine instruction to begin the writing.
- Perform a context switch to the next process in the ready queue.
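The register-setting steps above can be modeled with a toy, purely in-memory "device" (all names here are hypothetical; a real driver would write to memory-mapped hardware registers, and the transfer would complete asynchronously with an interrupt):

```c
#include <stddef.h>
#include <string.h>

/* Toy model of the block-write steps above (names hypothetical):
 * the "driver" stores the buffer address and size into device
 * registers, then the "device" copies the buffer when started. */
typedef struct {
    const char *buf_addr;   /* register: memory buffer address */
    size_t      buf_size;   /* register: buffer size */
    char        sink[64];   /* where the simulated device writes */
} toy_device;

size_t start_write(toy_device *dev, const char *buf, size_t n) {
    dev->buf_addr = buf;             /* set buffer address register */
    dev->buf_size = n;               /* set buffer size register */
    /* "begin the writing": the device transfers the data */
    memcpy(dev->sink, dev->buf_addr, dev->buf_size);
    return dev->buf_size;            /* device would raise an interrupt here */
}
```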
While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device will interrupt the currently running process by asserting an interrupt request. The device will also place an integer onto the data bus. Upon accepting the interrupt request, the operating system will:
- Push the contents of the program counter (a register) followed by the status register onto the call stack.
- Push the contents of the other registers onto the call stack. (Alternatively, the contents of the registers may be placed in a system table.)
- Read the integer from the data bus. The integer is an offset to the interrupt vector table. The vector table's instructions will then:
  - Access the device-status table.
  - Extract the process control block.
  - Perform a context switch back to the writing process.
When the writing process's time slice expires, the operating system will:
- Pop from the call stack the registers other than the status register and program counter.
- Pop from the call stack the status register.
- Pop from the call stack the address of the next instruction, and set it back into the program counter.
With the program counter now reset, the interrupted process will resume its time slice.
#### Memory management
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by the programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory.
Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which does not exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short. Since it is difficult to assign a meaningful result to such an operation, and since it is usually a sign of a misbehaving program, the kernel generally terminates the offending program and reports the error.
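A sketch of a segmentation violation on a POSIX system: a forked child touches an address it was never granted, the MMU traps, and the kernel kills the child with `SIGSEGV`, which the parent can observe:

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Demonstrates a segmentation violation in an isolated child process.
 * Returns the signal that terminated the child (expected: SIGSEGV). */
int provoke_segv(void) {
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {
        volatile int *bad = (int *)0;   /* address outside any mapping */
        *bad = 42;                      /* hardware raises a fault here */
        _exit(0);                       /* never reached */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```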
Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
#### Virtual memory
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that is not currently accessible, but has nonetheless been allocated to it, the kernel is interrupted. This kind of interrupt is typically a page fault.
When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has been allocated yet.
In modern operating systems, memory which is accessed less frequently can be temporarily stored on a disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.
### Concurrency
Concurrency refers to the operating system's ability to carry out multiple tasks simultaneously.
Virtually all modern operating systems support concurrency.
Threads enable splitting a process' work into multiple parts that can run simultaneously. The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating system kernel schedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives. During a context switch a running thread is suspended, its state is saved into the thread control block and stack, and the state of the new thread is loaded in. Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now can interrupt a thread (preemptive multitasking).
Threads have their own thread ID, program counter (PC), a register set, and a stack, but share code, heap data, and other resources with other threads of the same process. Thus, there is less overhead to create a thread than a new process. On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs.
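A sketch of this sharing with POSIX threads: two threads of one process update a single shared counter, serialized by a mutex, while each thread runs on its own stack:

```c
#include <pthread.h>

/* Two threads share the same global counter but have separate stacks.
 * A mutex serializes the shared updates so no increment is lost. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                  /* shared data, guarded */
        pthread_mutex_unlock(&lock);
    }
    return arg;
}

long run_two_threads(void) {
    counter = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the mutex, the two unsynchronized increments would race and the final count would usually fall short of 200000.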
Parallelism with multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently.
### File system
Permanent storage devices used in twenty-first century computers, unlike volatile dynamic random-access memory (DRAM), are still accessible after a crash or power failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write. The two main technologies are a hard drive consisting of magnetic disks, and flash memory (a solid-state drive that stores data in electrical circuits). The latter is more expensive but faster and more durable.
File systems are an abstraction used by the operating system to simplify access to permanent storage. They provide human-readable filenames and other metadata, increase performance via amortization of accesses, prevent multiple threads from accessing the same section of memory, and include checksums to identify corruption. File systems are composed of files (named collections of data, of an arbitrary size) and directories (also called folders) that list human-readable filenames and other directories.
An absolute file path begins at the root directory and lists subdirectories divided by punctuation, while a relative path defines the location of a file from a directory.
System calls (which are sometimes wrapped by libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application. The operating system's efforts to reduce latency include storing recently requested blocks of memory in a cache and prefetching data that the application has not asked for, but might need next. Device drivers are software specific to each input/output (I/O) device that enables the operating system to work without modification over different hardware.
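A sketch of these system calls on a POSIX system (the path below is illustrative): create and write a file, then reopen it, read the contents back, and delete it.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create a file at path, write "data" into it, read it back into
 * out, delete the file, and return the number of bytes read. */
ssize_t roundtrip(const char *path, char *out, size_t cap) {
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) return -1;
    (void)write(fd, "data", 4);
    close(fd);

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, out, cap);
    close(fd);
    unlink(path);             /* delete the file again */
    return n;
}
```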
Another component of file systems is a dictionary that maps a file's name and metadata to the data block where its contents are stored. Most file systems use directories to convert file names to file numbers.
To find the block number, the operating system uses an index (often implemented as a tree). Separately, there is a free space map to track free blocks, commonly implemented as a bitmap. Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reduce fragmentation.
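A free-space bitmap can be sketched in a few lines (the sizes here are toy values; real file systems track millions of blocks):

```c
#include <stdint.h>

/* Free-space map as a bitmap: one bit per disk block (1 = in use).
 * first_free() scans for the lowest clear bit. */
#define NBLOCKS 64
static uint8_t freemap[NBLOCKS / 8];

void mark_used(int b) { freemap[b / 8] |=  (uint8_t)(1u << (b % 8)); }
void mark_free(int b) { freemap[b / 8] &= (uint8_t)~(1u << (b % 8)); }

int first_free(void) {
    for (int b = 0; b < NBLOCKS; b++)
        if (!(freemap[b / 8] & (1u << (b % 8))))
            return b;
    return -1;                      /* disk full */
}
```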
Maintaining data reliability in the face of a computer crash or hardware failure is another concern. File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing. Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks) and checksums to detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption.
### Security
Security means protecting users from other users of the same computer, as well as from those seeking remote access to it over a network.
Operating system security rests on achieving the CIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of a denial of service attack). As with other computer systems, isolating security domains—in the case of operating systems, the kernel, processes, and virtual machines—is key to achieving security. Other ways to increase security include simplicity to minimize the attack surface, locking access to resources by default, checking all requests for authorization, the principle of least authority (granting the minimum privilege essential for performing a task), privilege separation, and reducing shared data.
Some operating system designs are more secure than others. Those with no isolation between the kernel and applications are least secure, while those with a monolithic kernel like most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design features microkernels that separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach.
Unikernels are another approach that improves security by minimizing the kernel and separating out other operating system functionality by application.
Most operating systems are written in C or C++, which create potential vulnerabilities for exploitation. Despite attempts to protect against them, vulnerabilities are caused by buffer overflow attacks, which are enabled by the lack of bounds checking. Hardware vulnerabilities, some of them caused by CPU optimizations, can also be used to compromise the operating system. There are known instances of operating system programmers deliberately implanting vulnerabilities, such as back doors.
Operating systems security is hampered by their increasing complexity and the resulting inevitability of bugs. Because formal verification of operating systems may not be feasible, developers use operating system hardening to reduce vulnerabilities, e.g. address space layout randomization, control-flow integrity, access restrictions, and other techniques. There are no restrictions on who can contribute code to open source operating systems; such operating systems have transparent change histories and distributed governance structures.
Open source developers strive to work collaboratively to find and eliminate security vulnerabilities, using code review and type checking to expunge malicious code. Andrew S. Tanenbaum advises releasing the source code of all operating systems, arguing that it prevents developers from placing trust in secrecy and thus relying on the unreliable practice of security by obscurity.
### User interface
A user interface (UI) is essential to support human interaction with a computer. The two most common user interface types for any computer are
- command-line interface, where computer commands are typed, line-by-line,
- graphical user interface (GUI) using a visual environment, most commonly a combination of the window, icon, menu, and pointer elements, also known as WIMP.
For personal computers, including smartphones and tablet computers, and for workstations, user input is typically from a combination of keyboard, mouse, and trackpad or touchscreen, all of which are connected to the operating system with specialized software.
Personal computer users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most personal computers. The software to support GUIs is more complex than a command line for input and plain text output. Plain text output is often preferred by programmers, and is easy to support.
## Operating system development as a hobby
A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.
In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system.
In either case, the hobbyist is their own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
Examples of hobby operating systems include Syllable and TempleOS.
## Diversity of operating systems and portability
If an application is written for use on a specific operating system, and is ported to another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
## Popular operating systems
As of September 2024, Android (based on the Linux kernel) is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%.
Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems.
### Linux
Linux is free software distributed under the GNU General Public License (GPL), which means that all of its derivatives are legally required to release their source code. Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy.
Its design is similar to other UNIX systems not using a microkernel. It is written in C and uses UNIX System V syntax, but also supports BSD syntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, while supporting multiple users and employing preemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16 MB of RAM, but still is used on large multiprocessor systems. Similar to other UNIX systems, Linux distributions are composed of a kernel, system libraries, and system utilities.
Linux has a graphical user interface (GUI) with a desktop, folder and file icons, as well as the option to access the operating system via a command line.
Android is a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity on smartphones and, to a lesser extent, embedded systems needing a GUI, such as "smart watches, automotive dashboards, airplane seatbacks, medical devices, and home appliances". Unlike Linux, much of Android is written in Java and uses object-oriented design.
### Microsoft Windows
Windows is a proprietary operating system that is widely used on desktop computers, laptops, tablets, phones, workstations, enterprise servers, and Xbox consoles. The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on, energy efficiency and support for dynamic devices also became priorities.
Windows Executive works via kernel-mode objects for important data structures like processes, threads, and sections (memory objects, for example files).
The operating system supports demand paging of virtual memory, which speeds up I/O for many applications. I/O device drivers use the Windows Driver Model. The NTFS file system has a master table and each file is represented as a record with metadata. The scheduling includes preemptive multitasking. Windows has many security features; especially important are the use of access-control lists and integrity levels. Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.
A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
The corresponding professional activity is called generally data modeling or, more specifically, database design.
Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or a data scholar.
A data modeling language and notation are often represented in graphical form as diagrams.
A data model can sometimes be referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
A data model explicitly determines the structure of data; conversely, structured data is data organized according to an explicit data model or data structure. Structured data is in contrast to unstructured data and semi-structured data.
## Overview
The term data model can refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of the objects and relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization.
https://en.wikipedia.org/wiki/Data_model
At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses.
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such as word processing documents, email messages, pictures, digital audio, and video: XDM, for example, provides a data model for XML documents.
### The role of data models
The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above.
However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".
- "Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces".
- "Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance".
- "Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25–70% of the cost of current systems".
- "Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardized. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper".
The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.
A data model explicitly determines the structure of data.
Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.[3]
### Three perspectives
A data model instance may be one of three kinds according to ANSI in 1975:
1. Conceptual data model: describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
1. Logical data model: describes the semantics, as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object oriented classes, and XML tags, among other things.
1. Physical data model: describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.
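The three perspectives can be made concrete with a small sketch. The following is an illustrative example only, assuming a hypothetical "a person places orders" domain; none of the names come from the ANSI report itself:

```python
# Sketch: one domain ("a person places orders") seen through the three
# ANSI perspectives. All names here are illustrative, not from a standard.

# 1. Conceptual: entity classes and a relationship assertion, no technology.
conceptual = {
    "entities": ["Person", "Order"],
    "relationships": [("Person", "places", "Order")],
}

# 2. Logical: the same semantics rendered as tables and columns.
logical = {
    "person": ["person_id", "name"],
    "order": ["order_id", "person_id", "placed_on"],  # FK realizes "places"
}

# 3. Physical: storage concerns only; the meaning lives in the layers above.
physical = {
    "order": {"tablespace": "ts_sales", "partition_by": "placed_on"},
}
```

Note that the physical layer says nothing about people or orders: swapping the tablespace or partitioning scheme would leave the conceptual and logical layers untouched, which is exactly the independence the three-schema approach is meant to provide.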
The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of a conceptual data model. Such a design can be detailed into a logical data model. In later stages, this model may be translated into physical data model. However, it is also possible to implement a conceptual model directly.
## History
One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958) (see Janis A. Bubenko jr (2007), "From Information Algebra to Enterprise Modelling and Ontologies - a Historical Perspective on Modelling for Information Systems", in: Conceptual Modelling in Information Systems Engineering, John Krogstie et al. eds., pp. 1–18).
Young and Kent argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, which aimed at essentially the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
In the 1960s data modeling gained more significance with the initiation of the management information system (MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generation database system, called Integrated Data Store (IDS), was designed by Charles Bachman at General Electric. Two famous database models, the network data model and the hierarchical data model, were proposed during this period of time".
Towards the end of the 1960s, Edgar F. Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.
In the 1970s entity–relationship modeling emerged as a new type of conceptual data modeling, originally formalized in 1976 by Peter Chen.
### Entity–relationship model
Entity–relationship models were used in the first stage of information system design, during the requirements analysis, to describe information needs or the type of information that is to be stored in a database. This technique can describe any ontology, i.e., an overview and classification of concepts and their relationships, for a certain area of interest.
In the 1970s G.M. Nijssen developed "Natural Language Information Analysis Method" (NIAM) method, and developed this in the 1980s in cooperation with Terry Halpin into Object–Role Modeling (ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based.
Bill Kent, in his 1978 book Data and Reality, compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth.
In the 1980s, according to Jan L. Harrington (2000), "the development of the object-oriented paradigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."
During the early 1990s, three Dutch mathematicians Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued the development on the work of G.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information Modeling FCO-IM.
## Types
### Database model
A database model is a specification describing how a database is structured and used.
Several such models have been suggested. Common models include:
Flat model
This may not strictly qualify as a data model. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.
Hierarchical model
The hierarchical model is similar to the network model except that links in the hierarchical model form a tree structure, while the network model allows arbitrary graph.
Network model
This model organizes data using two fundamental constructs, called records and sets. Records contain fields, and sets define one-to-many relationships between records: one owner, many members.
The network data model is an abstraction of the design concept used in the implementation of databases.
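The owner/member structure of a network-model set can be sketched in a few lines. This is a minimal, made-up illustration of the record-and-set idea, not any particular CODASYL implementation:

```python
# Sketch of the network model's two constructs (illustrative, minimal):
# a record is a dict of fields; a set links one owner record to many members.
class NetworkSet:
    def __init__(self, owner):
        self.owner = owner          # exactly one owner record
        self.members = []           # zero or more member records

    def insert(self, member):
        self.members.append(member)

dept = {"name": "Engineering"}       # owner record
works_in = NetworkSet(dept)          # set type: "works in"
works_in.insert({"name": "Ada"})     # member records
works_in.insert({"name": "Edsger"})
print(len(works_in.members))  # 2
```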
Relational model
The relational model is a database model based on first-order predicate logic. Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The power of the relational data model lies in its mathematical foundations and a simple user-level paradigm.
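The predicate view of a relation can be sketched directly: a relation is a set of tuples, and a constraint is a predicate that every tuple must satisfy. Names and rules below are invented for illustration:

```python
# Sketch: a relation as a set of tuples, with a predicate acting as a
# constraint on the allowed values (names and rules are illustrative).
employee = {("Ada", 30), ("Edsger", 45)}   # relation: set of (name, age) tuples

def satisfies(relation, predicate):
    """Check that every tuple in the relation satisfies the constraint."""
    return all(predicate(*t) for t in relation)

# Constraint expressed as a predicate over the tuple's values.
adult = lambda name, age: age >= 18
print(satisfies(employee, adult))  # True
```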
Object–relational model
Similar to a relational database model, but objects, classes, and inheritance are directly supported in database schemas and in the query language.
### Object–role modeling
A method of data modeling that has been defined as "attribute free", and "fact-based". The result is a verifiably correct system, from which other common artifacts, such as ERD, UML, and semantic models may be derived. Associations between data objects are described during the database design procedure, such that normalization is an inevitable result of the process.
Star schema
The simplest style of data warehouse schema.
The star schema consists of a few "fact tables" (possibly only one, justifying the name) referencing any number of "dimension tables". The star schema is considered an important special case of the snowflake schema.
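A minimal star schema can be sketched as one fact table whose rows reference dimension tables by key. All table and column names here are made up for illustration:

```python
# Sketch of a star schema: one fact table referencing two dimension tables
# by foreign key (all table and column names are invented).
dim_date = {1: {"date": "2024-01-01"}}
dim_product = {10: {"name": "widget"}}
fact_sales = [
    {"date_key": 1, "product_key": 10, "amount": 250.0},
]

# A query joins the fact row to its dimensions through the foreign keys.
row = fact_sales[0]
report = (dim_date[row["date_key"]]["date"],
          dim_product[row["product_key"]]["name"],
          row["amount"])
print(report)  # ('2024-01-01', 'widget', 250.0)
```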
### Data structure diagram
A data structure diagram (DSD) is a diagram and data model used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. The basic graphic elements of DSDs are boxes, representing entities, and arrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities.
Data structure diagrams are an extension of the entity–relationship model (ER model). In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.
There are several styles for representing data structure diagrams, with the notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality.
Entity–relationship model
An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), could be used to represent an abstract conceptual data model (or semantic data model or physical data model) used in software engineering to represent structured data. There are several notations used for ERMs. Like DSD's, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes.
There are several styles for representing data structure diagrams, with a notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality.
### Geographic data model
A data model in Geographic information systems is a mathematical construct for representing geographic objects or surfaces as data. For example,
- the vector data model represents geography as points, lines, and polygons
- the raster data model represents geography as cell matrixes that store numeric values;
- and the Triangulated irregular network (TIN) data model represents geography as sets of contiguous, nonoverlapping triangles.
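The three geographic data models can be sketched side by side on the same tiny area. Coordinates and cell values below are invented for illustration:

```python
# Sketch of the three GIS data models (all coordinates/values invented).

# Vector: geography as points, lines, and polygons.
vector = {
    "point": (2.0, 3.0),
    "line": [(0, 0), (1, 1), (2, 1)],
    "polygon": [(0, 0), (4, 0), (4, 4), (0, 4)],
}

# Raster: geography as a cell matrix storing numeric values (e.g. elevation).
raster = [
    [10, 12, 11],
    [13, 15, 14],
]

# TIN: geography as contiguous, non-overlapping triangles,
# each a triple of (x, y, z) vertices.
tin = [
    ((0, 0, 10), (1, 0, 12), (0, 1, 11)),
    ((1, 0, 12), (1, 1, 15), (0, 1, 11)),
]
```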
### Generic data model
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain.
This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant.
### Semantic data model
A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. A semantic data model is sometimes called a conceptual data model.
The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data.
The real world, in terms of resources, ideas, events, etc., is symbolically defined within physical data stores, and the semantic data model defines how those stored symbols relate to the real world. The model must therefore be a true representation of the real world.
## Topics
### Data architecture
Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.
A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
Essential to realizing the target state, Data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system.
### Data modeling
Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining business requirements for a database. It is sometimes called database modeling because a data model is eventually implemented in a database.
The figure illustrates the way data models are developed and used today. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the start point for interface or database design.
### Data properties
Some important properties of data for which requirements need to be met are:
- definition-related properties
- relevance: the usefulness of the data in the context of your business.
- clarity: the availability of a clear and shared definition for the data.
- consistency: the compatibility of the same type of data from different sources.
- content-related properties
- timeliness: the availability of data at the time required and how up-to-date that data is.
- accuracy: how close to the truth the data is.
- properties related to both definition and content
- completeness: how much of the required data is available.
- accessibility: where, how, and to whom the data is available or not available (e.g. security).
- cost: the cost incurred in obtaining the data, and making it available for use.
### Data organization
Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the physical data model, but in the original ANSI three schema architecture, it is called "logical".
In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
While data analysis is a common term for data modeling, the activity actually has more in common with the ideas and methods of synthesis (inferring general concepts from particular instances) than with analysis (identifying component concepts from more general ones). (Presumably we call ourselves systems analysts because no one can say "systems synthesists".) Data modeling strives to bring the data structures of interest together into a cohesive, inseparable whole by eliminating unnecessary data redundancies and by relating data structures with relationships.
A different approach is to use adaptive systems such as artificial neural networks that can autonomously create implicit models of data.
Data structure
A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data.
Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins with the choice of an abstract data type.
A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicated grammar for a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system.
The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identify abstractions of such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such an abstract entity class is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people.
### Data model theory
The term data model can have two meanings:
1. A data model theory, i.e. a formal description of how data may be structured and accessed.
1. A data model instance, i.e. applying a data model theory to create a practical data model instance for some particular application.
A data model theory has three main components:
- The structural part: a collection of data structures which are used to create databases representing the entities or objects modeled by the database.
- The integrity part: a collection of rules governing the constraints placed on these data structures to ensure structural integrity.
- The manipulation part: a collection of operators which can be applied to the data structures, to update and query the data contained in the database.
For example, in the relational model, the structural part is based on a modified concept of the mathematical relation; the integrity part is expressed in first-order logic and the manipulation part is expressed using the relational algebra, tuple calculus and domain calculus.
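The three components can be sketched with a toy relational flavor. The function names below are illustrative, not part of any standard:

```python
# Sketch of the three components of a data model theory, using a toy
# relational flavor (function names are illustrative, not standard).

# Structural part: a relation is a set of same-shape tuples.
people = {("Ada", 36), ("Alan", 41)}

# Integrity part: rules constraining the allowed structures.
def check_integrity(relation):
    return all(isinstance(age, int) and age >= 0 for _, age in relation)

# Manipulation part: operators to query the structures
# (this one mimics relational selection).
def select(relation, predicate):
    return {t for t in relation if predicate(t)}

assert check_integrity(people)
print(select(people, lambda t: t[1] > 40))  # {('Alan', 41)}
```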
A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. Business requirements are normally captured by a semantic logical data model.
This is transformed into a physical data model instance, from which a physical database is generated. For example, a data modeler may use a data modeling tool to create an entity–relationship model of the corporate data repository of some business enterprise. This model is transformed into a relational model, which in turn generates a relational database.
### Patterns
Patterns are common data modeling structures that occur in many data models.
## Related models
### Data-flow diagram
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It differs from the flowchart as it shows the data flow instead of the control flow of the program. A data-flow diagram can also be used for the visualization of data processing (structured design). Data-flow diagrams were invented by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data-flow graph" model of computation.
It is common practice to draw a context-level data-flow diagram first which shows the interaction between the system and outside entities. The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts.
This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled.
### Information model
An Information model is not a type of data model, but more or less an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
According to Lee (1999) an information model is a representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. It can provide sharable, stable, and organized structure of information requirements for the domain context. More in general the term information model is used for models of individual things, such as facilities, buildings, process plants, etc.
In those cases the concept is specialised to Facility Information Model, Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity–relationship models or XML schemas.
### Object model
An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
In computing the term object model has a distinct second meaning of the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them. For example, the Java object model, the COM object model, or the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
Object–role modeling
Object–Role Modeling (ORM) is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Object–Role Modeling is a fact-oriented method for performing systems analysis at the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand.
The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).
### Unified Modeling Language models
The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including:
- Conceptual things such as business processes and system functions
- Concrete things such as programming language statements, database schemas, and
- Reusable software components.
UML offers a mix of functional models, data models, and database models.
|
https://en.wikipedia.org/wiki/Data_model
|
In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features:
1. The application can update any replica independently, concurrently and without coordinating with other replicas.
1. An algorithm (itself part of the data type) automatically resolves any inconsistencies that might occur.
1. Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge.
The CRDT concept was formally defined in 2011 by Marc Shapiro, Nuno Preguiça, Carlos Baquero and Marek Zawirski. Development was initially motivated by collaborative text editing and mobile computing. CRDTs have also been used in online chat systems, online gambling, and in the SoundCloud audio distribution platform. The NoSQL distributed databases Redis, Riak and Cosmos DB have CRDT data types.
## Background
Concurrent updates to multiple replicas of the same data, without coordination between the computers hosting the replicas, can result in inconsistencies between the replicas, which in the general case may not be resolvable. Restoring consistency and data integrity when there are conflicts between updates may require some or all of the updates to be entirely or partially dropped.
Accordingly, much of distributed computing focuses on the problem of how to prevent concurrent updates to replicated data.
|
https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
|
But another possible approach is optimistic replication, where all concurrent updates are allowed to go through, possibly creating inconsistencies, and the results are merged or "resolved" later. In this approach, consistency between the replicas is eventually re-established via "merges" of differing replicas. While optimistic replication might not work in the general case, there is a significant and practically useful class of data structures, CRDTs, where it does work: where it is always possible to merge or resolve concurrent updates on different replicas of the data structure without conflicts. This makes CRDTs ideal for optimistic replication.
As an example, a one-way Boolean event flag is a trivial CRDT: one bit, with a value of true or false. True means some particular event has occurred at least once. False means the event has not occurred. Once set to true, the flag cannot be set back to false (an event having occurred cannot un-occur).
The resolution method is "true wins": when merging a replica where the flag is true (that replica has observed the event) and another one where the flag is false (that replica has not observed the event), the resolved result is true: the event has been observed.
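This one-way event flag can be sketched in a few lines of Python. This is an illustrative sketch, not code from any particular library; the class and method names are invented for the example:

```python
class EventFlag:
    """One-way Boolean event flag: a trivial state-based CRDT."""

    def __init__(self):
        self.value = False  # False: the event has not been observed

    def set(self):
        self.value = True  # one-way: the flag can never return to False

    def merge(self, other):
        # "True wins": logical OR is commutative, associative, and idempotent,
        # so replicas converge regardless of merge order or duplication.
        merged = EventFlag()
        merged.value = self.value or other.value
        return merged
```

Because `or` is idempotent, re-merging the same replica state (for example after a duplicated gossip message) changes nothing.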
## Types of CRDTs
There are two approaches to CRDTs, both of which can provide strong eventual consistency: state-based CRDTs and operation-based CRDTs.
### State-based CRDTs
State-based CRDTs (also called convergent replicated data types, or CvRDTs) are defined by two types, a type for local states and a type for actions on the state, together with three functions: a function to produce an initial state, a merge function of states, and a function to apply an action to update a state. State-based CRDTs simply send their full local state to other replicas on every update, where the received new state is then merged into the local state. To ensure eventual convergence, the functions should fulfill the following properties:
The merge function should compute the join for any pair of replica states, and should form a semilattice with the initial state as the neutral element. In particular, this means that the merge function must be commutative, associative, and idempotent.
The intuition behind commutativity, associativity and idempotence is that these properties make the CRDT invariant under message re-ordering and duplication. Furthermore, the update function must be monotone with regard to the partial order defined by the semilattice.
Delta state CRDTs (or simply Delta CRDTs) are optimized state-based CRDTs where only recently applied changes to a state are disseminated instead of the entire state.
### Operation-based CRDTs
Operation-based CRDTs (also called commutative replicated data types, or CmRDTs) are defined without a merge function. Instead of transmitting states, the update actions are transmitted directly to replicas and applied. For example, an operation-based CRDT of a single integer might broadcast the operations (+10) or (−20). The application of operations should still be commutative and associative. However, instead of requiring that application of operations is idempotent, stronger assumptions on the communications infrastructure are expected: all operations must be delivered to the other replicas without duplication.
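The integer-counter example above can be sketched as a minimal operation-based CRDT in Python. This is a hedged illustration of the idea, not a production design: real systems would pair it with middleware guaranteeing exactly-once delivery:

```python
class OpCounter:
    """Operation-based integer counter: replicas exchange operations, not state."""

    def __init__(self):
        self.value = 0

    def add(self, n):
        # Apply locally, then return the operation for broadcast to other replicas.
        self.value += n
        return ("add", n)

    def apply(self, op):
        # Integer addition is commutative and associative, so delivery order
        # does not matter -- but each op must be delivered exactly once,
        # because addition is not idempotent.
        kind, n = op
        assert kind == "add"
        self.value += n
```

If two replicas each generate an operation and cross-deliver them, both end up with the same value even though the operations arrive in different orders.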
Pure operation-based CRDTs are a variant of operation-based CRDTs that reduces the metadata size.
### Comparison
The two alternatives are theoretically equivalent, as each can emulate the other.
However, there are practical differences.
State-based CRDTs are often simpler to design and to implement; their only requirement from the communication substrate is some kind of gossip protocol.
Their drawback is that the entire state of every CRDT must be transmitted eventually to every other replica, which may be costly.
In contrast, operation-based CRDTs transmit only the update operations, which are typically small.
However, operation-based CRDTs require guarantees from the communication middleware: that the operations are not dropped or duplicated when transmitted to the other replicas, and that they are delivered in causal order.
While operation-based CRDTs place more requirements on the protocol for transmitting operations between replicas, they use less bandwidth than state-based CRDTs when the number of transactions is small in comparison to the size of the internal state.
However, since the state-based CRDT merge function is associative, merging with the state of some replica yields all previous updates to that replica. Gossip protocols work well for propagating state-based CRDT state to other replicas while reducing network use and handling topology changes.
Some lower bounds on the storage complexity of state-based CRDTs are known.
## Known CRDTs
### G-Counter (Grow-only Counter)
This state-based CRDT implements a counter for a cluster of n nodes. Each node in the cluster is assigned an ID from 0 to n - 1, which is retrieved with a call to `myId()`. Thus each node is assigned its own slot in the array P, which it increments locally. Updates are propagated in the background, and merged by taking the `max()` of every element in P. The compare function is included to illustrate a partial order on the states.
The merge function is commutative, associative, and idempotent. The update function monotonically increases the internal state according to the compare function. This is thus a correctly defined state-based CRDT and will provide strong eventual consistency. The operation-based CRDT equivalent broadcasts increment operations as they are received.
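The G-Counter description above translates directly into Python. This is a sketch following the article's pseudocode (`myId()` becomes a constructor argument; the array `P` is a plain list), not a reference implementation:

```python
class GCounter:
    """Grow-only counter for a cluster of n nodes (state-based CRDT)."""

    def __init__(self, n, my_id):
        self.n = n
        self.my_id = my_id    # this node's slot, 0 .. n-1 (the article's myId())
        self.P = [0] * n

    def increment(self):
        self.P[self.my_id] += 1  # each node only ever writes its own slot

    def value(self):
        return sum(self.P)

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent.
        merged = GCounter(self.n, self.my_id)
        merged.P = [max(a, b) for a, b in zip(self.P, other.P)]
        return merged

    def compare(self, other):
        # Partial order on states: self <= other iff every slot is <=.
        return all(a <= b for a, b in zip(self.P, other.P))
```

Note that `increment` is monotone under `compare`, and both inputs to `merge` compare less than or equal to the result, which is what strong eventual consistency requires.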
### PN-Counter (Positive-Negative Counter)
A common strategy in CRDT development is to combine multiple CRDTs to make a more complex CRDT. In this case, two G-Counters are combined to create a data type supporting both increment and decrement operations. The "P" G-Counter counts increments; and the "N" G-Counter counts decrements. The value of the PN-Counter is the value of the P counter minus the value of the N counter. Merge is handled by letting the merged P counter be the merge of the two P G-Counters, and similarly for N counters. Note that the CRDT's internal state must increase monotonically, even though its external state as exposed through query can return to previous values.
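The P/N construction can be sketched as follows. For self-containment the two internal G-Counters are written inline as plain lists rather than composed from a separate class; names are invented for the example:

```python
class PNCounter:
    """Counter supporting increment and decrement, built from two grow-only counters."""

    def __init__(self, n, my_id):
        self.my_id = my_id
        self.P = [0] * n  # grow-only: increments observed per node
        self.N = [0] * n  # grow-only: decrements observed per node

    def increment(self):
        self.P[self.my_id] += 1

    def decrement(self):
        self.N[self.my_id] += 1

    def value(self):
        # External value may go up and down, even though the internal
        # state (P, N) only ever grows.
        return sum(self.P) - sum(self.N)

    def merge(self, other):
        merged = PNCounter(len(self.P), self.my_id)
        merged.P = [max(a, b) for a, b in zip(self.P, other.P)]
        merged.N = [max(a, b) for a, b in zip(self.N, other.N)]
        return merged
```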
### G-Set (Grow-only Set)
The G-Set (grow-only set) is a set which only allows adds. An element, once added, cannot be removed. The merger of two G-Sets is their union.
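A G-Set is little more than a wrapper around set union, as this illustrative sketch shows:

```python
class GSet:
    """Grow-only set: elements can be added but never removed."""

    def __init__(self):
        self.items = set()

    def add(self, e):
        self.items.add(e)

    def query(self, e):
        return e in self.items

    def merge(self, other):
        # Union is commutative, associative, and idempotent.
        merged = GSet()
        merged.items = self.items | other.items
        return merged
```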
### 2P-Set (Two-Phase Set)
Two G-Sets (grow-only sets) are combined to create the 2P-set. With the addition of a remove set (called the "tombstone" set), elements can be added and also removed. Once removed, an element cannot be re-added; that is, once an element e is in the tombstone set, query will never again return True for that element. The 2P-set uses "remove-wins" semantics, so `remove(e)` takes precedence over `add(e)`.
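A minimal 2P-Set sketch, with the two grow-only sets written inline (names are invented for the example):

```python
class TwoPSet:
    """Two-phase set: an add set plus a grow-only tombstone set."""

    def __init__(self):
        self.added = set()
        self.removed = set()  # the "tombstone" set

    def add(self, e):
        self.added.add(e)

    def remove(self, e):
        if e in self.added:   # can only remove an element that was observed
            self.removed.add(e)

    def query(self, e):
        # Remove wins: once tombstoned, the element is never a member again.
        return e in self.added and e not in self.removed

    def merge(self, other):
        merged = TwoPSet()
        merged.added = self.added | other.added
        merged.removed = self.removed | other.removed
        return merged
```

Because the merge takes the union of both tombstone sets, a concurrent `add(e)` on one replica and `remove(e)` on another resolves to "removed".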
### LWW-Element-Set (Last-Write-Wins-Element-Set)
LWW-Element-Set is similar to 2P-Set in that it consists of an "add set" and a "remove set", with a timestamp for each element.
Elements are added to an LWW-Element-Set by inserting the element into the add set, with a timestamp. Elements are removed from the LWW-Element-Set by being added to the remove set, again with a timestamp. An element is a member of the LWW-Element-Set if it is in the add set, and either not in the remove set, or in the remove set but with an earlier timestamp than the latest timestamp in the add set. Merging two replicas of the LWW-Element-Set consists of taking the union of the add sets and the union of the remove sets. When timestamps are equal, the "bias" of the LWW-Element-Set comes into play. A LWW-Element-Set can be biased towards adds or removals.
The advantage of LWW-Element-Set over 2P-Set is that, unlike 2P-Set, LWW-Element-Set allows an element to be reinserted after having been removed.
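The membership rule and merge can be sketched in Python. This sketch keeps only the latest timestamp per element in each set, which is equivalent for the membership test; timestamps are caller-supplied integers, and the bias parameter is an invented name:

```python
class LWWElementSet:
    """Last-write-wins element set: membership decided by latest timestamp."""

    def __init__(self, bias_add=True):
        self.adds = {}        # element -> latest add timestamp seen
        self.removes = {}     # element -> latest remove timestamp seen
        self.bias_add = bias_add  # tie-break when timestamps are equal

    def add(self, e, ts):
        self.adds[e] = max(ts, self.adds.get(e, ts))

    def remove(self, e, ts):
        self.removes[e] = max(ts, self.removes.get(e, ts))

    def query(self, e):
        if e not in self.adds:
            return False
        if e not in self.removes:
            return True
        if self.adds[e] == self.removes[e]:
            return self.bias_add   # the set's "bias" decides ties
        return self.adds[e] > self.removes[e]

    def merge(self, other):
        # Union of add sets and union of remove sets, keeping max timestamps.
        merged = LWWElementSet(self.bias_add)
        for src, dst in ((self.adds, merged.adds), (other.adds, merged.adds),
                         (self.removes, merged.removes), (other.removes, merged.removes)):
            for e, ts in src.items():
                dst[e] = max(ts, dst.get(e, ts))
        return merged
```

Unlike the 2P-Set, an element removed at time 2 can be reinserted by an add at time 3.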
### OR-Set (Observed-Remove Set)
OR-Set resembles LWW-Element-Set, but using unique tags instead of timestamps. For each element in the set, a list of add-tags and a list of remove-tags are maintained. An element is inserted into the OR-Set by having a new unique tag generated and added to the add-tag list for the element. Elements are removed from the OR-Set by having all the tags in the element's add-tag list added to the element's remove-tag (tombstone) list. To merge two OR-Sets, for each element, let its add-tag list be the union of the two add-tag lists, and likewise for the two remove-tag lists. An element is a member of the set if and only if the add-tag list less the remove-tag list is nonempty.
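The tag bookkeeping above can be sketched as follows. This is an illustrative simplification (each replica stores all tags locally; real implementations also prune tombstones), with `uuid4` standing in for "a new unique tag":

```python
import uuid


class ORSet:
    """Observed-remove set: unique tags instead of timestamps."""

    def __init__(self):
        self.add_tags = {}     # element -> set of unique add tags
        self.remove_tags = {}  # element -> set of tombstoned tags

    def add(self, e):
        # Each add generates a fresh unique tag for the element.
        self.add_tags.setdefault(e, set()).add(uuid.uuid4())

    def remove(self, e):
        # Tombstone every add tag observed so far; tags added concurrently
        # elsewhere are unaffected, so "add wins" over a concurrent remove.
        self.remove_tags.setdefault(e, set()).update(self.add_tags.get(e, set()))

    def query(self, e):
        # Member iff some add tag has not been tombstoned.
        return bool(self.add_tags.get(e, set()) - self.remove_tags.get(e, set()))

    def merge(self, other):
        merged = ORSet()
        for mine, theirs, out in ((self.add_tags, other.add_tags, merged.add_tags),
                                  (self.remove_tags, other.remove_tags, merged.remove_tags)):
            for e in set(mine) | set(theirs):
                out[e] = mine.get(e, set()) | theirs.get(e, set())
        return merged
```

A removed element can be re-added because the new add carries a fresh tag that no tombstone covers.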