To merge two OR-Sets, for each element, let its add-tag list be the union of the two add-tag lists, and likewise for the two remove-tag lists. An element is a member of the set if and only if its add-tag list minus its remove-tag list is nonempty. An optimization that eliminates the need for a tombstone set, and thereby avoids the set's potentially unbounded growth, is possible; it is achieved by maintaining a vector of timestamps for each replica.

### Sequence CRDTs

A sequence, list, or ordered-set CRDT can be used to build a collaborative real-time editor, as an alternative to operational transformation (OT). Some known sequence CRDTs are Treedoc, RGA, Woot, Logoot, and LSEQ. CRATE is a decentralized real-time editor built on top of LSEQSplit (an extension of LSEQ) and runnable on a network of browsers using WebRTC. LogootSplit was proposed as an extension of Logoot in order to reduce the metadata for sequence CRDTs. MUTE is an online web-based peer-to-peer real-time collaborative editor relying on the LogootSplit algorithm.
https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
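The OR-Set merge and membership rules described above can be made concrete with a small sketch. The following C fragment is only an illustration under simplifying assumptions—a single element, integer tags, fixed-size arrays, and hypothetical names such as `orset_merge`—not a production CRDT:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_TAGS 64

/* Tags attached to one element by add() or remove() operations. */
typedef struct {
    int tags[MAX_TAGS];
    size_t count;
} TagSet;

typedef struct {
    TagSet adds;     /* add-tag list */
    TagSet removes;  /* remove-tag list */
} ORSetElement;

static bool tagset_has(const TagSet *s, int tag) {
    for (size_t i = 0; i < s->count; i++)
        if (s->tags[i] == tag) return true;
    return false;
}

static void tagset_union(TagSet *dst, const TagSet *src) {
    for (size_t i = 0; i < src->count; i++)
        if (!tagset_has(dst, src->tags[i]) && dst->count < MAX_TAGS)
            dst->tags[dst->count++] = src->tags[i];
}

/* Merge: union the add-tag lists, and likewise the remove-tag lists. */
void orset_merge(ORSetElement *local, const ORSetElement *remote) {
    tagset_union(&local->adds, &remote->adds);
    tagset_union(&local->removes, &remote->removes);
}

/* Membership: present iff some add tag has no matching remove tag. */
bool orset_contains(const ORSetElement *e) {
    for (size_t i = 0; i < e->adds.count; i++)
        if (!tagset_has(&e->removes, e->adds.tags[i]))
            return true;
    return false;
}
```

Because the merge is a union of grow-only tag lists, applying `orset_merge` in any order and any number of times converges replicas to the same state, which is the defining CRDT property.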
Industrial sequence CRDTs, including open-source ones, are known to outperform academic implementations thanks to optimizations and a more realistic testing methodology. The most popular example is the Yjs CRDT, a pioneer in using a plain list rather than a tree (as in Kleppmann's Automerge).

## Industry use

- Fluid Framework is an open-source collaborative platform built by Microsoft that provides both server reference implementations and client-side SDKs for creating modern real-time web applications using CRDTs.
- Nimbus Note is a collaborative note-taking application that uses the Yjs CRDT for collaborative editing.
- Redis is a distributed, highly available, and scalable in-memory database with a "CRDT-enabled database" feature.
- SoundCloud open-sourced Roshi, a LWW-element-set CRDT for the SoundCloud stream, implemented on top of Redis.
- Riak is a distributed NoSQL key-value data store based on CRDTs.
- League of Legends uses the Riak CRDT implementation for its in-game chat system, which handles 7.5 million concurrent users and 11,000 messages per second.
- Bet365 stores hundreds of megabytes of data in the Riak implementation of OR-Set.
- TomTom employs CRDTs to synchronize navigation data between a user's devices.
- Phoenix, a web framework written in Elixir, uses CRDTs to support real-time multi-node information sharing in version 1.2.
- Facebook implements CRDTs in their Apollo low-latency "consistency at scale" database.
- Facebook uses CRDTs in their FlightTracker system for managing the Facebook graph internally.
- Teletype for Atom employs CRDTs to enable developers to share their workspace with team members and collaborate on code in real time.
- Apple implements CRDTs in the Notes app for syncing offline edits between multiple devices.
- Novell, Inc. introduced a state-based CRDT with "loosely consistent" directory replication (NetWare Directory Services), included in NetWare 4.0 in 1995. The successor product, eDirectory, delivered improvements to the replication process.
A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called hash values, hash codes, (hash/message) digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter-storage addressing. Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally- and storage-space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often-exponential storage requirements of direct access of state spaces of large or variable-length keys. Use of hash functions relies on statistical properties of key and function interaction: worst-case behavior is intolerably bad but rare, and average-case behavior can be nearly optimal (minimal collision).
https://en.wikipedia.org/wiki/Hash_function
Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms of data integrity. Hash tables may use non-cryptographic hash functions, while cryptographic hash functions are used in cybersecurity to secure sensitive data such as passwords.

## Overview

In a hash table, a hash function takes a key as an input, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed-length, like an integer, or variable-length, like a name. In some cases, the key is the datum itself.
The output is a hash code used to index a hash table holding the data or records, or pointers to them. A hash function may be considered to perform three functions:

- Convert variable-length keys into fixed-length (usually machine-word-length or less) values, by folding them by words or other units using a parity-preserving operator like ADD or XOR,
- Scramble the bits of the key so that the resulting values are uniformly distributed over the keyspace, and
- Map the key values into ones less than or equal to the size of the table.

A good hash function satisfies two basic properties: it should be very fast to compute, and it should minimize duplication of output values (collisions). Hash functions rely on generating favorable probability distributions for their effectiveness, reducing access time to nearly constant. High table loading factors, pathological key sets, and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give the best worst-case performance, good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes.
Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure like linked lists, or systematic probing of the table to find an empty slot.

## Hash tables

Hash functions are used in conjunction with hash tables to store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code, which is used to index the hash table. When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, then some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table.
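As a concrete illustration of one collision-resolution strategy—open addressing with linear probing, described in the next paragraph—the sketch below inserts and looks up C strings in a small table. The table size, hash constant, and function names are arbitrary choices for the example, and deletion and resizing are omitted:

```c
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 128                      /* power of two so we can mask */

static const char *table[TABLE_SIZE];       /* NULL means empty slot; stores the caller's pointer */

/* Simple multiply-and-add string hash. */
static unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key)
        h = h * 31u + (unsigned char)*key++;
    return h;
}

/* Linear probing: step through successive slots until an empty one is found. */
int insert(const char *key) {
    unsigned i = hash(key) & (TABLE_SIZE - 1);
    for (int probes = 0; probes < TABLE_SIZE; probes++) {
        if (table[i] == NULL) { table[i] = key; return 1; }
        if (strcmp(table[i], key) == 0) return 1;   /* already present */
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return 0;                                       /* table full (overflow) */
}

/* Search follows the same probe sequence until the key or an empty slot. */
int contains(const char *key) {
    unsigned i = hash(key) & (TABLE_SIZE - 1);
    for (int probes = 0; probes < TABLE_SIZE; probes++) {
        if (table[i] == NULL) return 0;
        if (strcmp(table[i], key) == 0) return 1;
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return 0;
}

int main(void) {
    insert("apple");
    insert("pear");
    printf("%d %d\n", contains("apple"), contains("plum"));   /* prints 1 0 */
    return 0;
}
```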
In chained hashing, each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. In open address hashing, the table is probed starting from the occupied slot in a specified manner, usually by linear probing, quadratic probing, or double hashing, until an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found, or the entire table has been searched (item not in table).

### Specialized uses

Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items.
Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. A special case of hashing is known as geometric hashing or the grid method. In these applications, the set of all inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid of cells. The table is often an array with two or more indices (called a grid file, grid index, bucket grid, and similar names), and the hash function returns an index tuple. This principle is widely used in computer graphics, computational geometry, and many other disciplines, to solve many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points, similar shapes in a list of shapes, similar images in an image database, and so on. Hash tables are also used to implement associative arrays and dynamic sets.

## Properties

### Uniformity

A good hash function should map the expected inputs as evenly as possible over its output range.
That is, every hash value in the output range should be generated with roughly the same probability. The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number of collisions—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, then a larger fraction of the lookup operations will have to search through a larger set of colliding table entries. This criterion only requires the value to be uniformly distributed, not random in any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true. Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries. In other words, if a typical set of $m$ records is hashed to $n$ table slots, then the probability of a bucket receiving many more than $m/n$ records should be vanishingly small.
In particular, if $m$ is less than $n$, then very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even if $n$ is much larger than $m$—see the birthday problem. In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to be perfect. There is no algorithmic way of constructing such a function—searching for one is a factorial function of the number of keys to be mapped versus the number of table slots that they are mapped into. Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. See universal hash function.
### Testing and measurement

When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test. This test is a goodness-of-fit measure: it is the actual distribution of items in buckets versus the expected (or uniform) distribution of items. The formula is
$$
\frac{\sum_{j=0}^{m-1} b_j (b_j+1)/2}{(n/2m)(n+2m-1)},
$$
where $n$ is the number of keys, $m$ is the number of buckets, and $b_j$ is the number of items in bucket $j$. A ratio within one confidence interval (such as 0.95 to 1.05) is indicative that the hash function evaluated has an expected uniform distribution.
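The ratio above is straightforward to compute from bucket counts. A minimal sketch, assuming `buckets[j]` holds $b_j$ for $m$ buckets filled with $n$ keys (the function name is hypothetical):

```c
/* Chi-squared style uniformity ratio: values near 1.0 suggest the hash
 * distributes the n keys uniformly over the m buckets. */
double uniformity_ratio(const unsigned buckets[], unsigned m, unsigned n) {
    double numerator = 0.0;
    for (unsigned j = 0; j < m; j++)
        numerator += (double)buckets[j] * (buckets[j] + 1) / 2.0;
    double denominator = ((double)n / (2.0 * m)) * (n + 2.0 * m - 1.0);
    return numerator / denominator;
}
```

Values close to 1.0 are consistent with a uniform distribution; substantially larger values indicate clustering of keys into a few buckets.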
Hash functions can have some technical properties that make it more likely that they will have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the keyspace may have low variability. For the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the tablespace) in the output. Each bit should change with a probability of 50% because, if some bits are reluctant to change, then the keys become clustered around those values. If the bits want to change too readily, then the mapping is approaching a fixed XOR function of a single bit. Standard tests for this property have been described in the literature, and the relevance of the criterion to multiplicative hash functions has also been assessed there.

### Efficiency

In data storage and retrieval applications, the use of a hash function is a trade-off between search time and data storage space. If search time were unbounded, then a very compact unordered linear list would be the best medium; if storage space were unbounded, then a randomly accessible structure indexable by the key-value would be very large and very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large keyspace to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys.
In most applications, the hash function should be computable with minimum latency and, secondarily, in a minimum number of instructions. Computational complexity varies with the number of instructions required and the latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (and slowest) being the division-based methods. Because collisions should be infrequent and cause only a marginal delay, but are otherwise harmless, it is usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions. Division-based implementations can be of particular concern because a division requires multiple cycles on nearly all processor microarchitectures. Division (modulo) by a constant can be inverted to become a multiplication by the word-size multiplicative inverse of that constant. This can be done by the programmer, or by the compiler.
Division can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of machine-language instructions resulting may be more than a dozen and swamp the pipeline. If the microarchitecture has hardware multiply functional units, then the multiply-by-inverse is likely a better approach. We can allow the table size $n$ to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let $n$ be significantly less than $2^b$. Consider a pseudorandom number generator function $P(\text{key})$ that is uniform on the interval $[0, 2^b - 1]$. A hash function uniform on the interval $[0, n-1]$ is $n P(\text{key}) / 2^b$. We can replace the division by a (possibly faster) right bit shift: $n P(\text{key}) \gg b$.
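In code, the multiply-and-shift replacement for the remainder operation might look like the following sketch, assuming the hash or PRNG output is 32 bits wide (so $b = 32$) and the table size $n$ is much smaller than $2^{32}$:

```c
#include <stdint.h>

/* Map a 32-bit hash value uniformly onto [0, n) without a division:
 * (h * n) / 2^32 becomes a widening multiply followed by a right shift. */
static inline uint32_t reduce_range(uint32_t h, uint32_t n) {
    return (uint32_t)(((uint64_t)h * (uint64_t)n) >> 32);
}
```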
If keys are being hashed repeatedly, and the hash function is costly, then computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly means that the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position.

### Universality

A universal hashing scheme is a randomized algorithm that selects a hash function among a family of such functions, in such a way that the probability of a collision of any two distinct keys is $1/m$, where $m$ is the number of distinct hash values desired—independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function.

### Applicability

A hash function that allows only certain table sizes or strings only up to a certain length, or that cannot accept a seed (i.e. allow double hashing), is less useful than one that does. A hash function is applicable in a variety of situations. Particularly within cryptography, notable applications include:

- Integrity checking: Identical hash values for different files imply equality, providing a reliable means to detect file modifications.
- Key derivation: Minor input changes result in a random-looking output alteration, known as the diffusion property. Thus, hash functions are valuable for key derivation functions.
- Message authentication codes (MACs): Through the integration of a confidential key with the input data, hash functions can generate MACs ensuring the genuineness of the data, such as in HMACs.
- Password storage: The password's hash value does not expose any password details, emphasizing the importance of securely storing hashed passwords on the server.
- Signatures: Message hashes are signed rather than the whole message.

### Deterministic

A hash procedure must be deterministic—for a given input value, it must always generate the same hash value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term. This requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number generators or the time of day. It also excludes functions that depend on the memory address of the object being hashed, because the address may change during execution (as may happen on systems that use certain methods of garbage collection), although sometimes rehashing of the item is possible.
The determinism is in the context of the reuse of the function. For example, Python adds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts, in addition to the input to be hashed. The Python hash (SipHash) is still a valid hash function when used within a single run, but if the values are persisted (for example, written to disk), they can no longer be treated as valid hash values, since in the next run the random value might differ.

### Defined range

It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, then the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches. Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size.
Hash functions used for data searches use some arithmetic expression that iteratively processes chunks of the input (such as the characters in a string) to produce the hash value.

### Variable range

In many applications, the range of hash values may be different for each run of the program or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters—the input data $z$ and the number $n$ of allowed hash values. A common solution is to compute a fixed hash function with a very large range (say, $0$ to $2^{32}-1$), divide the result by $n$, and use the division's remainder. If $n$ is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between $0$ and $n-1$, for any value of $n$ that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of $n$, e.g. odd or prime numbers.
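When $n$ happens to be a power of 2, the remainder reduces to a bit mask, as in this small sketch (a complement to the multiply-and-shift reduction shown earlier; the helper name is hypothetical and $n$ is assumed to be at least 1):

```c
#include <stdint.h>

/* Reduce a large hash value to [0, n).  If n is a power of two, the
 * remainder is just the low bits, which a mask extracts cheaply. */
static inline uint32_t reduce_mod(uint32_t h, uint32_t n) {
    if ((n & (n - 1)) == 0)        /* n is a power of two */
        return h & (n - 1);
    return h % n;                  /* general case: real division */
}
```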
### Variable range with minimal movement (dynamic hash function)

When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table. A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash function $H(z, n)$—where $z$ is the key being hashed and $n$ is the number of allowed hash values—such that $H(z, n+1) = H(z, n)$ with probability close to $n/(n+1)$. Linear hashing and spiral hashing are examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property. Extendible hashing uses a dynamic hash function that requires space proportional to $n$ to compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional to $n$ to compute the value of $H(z, n)$ have been invented. A hash function with minimal movement is especially useful in distributed hash tables.

### Data normalization

In some applications, the input data may contain features that are irrelevant for comparison purposes.
For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters.

## Hashing integer data types

There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method.

### Identity hash function

If the data to be hashed is small enough, then one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing this identity hash function is effectively zero. This hash function is perfect, as it maps each input to a distinct hash value. The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, in Java, the hash code is a 32-bit integer.
Thus the 32-bit integer `Integer` and 32-bit floating-point `Float` objects can simply use the value directly, whereas the 64-bit integer `Long` and 64-bit floating-point `Double` cannot. Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII or ISO Latin 1), the table has only $2^8 = 256$ entries; in the case of Unicode characters, the table would have $17 \times 2^{16} = 1{,}114{,}112$ entries. The same technique can be used to map two-letter country codes like "us" or "za" to country names ($26^2 = 676$ table entries), 5-digit ZIP codes like 13083 to city names (100,000 entries), etc. Invalid data values (such as the country code "xx" or the ZIP code 00000) may be left undefined in the table or mapped to some appropriate "null" value.
### Trivial hash function

If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, then they may be considered to be already "hashed". In this case, any number of any bits in the key may be extracted and collated as an index into the hash table. For example, a simple hash function might mask off the $m$ least significant bits and use the result as an index into a hash table of size $2^m$.

### Mid-squares

A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is 123,456,789 and the hash table size is 10,000, then squaring the key produces 15,241,578,750,190,521, so the hash code is taken as the middle 4 digits of the 17-digit number (ignoring the high digit), namely 8750. The mid-squares method produces a reasonable hash code if there are not a lot of leading or trailing zeros in the key.
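A mid-squares sketch matching the worked example above could look like the following; the helper name is hypothetical and the decimal constants assume a table of size 10,000:

```c
#include <stdint.h>

/* Mid-squares: square the key and extract middle decimal digits.
 * For a table of size 10,000 we keep 4 middle digits of the square. */
unsigned midsquare_hash(uint64_t key) {
    uint64_t square = key * key;            /* note: wraps for very large keys */
    /* Drop the 6 low digits, then keep the next 4 digits.
     * For key = 123456789 this yields 8750. */
    return (unsigned)((square / 1000000u) % 10000u);
}
```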
This is a variant of multiplicative hashing, but not as good, because an arbitrary key is not a good multiplier.

### Division hashing

A standard technique is to use a modulo function on the key, by selecting a divisor $M$ which is a prime number close to the table size, so $h(K) = K \bmod M$. The table size is usually a power of 2. This gives a distribution from $\{0, \dots, M-1\}$. This gives good results over a large number of key sets. A significant drawback of division hashing is that division requires multiple cycles on most modern architectures (including x86) and can be 10 times slower than multiplication. A second drawback is that it will not break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small.
### Algebraic coding

Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map $n$ bits to $m$ bits. In this approach, $M = 2^m$, and we postulate an $m$th-degree polynomial $Z(x) = x^m + \zeta_{m-1}x^{m-1} + \cdots + \zeta_0$. A key $K = (k_{n-1}\dots k_1 k_0)_2$ can be regarded as the polynomial $K(x) = k_{n-1}x^{n-1} + \cdots + k_1 x + k_0$. The remainder using polynomial arithmetic modulo 2 is $K(x) \bmod Z(x) = h_{m-1}x^{m-1} + \cdots + h_1 x + h_0$, and then $h(K) = (h_{m-1}\dots h_1 h_0)_2$. If $Z(x)$ is constructed to have $t$ or fewer non-zero coefficients, then keys which share fewer than $t$ bits are guaranteed to not collide.
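Computing the remainder $K(x) \bmod Z(x)$ is the same bit-serial operation used by CRCs. The sketch below is a generic illustration—its parameters are placeholders, not a particular generator polynomial from the literature:

```c
#include <stdint.h>

/* Reduce an n-bit key modulo an m-th degree polynomial over GF(2).
 * 'poly' holds the coefficients below x^m of Z(x); the leading x^m term
 * is implicit.  Assumes m < 64.  The m-bit remainder is the hash code. */
uint64_t poly_mod2_hash(uint64_t key, int n, uint64_t poly, int m) {
    uint64_t rem = 0;
    for (int i = n - 1; i >= 0; i--) {
        rem = (rem << 1) | ((key >> i) & 1);   /* bring down the next key bit */
        if (rem & (1ULL << m))                 /* degree reached m: subtract Z(x) */
            rem ^= (1ULL << m) | poly;
    }
    return rem;                                /* degree < m by construction */
}
```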
$Z(x)$ is a function of $k$, $t$, and $n$ (the last of which is a divisor of $2^k - 1$) and is constructed from the finite field $GF(2^k)$. Knuth gives an example of this construction. The derivation is as follows: Let $S$ be the smallest set of integers such that $\{1, 2, \dots, t\} \subseteq S$ and $(2j \bmod n) \in S$ for all $j \in S$. Define
$$
P(x)=\prod_{j\in S}(x-\alpha^j)
$$
where $\alpha$ is a primitive $n$th root of unity in $GF(2^k)$ and where the coefficients of $P(x)$ are computed in this field. Then $\deg P(x) = |S|$. Since $\alpha^{2j}$ is a root of $P(x)$ whenever $\alpha^{j}$ is a root, it follows that the coefficients $p_i$ of $P(x)$ satisfy $p_i^2 = p_i$, so they are all 0 or 1. If $R(x)$ is any nonzero polynomial modulo 2 with at most $t$ nonzero coefficients, then $R(x)$ is not a multiple of $P(x)$ modulo 2. It follows that the corresponding hash function will map keys with fewer than $t$ bits in common to unique indices. The usual outcome is that either $n$ will get large, or $t$ will get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation.

### Unique permutation hashing

Unique permutation hashing has a guaranteed best worst-case insertion time.

### Multiplicative hashing

Standard multiplicative hashing uses the formula $h_a(K) = \lfloor (aK \bmod W) / (W/M) \rfloor$, which produces a hash value in $\{0, \dots, M-1\}$. The value $a$ is an appropriately chosen value that should be relatively prime to $W$; it should be large, and its binary representation a random mix of 1s and 0s. An important practical special case occurs when $W = 2^w$ and $M = 2^m$ are powers of 2 and $w$ is the machine word size. In this case, the formula becomes $h_a(K) = (aK \bmod 2^w) \div 2^{w-m}$.
This is special because arithmetic modulo $2^w$ is done by default in low-level programming languages and integer division by a power of 2 is simply a right-shift, so, in C, for example, this function becomes

```c
unsigned hash(unsigned K) { return (a * K) >> (w - m); }
```

and for fixed $m$ and $w$ this translates into a single integer multiplication and right-shift, making it one of the fastest hash functions to compute. Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion—higher-value input bits do not affect lower-value output bits. A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. The resulting function looks like:

```c
unsigned hash(unsigned K) {
    K ^= K >> (w - m);
    return (a * K) >> (w - m);
}
```

### Fibonacci hashing

Fibonacci hashing is a form of multiplicative hashing in which the multiplier is $2^w / \varphi$, where $w$ is the machine word length and $\varphi$ (phi) is the golden ratio (approximately 1.618).
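Filling in concrete values for a 32-bit word and a table of $2^m$ slots gives a sketch like the one below; the pre-XOR line is the correction for the "common mistake" noted above, and the multiplier is the 32-bit Fibonacci constant:

```c
#include <stdint.h>

/* Multiplicative (Fibonacci) hashing: w = 32, a = 2^32 / phi, table size 2^m.
 * The pre-XOR folds high key bits down so they influence the retained top
 * bits of the product.  Assumes 1 <= m <= 31. */
static inline uint32_t fib_hash(uint32_t key, unsigned m) {
    const uint32_t a = 2654435769u;      /* 0x9E3779B9, about 2^32 / 1.618... */
    key ^= key >> (32 - m);              /* diffusion correction */
    return (a * key) >> (32 - m);        /* keep the top m bits of the product */
}
```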
A property of this multiplier is that it uniformly distributes over the table space blocks of consecutive keys, with respect to any block of bits in the key. Consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths are:

- 16: a = 9E37₁₆ = 40503₁₀
- 32: a = 9E3779B9₁₆ = 2654435769₁₀
- 48: a = 9E3779B97F4B₁₆ = 173961102589771₁₀
- 64: a = 9E3779B97F4A7C15₁₆ = 11400714819323198485₁₀

The multiplier should be odd, so the least significant bit of the output is invertible modulo $2^w$. The last two values given above are rounded (up and down, respectively) by more than 1/2 of a least-significant bit to achieve this.

### Zobrist hashing

Tabulation hashing, more generally known as Zobrist hashing after Albert Zobrist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations.
This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys). Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game-playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation) and XORing them together (the starting value could be 0 (the identity value for XOR) or a random seed). The resulting value was reduced by modulo, folding, or some other operation to produce a hash table index.
The original Zobrist hash was stored in the table as the representation of the position. Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of $2^8 \times 4$ random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). The natural extension to 64-bit integers is by use of a table of $2^8 \times 8$ 64-bit random numbers.
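A sketch of the 32-bit integer variant just described: a $4 \times 2^8$ table of random words is filled once, and a key is hashed by XORing together one table entry per byte. The seeding generator below is an arbitrary stand-in, not part of the method:

```c
#include <stdint.h>

static uint32_t zobrist_table[4][256];   /* one row of random words per byte position */

/* Fill the table once with pseudo-random 32-bit values (any decent PRNG will
 * do; this xorshift generator is just a stand-in). */
void zobrist_init(uint32_t seed) {
    uint32_t s = seed ? seed : 1u;
    for (int pos = 0; pos < 4; pos++)
        for (int byte = 0; byte < 256; byte++) {
            s ^= s << 13; s ^= s >> 17; s ^= s << 5;
            zobrist_table[pos][byte] = s;
        }
}

/* Tabulation (Zobrist) hash of a 32-bit key: XOR of one table entry per byte. */
uint32_t zobrist_hash(uint32_t key) {
    uint32_t h = 0;                       /* 0 is the identity value for XOR */
    for (int pos = 0; pos < 4; pos++)
        h ^= zobrist_table[pos][(key >> (8 * pos)) & 0xFF];
    return h;
}
```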
This kind of function has some nice theoretical properties, one of which is called 3-tuple independence, meaning that every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values.

### Customized hash function

A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies.

## Hashing variable-length data

When the data values are long (or variable-length) character strings—such as personal names, web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in any natural language has highly non-uniform distributions of characters, and character pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way.
### Middle and ends

Simplistic hash functions may add the first and last characters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions that do not hash on all characters of a string can readily become linear due to redundancies, clustering, or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends, or other fields are zero or some other invariant constant that does not differentiate the keys; then the invariant parts of the keys can be ignored.

### Character folding

The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizable prime number, before adding in the next character, ignoring overflow. Using exclusive-or instead of addition is also a plausible alternative.
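The two folding variants described above can be sketched as follows; the multiplier 31 is a commonly used but arbitrary choice, not one mandated by the text:

```c
/* Naive character folding: just sum the character codes. */
unsigned fold_add(const char *s, unsigned table_size) {
    unsigned h = 0;
    while (*s)
        h += (unsigned char)*s++;
    return h % table_size;
}

/* Better: multiply by a sizable constant before folding in each character,
 * letting unsigned arithmetic wrap; exclusive-or can replace the addition. */
unsigned fold_mul(const char *s, unsigned table_size) {
    unsigned h = 0;
    while (*s)
        h = h * 31u + (unsigned char)*s++;
    return h % table_size;
}
```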
The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes; this clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0, and printable strings do not use the last byte code or most of the first 32 byte codes, so the information, which uses the remaining byte codes, is clustered in the remaining bits in an unobvious manner. The classic approach, dubbed the PJW hash based on the work of Peter J. Weinberger at Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler symbol tables, as given in the "Dragon Book". This hash function offsets the bytes 4 bits before adding them together. When the quantity wraps, the high 4 bits are shifted out and, if non-zero, XORed back into the low byte of the cumulative quantity.
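The PJW scheme is usually presented in a form similar to the sketch below; this is the familiar textbook formulation for 32-bit words rather than a quotation from the Dragon Book itself:

```c
/* PJW-style hash: shift in each byte 4 bits at a time; when the top nibble
 * of the 32-bit accumulator becomes non-zero, fold it back into the low
 * bits and clear it. */
unsigned pjw_hash(const char *s) {
    unsigned h = 0, high;
    while (*s) {
        h = (h << 4) + (unsigned char)*s++;
        high = h & 0xF0000000u;
        if (high != 0)
            h ^= high >> 24;
        h &= ~high;
    }
    return h;
}
```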
The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index. Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word chunks is available.

### Word length folding

Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified randomizing value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table.

### Radix conversion hashing

Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as $x_0 a^{k-1} + x_1 a^{k-2} + \cdots + x_{k-2} a + x_{k-1}$.
This is simply a polynomial in a radix $a > 1$ that takes the components $(x_0, x_1, \dots, x_{k-1})$ as the characters of the input string of length $k$. It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of $a$ is usually a prime number large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions. Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to about $10^{19}$, or 19 decimal digits with radix 10.
### Rolling hash

In some applications, such as substring search, one can compute a hash function $h$ for every $k$-character substring of a given $n$-character string by advancing a window of width $k$ characters along the string, where $k$ is a fixed integer and $n > k$. The straightforward solution, which is to extract such a substring at every character position in the text and compute $h$ separately, requires a number of operations proportional to $k \cdot n$. However, with the proper choice of $h$, one can use the technique of rolling hash to compute all those hashes with an effort proportional to $mk + n$, where $m$ is the number of occurrences of the substring. The most familiar algorithm of this type is Rabin–Karp, with best and average case performance $O(n + mk)$ and worst case $O(n \cdot k)$ (in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such as "AAAAAAAAAAA" and "AAA"). The hash function used for the algorithm is usually the Rabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used.
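A rolling hash in the Rabin–Karp style updates the window hash in constant time as the window slides by one character. The sketch below uses an arbitrary base and lets 32-bit arithmetic wrap instead of taking an explicit modulus, so it illustrates the sliding update rather than the Rabin fingerprint itself:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BASE 257u                     /* arbitrary base for the polynomial hash */

/* Hash of a k-character window: s[0]*B^(k-1) + ... + s[k-1], wrapping mod 2^32. */
static uint32_t window_hash(const char *s, size_t k) {
    uint32_t h = 0;
    for (size_t i = 0; i < k; i++)
        h = h * BASE + (unsigned char)s[i];
    return h;
}

/* Slide the window one position: drop 'out', bring in 'in', in O(1). */
static uint32_t roll(uint32_t h, uint32_t b_pow, char out, char in) {
    h -= b_pow * (unsigned char)out;  /* b_pow = BASE^(k-1) */
    return h * BASE + (unsigned char)in;
}

/* Print the hash of every k-character substring of text. */
void all_window_hashes(const char *text, size_t k) {
    size_t n = strlen(text);
    if (k == 0 || n < k) return;
    uint32_t b_pow = 1;
    for (size_t i = 1; i < k; i++) b_pow *= BASE;   /* BASE^(k-1), wrapping */
    uint32_t h = window_hash(text, k);
    for (size_t i = 0; ; i++) {
        printf("%zu %u\n", i, h);     /* h hashes text[i .. i+k-1] */
        if (i + k >= n) break;
        h = roll(h, b_pow, text[i], text[i + k]);
    }
}
```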
### Fuzzy hash

### Perceptual hash

## Analysis

Worst case results for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). This analysis considers uniform hashing, that is, any key will map to any particular slot with probability $1/m$, a characteristic of universal hash functions. While Knuth worries about adversarial attack on real-time systems, Gonnet has shown that the probability of such a case is "ridiculously small". His representation was that the probability of $k$ of $n$ keys mapping to a single slot is $e^{-\alpha}\alpha^k / k!$, where $\alpha$ is the load factor, $n/m$.

## History

The term hash offers a natural analogy with its non-technical meaning (to chop up or make a mess out of something), given how hash functions scramble their input data to derive their output. In his research for the precise origin of the term, Donald Knuth notes that, while Hans Peter Luhn of IBM appears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself did not appear in published literature until the late 1960s, in Herbert Hellerman's Digital Computer System Principles, even though it was already widespread jargon by then.
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference. ## Introduction Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model. Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling".
https://en.wikipedia.org/wiki/Statistical_inference
Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following:

- a point estimate, i.e. a particular value that best approximates some parameter of interest;
- an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with the probability at the stated confidence level;
- a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
- rejection of a hypothesis;
- clustering or classification of data points into groups.

## Models and assumptions

Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data.
Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.

### Degree of models/assumptions

Statisticians distinguish between three levels of modeling assumptions:

- Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models.
- Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal. For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling.
- Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions.
### Importance of valid models/assumptions

Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. the data-generating mechanisms really must have been correctly specified. Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. Incorrect assumptions of Normality in the population also invalidate some forms of regression-based inference. The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population."
https://en.wikipedia.org/wiki/Statistical_inference
The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed. #### Approximate distributions Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these. With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience.
https://en.wikipedia.org/wiki/Statistical_inference
With finite samples, approximation results measure how closely a limiting distribution approximates the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance. Erik Torgersen (1991) Comparison of Statistical Experiments, volume 36 of Encyclopedia of Mathematics. Cambridge University Press. With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. "Indeed, limit theorems 'as $$ n $$ tends to infinity' are logically devoid of content about what happens at any particular $$ n $$ .
https://en.wikipedia.org/wiki/Statistical_inference
Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. "Indeed, limit theorems 'as $$ n $$ tends to infinity' are logically devoid of content about what happens at any particular $$ n $$ . All they can do is suggest certain approaches whose performance must then be checked on the case at hand." — Le Cam (1986) (page xiv) However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families). ### Randomization-based models For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design.
https://en.wikipedia.org/wiki/Statistical_inference
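The error of such a normal approximation can be checked by simulation, as the passage above suggests. The sketch below is an illustrative example of mine (assuming NumPy and SciPy are available), not taken from the source: it estimates how often the textbook normal-approximation confidence interval for the mean actually covers the true mean when sampling from a skewed population.

```python
import numpy as np
from scipy.stats import norm

def normal_ci_coverage(pop_sampler, true_mean, n, n_sim=10_000, level=0.95, seed=0):
    """Estimate how often the normal-approximation confidence interval for the mean
    covers the true mean, when drawing samples of size n from the given population."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(0.5 + level / 2)
    samples = pop_sampler(rng, (n_sim, n))
    xbar = samples.mean(axis=1)
    se = samples.std(axis=1, ddof=1) / np.sqrt(n)
    covered = (xbar - z * se <= true_mean) & (true_mean <= xbar + z * se)
    return covered.mean()

# A skewed (exponential) population with true mean 1.0: the nominal 95% coverage is
# rough for n = 10 and improves as n grows, in line with the discussion above.
for n in (10, 100, 1000):
    print(n, normal_ci_coverage(lambda rng, size: rng.exponential(1.0, size), 1.0, n))
```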
The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families). ### Randomization-based models For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. Hinkelmann and Kempthorne (2008) Statistical inference from randomized studies is also more straightforward than many other situations. David A. Freedman et alia's Statistics. In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information. Objective randomization allows properly inductive procedures. Peirce (1883); Rao, C.R. (1997) Statistics and Truth: Putting Chance to Work, World Scientific.
https://en.wikipedia.org/wiki/Statistical_inference
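For a completely randomized two-group design, the randomization distribution described above can be approximated by repeating the random assignment many times and recomputing the test statistic. The following sketch is my own illustration (assuming NumPy), not part of the source, and uses a difference in means as the statistic.

```python
import numpy as np

def randomization_p_value(outcomes, treated, n_perm=10_000, seed=0):
    """Approximate the randomization distribution of the difference in means under the
    null hypothesis of no treatment effect, by re-running the random assignment."""
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    observed = outcomes[treated].mean() - outcomes[~treated].mean()
    n_treated = treated.sum()
    stats = np.empty(n_perm)
    for i in range(n_perm):
        relabel = np.zeros(len(outcomes), dtype=bool)
        relabel[rng.choice(len(outcomes), n_treated, replace=False)] = True
        stats[i] = outcomes[relabel].mean() - outcomes[~relabel].mean()
    # Two-sided p-value: how extreme is the observed statistic under re-randomization?
    return (np.abs(stats) >= abs(observed)).mean()

y = [5.1, 4.9, 6.2, 5.8, 7.0, 6.5, 6.8, 7.4]   # outcomes
t = [0, 0, 0, 0, 1, 1, 1, 1]                    # treatment assignment from the design
print(randomization_p_value(y, t))
```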
Objective randomization allows properly inductive procedures. Peirce (1883); Rao, C.R. (1997) Statistics and Truth: Putting Chance to Work, World Scientific. Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences. Cox (2006), p. 196.) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. However, a good observational study may be better than a bad randomized experiment. The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model. Hinkelmann & Kempthorne (2008) However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical. #### Model-based analysis of randomized experiments It is standard practice to refer to a statistical model, e.g., linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model.
https://en.wikipedia.org/wiki/Statistical_inference
#### Model-based analysis of randomized experiments It is standard practice to refer to a statistical model, e.g., linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units. #### Model-free randomization inference Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations.
https://en.wikipedia.org/wiki/Statistical_inference
#### Model-free randomization inference Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations. For example, model-free simple linear regression is based either on: - a random design, where the pairs of observations $$ (X_1,Y_1), (X_2,Y_2), \cdots , (X_n,Y_n) $$ are independent and identically distributed (iid), - or a deterministic design, where the variables $$ X_1, X_2, \cdots, X_n $$ are deterministic, but the corresponding response variables $$ Y_1,Y_2, \cdots, Y_n $$ are random and independent with a common conditional distribution, i.e., $$ P\left (Y_j \leq y | X_j =x\right ) = D_x(y) $$ , which is independent of the index $$ j $$ . In either case, the model-free randomization inference for features of the common conditional distribution $$ D_x(.) $$ relies on some regularity conditions, e.g. functional smoothness.
https://en.wikipedia.org/wiki/Statistical_inference
For example, model-free simple linear regression is based either on: - a random design, where the pairs of observations $$ (X_1,Y_1), (X_2,Y_2), \cdots , (X_n,Y_n) $$ are independent and identically distributed (iid), - or a deterministic design, where the variables $$ X_1, X_2, \cdots, X_n $$ are deterministic, but the corresponding response variables $$ Y_1,Y_2, \cdots, Y_n $$ are random and independent with a common conditional distribution, i.e., $$ P\left (Y_j \leq y | X_j =x\right ) = D_x(y) $$ , which is independent of the index $$ j $$ . In either case, the model-free randomization inference for features of the common conditional distribution $$ D_x(.) $$ relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, $$ \mu(x)=E(Y | X = x) $$ , can be consistently estimated via local averaging or local polynomial fitting, under the assumption that $$ \mu(x) $$ is smooth.
https://en.wikipedia.org/wiki/Statistical_inference
In either case, the model-free randomization inference for features of the common conditional distribution $$ D_x(.) $$ relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, $$ \mu(x)=E(Y | X = x) $$ , can be consistently estimated via local averaging or local polynomial fitting, under the assumption that $$ \mu(x) $$ is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, $$ \mu(x) $$ . ## Paradigms for inference Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms. Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm. ### Frequentist inference This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand.
https://en.wikipedia.org/wiki/Statistical_inference
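The local averaging mentioned above can be illustrated with a simple kernel-weighted (Nadaraya–Watson-style) estimator of the conditional mean. This is a generic sketch of mine, assuming NumPy; the bandwidth value and the Gaussian kernel are arbitrary illustrative choices.

```python
import numpy as np

def local_average(x_train, y_train, x_query, bandwidth=0.3):
    """Estimate mu(x) = E[Y | X = x] by a kernel-weighted local average:
    observations with X_i near x_query receive more weight."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    d = (x_query - x_train) / bandwidth
    w = np.exp(-0.5 * d ** 2)          # Gaussian kernel weights
    return np.sum(w * y_train) / np.sum(w)

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, 500)
y = np.sin(x) + rng.normal(0, 0.3, 500)    # smooth mu(x) = sin(x), plus noise
print(local_average(x, y, np.pi / 2))       # close to sin(pi/2) = 1 when mu is smooth
```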
Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm. ### Frequentist inference This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging. #### Examples of frequentist inference - p-value - Confidence interval - Null hypothesis significance testing #### Frequentist inference, objectivity, and decision theory One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation.
https://en.wikipedia.org/wiki/Statistical_inference
However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach. The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property.
https://en.wikipedia.org/wiki/Statistical_inference
In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss. While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'. ### Bayesian inference The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach. #### Examples of Bayesian inference - Credible interval for interval estimation - Bayes factors for model comparison
https://en.wikipedia.org/wiki/Statistical_inference
There are several different justifications for using the Bayesian approach. #### Examples of Bayesian inference - Credible interval for interval estimation - Bayes factors for model comparison #### Bayesian inference, subjectivity and decision theory Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation.
https://en.wikipedia.org/wiki/Statistical_inference
Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs. ### Likelihood-based inference Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as $$ L(x | \theta) $$ , which quantifies the probability of observing the given data $$ x $$ , assuming a specific set of parameter values $$ \theta $$ . In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.
https://en.wikipedia.org/wiki/Statistical_inference
Likelihoodism approaches statistics by using the likelihood function, denoted as $$ L(x | \theta) $$ , which quantifies the probability of observing the given data $$ x $$ , assuming a specific set of parameter values $$ \theta $$ . In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data. The process of likelihood-based inference usually involves the following steps: 1. Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects. 1. Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters. 1. Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function.
https://en.wikipedia.org/wiki/Statistical_inference
1. Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as $$ \hat{\theta} $$ , are the maximum likelihood estimates (MLEs). 1. Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping. 1. Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics. 1. Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.
https://en.wikipedia.org/wiki/Statistical_inference
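A minimal end-to-end illustration of steps 2–4 for a simple normal model, using numerical optimization, is sketched below. It is my own example (assuming NumPy and SciPy); the log-sigma parameterization and the Hessian-based standard errors are illustrative choices, not the only way to proceed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.5, size=200)   # pretend this is the observed sample

def negative_log_likelihood(params, x):
    """Negative log-likelihood of a Normal(mu, sigma) model, dropping constants."""
    mu, log_sigma = params                 # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + len(x) * np.log(sigma)

result = minimize(negative_log_likelihood, x0=[0.0, 0.0], args=(data,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print("MLEs:", mu_hat, sigma_hat)

# Rough uncertainty (step 4): standard errors from the approximate inverse Hessian
# that the default BFGS method returns, in the (mu, log sigma) parameterization.
print("approx. standard errors:", np.sqrt(np.diag(result.hess_inv)))
```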
1. Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model. ### AIC-based inference The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection. AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.) ### Other paradigms for inference #### Minimum description length The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.
https://en.wikipedia.org/wiki/Statistical_inference
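Concretely, AIC is computed as twice the number of fitted parameters minus twice the maximized log-likelihood, and the candidate with the lower AIC is estimated to lose less information. The toy comparison below is my own example, assuming NumPy and SciPy.

```python
import numpy as np
from scipy.stats import norm, cauchy

rng = np.random.default_rng(3)
x = rng.normal(0, 1, size=300)   # data actually generated from a normal distribution

def aic(log_likelihood, k):
    # AIC = 2 * (number of fitted parameters) - 2 * (maximized log-likelihood)
    return 2 * k - 2 * log_likelihood

# Compare a Normal fit and a Cauchy fit to the same data (both have 2 parameters).
for name, dist in [("normal", norm), ("cauchy", cauchy)]:
    params = dist.fit(x)
    ll = dist.logpdf(x, *params).sum()
    print(name, aic(ll, k=len(params)))   # the normal fit should show the lower AIC
```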
#### Minimum description length The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. Rissanen (1989), page 84 The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining. The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.
https://en.wikipedia.org/wiki/Statistical_inference
Rissanen (1989), page 84 The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining. The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory. #### Fiducial inference Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. Zabell (1992) However, this argument is the same as that which shows that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities. #### Structural inference Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful.
https://en.wikipedia.org/wiki/Statistical_inference
#### Structural inference Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference based on group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist. ## Inference topics The topics below are usually included in the area of statistical inference. 1. Statistical assumptions 1. Statistical decision theory 1. Estimation theory 1. Statistical hypothesis testing 1. Revising opinions in statistics 1. Design of experiments, the analysis of variance, and regression 1. Survey sampling 1. Summarizing statistical data ## Predictive inference Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations. Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti.
https://en.wikipedia.org/wiki/Statistical_inference
## Predictive inference Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations. Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser.
https://en.wikipedia.org/wiki/Statistical_inference
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. ## History Before 2017, there were a few language models that were large compared to the capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001 trained on 0.3 billion words achieved state-of-the-art perplexity at the time. In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"), upon which they trained statistical language models. In 2009, in most language processing tasks, statistical language models dominated over symbolic language models because they can usefully ingest large datasets. After neural networks became dominant in image processing around 2012, they were applied to language modelling as well.
https://en.wikipedia.org/wiki/Large_language_model
In 2009, in most language processing tasks, statistical language models dominated over symbolic language models because they can usefully ingest large datasets. After neural networks became dominant in image processing around 2012, they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. Because it preceded the existence of transformers, it was done by seq2seq deep LSTM networks. At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology, and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting. Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use.
https://en.wikipedia.org/wiki/Large_language_model
Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting. Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use. GPT-3 in 2020 went a step further and as of 2024 is available only via API with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work. In 2024 OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer. Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters.
https://en.wikipedia.org/wiki/Large_language_model
In 2024 OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer. Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters. Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower cost. Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs). As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model). ## Dataset preprocessing ### Tokenization As machine learning algorithms process numbers rather than text, the text must be converted to numbers.
https://en.wikipedia.org/wiki/Large_language_model
## Dataset preprocessing ### Tokenization As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated with the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as `[MASK]` for a masked-out token (as used in BERT), and `[UNK]` ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT. For example, the BPE tokenizer used by GPT-3 (Legacy) would split `tokenizer: texts -> series of numerical "tokens"` as token izer : texts ->series  of numerical  " tokens" Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one.
https://en.wikipedia.org/wiki/Large_language_model
For example, the BPE tokenizer used by GPT-3 (Legacy) would split `tokenizer: texts -> series of numerical "tokens"` as token izer : texts ->series  of numerical  " tokens" Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset. #### BPE As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257). After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.
https://en.wikipedia.org/wiki/Large_language_model
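The merge loop just described can be sketched in a few lines of Python. This is a deliberately simplified, illustrative trainer of mine that operates on whitespace-split words; it is not the actual GPT or WordPiece implementation.

```python
from collections import Counter

def train_bpe(text, vocab_size):
    """Toy byte-pair-encoding trainer: start from single characters and repeatedly
    merge the most frequent adjacent pair until the vocabulary reaches the target size."""
    words = [list(w) for w in text.split()]
    vocab = set(ch for w in words for ch in w)      # initial set of uni-grams
    merges = []
    while len(vocab) < vocab_size:
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]         # most frequent adjacent pair
        merges.append((a, b))
        vocab.add(a + b)
        # Replace every occurrence of the winning pair with the merged symbol.
        new_words = []
        for w in words:
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            new_words.append(out)
        words = new_words
    return merges, vocab

merges, vocab = train_bpe("low lower lowest low low newer newest", vocab_size=20)
print(merges[:5])
```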
All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257). After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams. #### Problems A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English. Greedy tokenization also causes subtle problems with text completion. ### Dataset cleaning In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM.
https://en.wikipedia.org/wiki/Large_language_model
Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM. With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it). ### Synthetic data Training of the largest language models may require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM. ## Training and architecture ### Reinforcement learning from human feedback Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences. ### Instruction tuning Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases.
https://en.wikipedia.org/wiki/Large_language_model
### Reinforcement learning from human feedback Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences. ### Instruction tuning Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in Hamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus. ### Mixture of experts The largest LLM may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters. ### Prompt engineering, attention mechanism, and context window Most results previously achievable only by (costly) fine-tuning, can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window).
https://en.wikipedia.org/wiki/Large_language_model
### Prompt engineering, attention mechanism, and context window Most results previously achievable only by (costly) fine-tuning, can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window). In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model has twelve attention heads and a context window of only 1k tokens. In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. A batch size of 512 was used for training with gradient descent. The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window of up to 1 million tokens (a context window of 10 million tokens was also "successfully tested"). Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller.
https://en.wikipedia.org/wiki/Large_language_model
Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens. The length of a conversation that the model can take into account when generating its next answer is also limited by the size of the context window. If the length of a conversation, for example with ChatGPT, is longer than its context window, only the parts inside the context window are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the parts of the conversation that are too distant. The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset.
https://en.wikipedia.org/wiki/Large_language_model
Balancing them is a matter of experimentation and domain-specific considerations. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either - autoregressive (i.e. predicting how the segment continues, as GPTs do): for example given a segment "I like to eat", the model predicts "ice cream", or "sushi". - "masked" (i.e. filling in the parts missing from the segment, the way "BERT" does it): for example, given a segment "I like to `[__] [__]` cream", the model predicts that "eat" and "ice" are missing. Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation. ### Infrastructure Substantial infrastructure is necessary for training the largest models. ## Training cost The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve.
https://en.wikipedia.org/wiki/Large_language_model
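A small data-preparation sketch contrasting the two pretraining objectives listed above; the word-level "tokenization" and the masking rate are illustrative choices of mine, not a specification from the source.

```python
import random

tokens = "I like to eat ice cream".split()

# Autoregressive (GPT-style): at each position, the target is simply the next token.
autoregressive_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
print(autoregressive_pairs[-1])    # (['I', 'like', 'to', 'eat', 'ice'], 'cream')

# Masked (BERT-style): hide some tokens and ask the model to recover them.
# A ~15% rate is conventional; a higher rate is used here so this tiny example masks something.
random.seed(0)
masked, targets = tokens.copy(), {}
for i in range(len(masked)):
    if random.random() < 0.3:
        targets[i] = masked[i]
        masked[i] = "[MASK]"
print(masked, targets)
```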
## Training cost The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in the list of large language models. As technology advanced, large sums have been invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million. For Transformer-based LLM, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token. ## Tool use There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software.
https://en.wikipedia.org/wiki/Large_language_model
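Applying the rule of thumb quoted above gives quick back-of-the-envelope estimates of training and inference compute. The model and dataset sizes in this sketch are hypothetical.

```python
def training_flops(n_params, n_tokens):
    # Rule of thumb from the text: ~6 FLOPs per parameter per training token.
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params):
    # Rule of thumb from the text: ~1-2 FLOPs per parameter per generated token.
    return 2 * n_params

# e.g. a hypothetical 1-billion-parameter model trained on 20 billion tokens:
print(f"training: {training_flops(1e9, 20e9):.2e} FLOPs total")
print(f"inference: {inference_flops_per_token(1e9):.2e} FLOPs per token")
```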
It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token. ## Tool use There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is "What is the time now? It is ", where a separate program interpreter would need to execute code to get the system time on the computer, so that the LLM can include it in its reply. This basic strategy can be made more sophisticated with multiple attempts at generated programs, and other sampling strategies. Generally, in order to get an LLM to use tools, one must fine-tune it for tool use. If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call API correctly.
https://en.wikipedia.org/wiki/Large_language_model
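A minimal sketch of the tool-use loop described above: a stubbed-out "LLM" emits a calculator call, a separate piece of code evaluates it, and the result is spliced back into the reply. The `CALC[...]` convention and both function names are hypothetical, purely for illustration.

```python
import re

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM. Here it 'decides' to emit a calculator call
    for arithmetic it cannot do reliably on its own."""
    return "The answer is CALC[354 * 139]."

def run_tools(text: str) -> str:
    # Find CALC[...] spans, evaluate the arithmetic, and splice the result back in.
    def evaluate(match):
        expression = match.group(1)
        if not re.fullmatch(r"[0-9+\-*/ ().]+", expression):   # allow plain arithmetic only
            return match.group(0)
        return str(eval(expression))                            # toy example only
    return re.sub(r"CALC\[(.+?)\]", evaluate, text)

print(run_tools(call_llm("354 * 139 = ")))   # -> "The answer is 49206."
```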
If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call API correctly. Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents. ## Agency An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action. The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud".
https://en.wikipedia.org/wiki/Large_language_model
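The retrieval step described above amounts to a nearest-neighbour search in embedding space followed by prompt construction. The sketch below is a schematic example of mine: `embed` is a stand-in for a real text-embedding model, and the hashed bag-of-words trick exists only so the snippet runs without external services.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real text-embedding model: a hashed, unit-normalised bag of words."""
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Cosine similarity between the query vector and each (unit-normalised) document vector.
    q = embed(query)
    scores = [float(q @ embed(d)) for d in documents]
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

docs = ["Quantization lowers parameter precision.",
        "Transformers use an attention mechanism.",
        "BPE merges frequent character pairs."]
context = retrieve("how does the attention mechanism work?", docs)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: how does attention work?"
print(prompt)   # this augmented prompt would then be passed to the LLM
```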
The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment. In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and the environmental feedback it receives. The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes. Monte Carlo tree search can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as a world model.
https://en.wikipedia.org/wiki/Large_language_model
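A skeletal version of the ReAct-style loop: the model alternates thoughts and actions, the environment executes each action, and the observation is appended to the running transcript. Both `toy_llm` and `toy_environment` are stand-ins of mine; a real agent would call an actual LLM and real tools.

```python
def react_agent(llm, environment, goal, max_steps=5):
    """Minimal ReAct-style loop: the LLM 'thinks out loud', proposes an action,
    the environment executes it, and the observation extends the transcript."""
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")        # model produces a thought plus an action
        transcript += "Thought:" + step + "\n"
        action = step.split("Action:")[-1].strip()
        if action.startswith("finish"):
            return transcript
        observation = environment(action)          # execute the action in the environment
        transcript += f"Observation: {observation}\n"
    return transcript

# Toy stand-ins so the sketch runs end to end.
def toy_llm(prompt):
    return (" I should look up the answer. Action: search('capital of France')"
            if "Observation" not in prompt else " I have the answer. Action: finish('Paris')")

def toy_environment(action):
    return "Paris is the capital of France." if action.startswith("search") else ""

print(react_agent(toy_llm, toy_environment, "Name the capital of France."))
```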
Monte Carlo tree search can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as a world model. For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning. LLM-powered agents can keep a long-term memory of their previous contexts, and the memory can be retrieved in the same way as Retrieval Augmented Generation. Multiple such agents can interact socially. ## Compression Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.
https://en.wikipedia.org/wiki/Large_language_model
One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics. Post-training quantization aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be done by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights"). See the visual guide to quantization by Maarten Grootendorst. While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned.
https://en.wikipedia.org/wiki/Large_language_model
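The simplest truncation-style quantization mentioned above can be written as symmetric rounding to int8 with a single scale per tensor (or per layer). This is a hedged sketch of mine using NumPy, not a production quantization scheme.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights to int8 with one scale,
    so that dequantize(q, scale) = q * scale approximates the original tensor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(4)
w = rng.normal(0, 0.02, size=(4, 8)).astype(np.float32)     # a stand-in weight matrix
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
# Per-layer (or per-channel) scales, and keeping a few "outlier weights" in higher
# precision, reduce this error further, as the text describes.
```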
## Multimodality Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label, visual question answering for image-text to text, and speech recognition for speech to text. A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder $$ E $$ . Make a small multilayered perceptron $$ f $$ , so that for any image $$ y $$ , the post-processed vector $$ f(E(y)) $$ has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.
https://en.wikipedia.org/wiki/Large_language_model
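A shape-level sketch of the "image token" construction just described: a stand-in image encoder $$ E $$ produces patch features, and a small MLP $$ f $$ projects them into the LLM's token-embedding dimension so they can be interleaved with text tokens. All sizes and module names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
ENCODER_DIM, EMBED_DIM = 1024, 4096        # illustrative sizes

def image_encoder(image):
    """Stand-in for a trained image encoder E: returns a matrix of patch features."""
    n_patches = 16
    return rng.normal(size=(n_patches, ENCODER_DIM))

# A small two-layer MLP f, mapping encoder features to the LLM's token-embedding size.
W1 = rng.normal(scale=0.02, size=(ENCODER_DIM, EMBED_DIM))
W2 = rng.normal(scale=0.02, size=(EMBED_DIM, EMBED_DIM))

def project_to_image_tokens(image):
    h = np.maximum(image_encoder(image) @ W1, 0.0)    # ReLU
    return h @ W2                                      # shape (n_patches, EMBED_DIM): "image tokens"

text_tokens = rng.normal(size=(10, EMBED_DIM))         # embeddings of 10 text tokens
sequence = np.concatenate([project_to_image_tokens(None), text_tokens], axis=0)
print(sequence.shape)   # (26, 4096): interleaved image and text tokens fed to the LLM
```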
This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch. Google PaLM model was fine-tuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs, and video inputs. GPT-4 can use both text and image as inputs (although the vision component was not released to the public until GPT-4V); Google DeepMind's Gemini is also multimodal. Mistral introduced its own multimodal Pixtral 12B model in September 2024. ## Reasoning In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes. OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in December 2024. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs.
https://en.wikipedia.org/wiki/Large_language_model
OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in December 2024. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs. For example, on International Mathematics Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%. In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private. These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step-by-step. However, they have shown superior capabilities in domains requiring structured logical thinking, such as mathematics, scientific research, and computer programming. Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods. ## Properties
https://en.wikipedia.org/wiki/Large_language_model
Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods. ## Properties ### Scaling laws The performance of an LLM after pretraining largely depends on the following: - the cost of pretraining (the total amount of compute used), - the size of the artificial neural network itself, such as the number of parameters (i.e. the number of neurons in its layers, the number of weights between them, and biases), - the size of its pretraining dataset (i.e. the number of tokens in the corpus, $$ D $$ ). "Scaling laws" are empirical statistical laws that predict LLM performance based on such factors.
https://en.wikipedia.org/wiki/Large_language_model
### Scaling laws The performance of an LLM after pretraining largely depends on the following: - the cost of pretraining (the total amount of compute used), - the size of the artificial neural network itself, such as the number of parameters (i.e. the number of neurons in its layers, the number of weights between them, and biases), - the size of its pretraining dataset (i.e. the number of tokens in the corpus, $$ D $$ ). "Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for an LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that: $$ \begin{cases} C = C_0 ND \\[6pt] L = \frac{A}{N^\alpha} + \frac{B}{D^\beta} + L_0 \end{cases} $$ where the variables are - $$ C $$ is the cost of training the model, in FLOPs. - $$ N $$ is the number of parameters in the model. - $$ D $$ is the number of tokens in the training set. - $$ L $$ is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset. and the statistical hyper-parameters are - $$ C_0 = 6 $$ , meaning that it costs 6 FLOPs per parameter to train on one token.
Note that training cost is much higher than inference cost, as it costs only 1 to 2 FLOPs per parameter to infer on one token.

### Emergent abilities

Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of the performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly and larger models acquire "emergent abilities".
They arise from the complex interaction of the model's components and are not explicitly programmed or designed. Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance exhaustive logical processing against cognitive shortcuts (heuristics), adapting their reasoning strategies to trade off accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory. One of the emergent abilities is in-context learning from example demonstrations.
In-context learning is involved in tasks such as:
- reported arithmetic
- decoding the International Phonetic Alphabet
- unscrambling a word's letters
- disambiguating word-in-context datasets
- converting spatial words, cardinal directions (for example, replying "northeast" in response to a 3x3 grid of 8 zeros and a 1 in the top-right), and color terms represented in text
- chain-of-thought prompting: in a 2022 research paper, chain-of-thought prompting only improved performance for models with at least 62B parameters; smaller models perform better when prompted to answer immediately, without chain of thought
- identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs
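For instance, a few-shot prompt for one of these tasks (word unscrambling) might look like the following; the demonstrations are supplied entirely in the prompt, with no parameter updates, and the example words are illustrative.

```python
# Few-shot (in-context learning) prompt for word unscrambling.
# The model is expected to continue the pattern; no fine-tuning is involved.
prompt = """Unscramble the letters to form an English word.

Scrambled: tca   -> cat
Scrambled: lpepa -> apple
Scrambled: oshue -> house
Scrambled: rdiba -> """
# A completion such as "braid" would indicate that the model has
# inferred the task from the demonstrations alone.
```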
Schaeffer et al. argue that emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well. Let $$ x $$ be the parameter count and $$ y $$ the performance of the model.
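A common way to illustrate this argument is the sketch below, a toy model in the spirit of Schaeffer et al. rather than their exact construction (the power-law relation between parameter count and per-token error is an assumption for illustration): per-token accuracy improves smoothly with scale, but a metric that requires every token of a multi-token answer to be correct appears to jump abruptly.

```python
# Toy illustration: smooth per-token improvement vs. an apparently "emergent"
# exact-match metric. The power-law form below is assumed for illustration.

def per_token_accuracy(n_params: float) -> float:
    """Assumed smooth improvement of per-token accuracy with model size."""
    return 1.0 - 1.0 / n_params**0.1

def exact_match_accuracy(n_params: float, answer_len: int = 10) -> float:
    """All `answer_len` tokens must be correct for the answer to count."""
    return per_token_accuracy(n_params) ** answer_len

for n in [1e4, 1e6, 1e8, 1e10, 1e12]:
    print(f"N={n:.0e}  per-token={per_token_accuracy(n):.3f}  "
          f"exact-match={exact_match_accuracy(n):.3f}")
# Per-token accuracy improves gradually across the whole range, while the
# exact-match metric is tiny for small models and then rises much more
# steeply -- an apparent "emergence" created by the choice of metric.
```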
## Interpretation

Large language models are by themselves black boxes, and it is not clear how they perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind. Various techniques have been developed to enhance the transparency and interpretability of LLMs. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. In recent years, sparse coding models such as sparse autoencoders, transcoders, and crosscoders have emerged as promising tools for identifying interpretable features.

### Studying a replacement model

Transcoders, which are more interpretable than transformers, have been used to develop "replacement models." In one such study involving the mechanistic interpretation of writing a rhyming poem by an LLM, it was shown that although LLMs are believed to simply predict the next token, they can in fact plan ahead.

### Explainability

A related concept is AI explainability, which focuses on understanding how an AI model arrives at a given result. Techniques such as partial dependence plots, SHAP (SHapley Additive exPlanations), and feature importance assessments allow researchers to visualize and understand the contributions of various input features to the model's predictions. These methods help ensure that AI models make decisions based on relevant and fair criteria, enhancing trust and accountability. By integrating such techniques, researchers and practitioners can gain deeper insight into the operations of LLMs and support their responsible deployment.

In another example of mechanistic interpretability, researchers trained small transformers on modular arithmetic addition; when the resulting models were reverse-engineered, it turned out that they implemented the addition using a discrete Fourier transform.
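The sketch below gives a rough sense of why Fourier features suffice for modular addition; it illustrates the underlying trick, not the literal circuit recovered from a trained transformer, and the modulus is chosen here only for illustration. Representing each residue as a point on the unit circle turns addition modulo $$ p $$ into multiplication of complex phases.

```python
# Modular addition via rotations on the unit circle (Fourier-style trick).
# Illustrative only; not the reverse-engineered circuit itself.
import numpy as np

p = 113                      # modulus used for illustration

def encode(a: int) -> complex:
    """Map residue a to the phase exp(2*pi*i*a/p)."""
    return np.exp(2j * np.pi * a / p)

def mod_add(a: int, b: int) -> int:
    """Recover (a + b) mod p from the product of the two phases."""
    angle = np.angle(encode(a) * encode(b))          # phases add when multiplied
    return int(round(angle * p / (2 * np.pi))) % p

assert mod_add(57, 99) == (57 + 99) % p
assert all(mod_add(a, b) == (a + b) % p for a in range(p) for b in range(p))
```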
### Understanding and intelligence

NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense". Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?" Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation. Some researchers characterize LLMs as "alien intelligence".
For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding." In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination".
Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate". The question of LLMs exhibiting intelligence or understanding has two main aspects: the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented the Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language and, in turn, the computational properties of such neural systems that can be applied to model thought and language in a computer system.
After a framework for modeling language in computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book The Language Myth: Why Language Is Not an Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.

## Evaluation
### Perplexity

The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log-likelihood per token:
$$
\log(\text{Perplexity}) = -\frac{1}{N} \sum_{i=1}^N \log(\Pr(\text{token}_i \mid \text{context for token}_i))
$$
Here, $$ N $$ is the number of tokens in the text corpus, and "context for token $$ i $$ " depends on the specific type of LLM. If the LLM is autoregressive, then "context for token $$ i $$ " is the segment of text appearing before token $$ i $$ . If the LLM is masked, then "context for token $$ i $$ " is the segment of text surrounding token $$ i $$ .
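As a worked example of this formula, the snippet below computes perplexity from a short list of per-token probabilities (the probabilities are made up for illustration).

```python
# Perplexity from per-token probabilities (illustrative values).
import math

token_probs = [0.25, 0.10, 0.50, 0.05]   # Pr(token_i | context for token_i)

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)  # nats/token
perplexity = math.exp(avg_nll)

print(f"average negative log-likelihood: {avg_nll:.3f} nats/token")
print(f"perplexity: {perplexity:.2f}")                   # ~6.32
print(f"bits per token: {avg_nll / math.log(2):.3f}")    # log2(perplexity)
```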
Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set. This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.

#### Measures

In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as $$ \text{Entropy} = \log_2(\text{Perplexity}) $$ . Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), depending on whether the language model uses word-based or character-based tokenization. For larger language models, which predominantly employ sub-word tokenization, bits per token (BPT) is a seemingly more appropriate measure. However, due to the variance in tokenization methods across different LLMs, BPT does not serve as a reliable metric for comparative analysis among diverse models.
To convert BPT into BPW, one can multiply it by the average number of tokens per word. In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.
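The following short snippet illustrates these unit conversions with made-up numbers; the tokens-per-word figure of 1.3 is an assumed value used purely for illustration, not a property of any particular tokenizer.

```python
# Converting between perplexity, bits per token (BPT), and bits per word (BPW).
# All numbers are illustrative.
import math

perplexity = 6.0                 # per-token perplexity on some test corpus
bpt = math.log2(perplexity)      # entropy in bits per token  (~2.585)
tokens_per_word = 1.3            # assumed average for the tokenizer in question
bpw = bpt * tokens_per_word      # bits per word              (~3.36)

print(f"BPT = {bpt:.3f}, BPW = {bpw:.3f}")
```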
### Benchmarks

Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities, and results are often sensitive to the prompting method. A question answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw solely on its training. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam). LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs), StereoSet, and Parity Benchmark. Fact-checking and misinformation detection benchmarks are also available: a 2023 study compared the fact-checking accuracy of LLMs, including ChatGPT 3.5 and 4.0, Bard, and Bing AI, against independent fact-checkers such as PolitiFact and Snopes; the results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, still lagging behind human fact-checkers. An earlier standard was to evaluate a fine-tuned model on a held-out portion of an evaluation dataset; it has since become more common to evaluate a pre-trained model directly through prompting techniques.
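A minimal sketch of such prompting-based evaluation is shown below: each benchmark item is formatted as a prompt, the model's answer is compared with the reference, and accuracy is reported. The `ask_model` function is a hypothetical stand-in for whatever LLM API is being evaluated, and the two benchmark items are toy examples.

```python
# Prompting-based evaluation sketch for a closed-book QA benchmark.
# `ask_model` is a placeholder for a real LLM call (hypothetical).

BENCHMARK = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "How many legs does a spider have?", "answer": "8"},
]

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM API call; always answers 'Paris' here."""
    return "Paris"

def evaluate(benchmark: list[dict]) -> float:
    correct = 0
    for item in benchmark:
        prompt = f"Question: {item['question']}\nAnswer:"
        prediction = ask_model(prompt).strip()
        correct += int(prediction.lower() == item["answer"].lower())
    return correct / len(benchmark)

print(f"accuracy: {evaluate(BENCHMARK):.0%}")   # 50% with the stand-in model
```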