In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
## Context
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
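The asymmetry between finding and checking a solution can be made concrete with a small sketch. The following Python snippet (illustrative only; the CNF encoding and function names are assumptions, not from any standard library) treats Boolean satisfiability: checking a proposed assignment against a formula takes time linear in the size of the formula, while the obvious search tries up to 2^n assignments.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of nonzero ints.
# Literal v means "variable v is true", -v means "variable v is false".
# Example: (x1 or not x2) and (x2 or x3)  ->  [[1, -2], [2, 3]]

def verify(formula, assignment):
    """Check a proposed assignment (dict var -> bool) against the formula.
    Runs in time linear in the total number of literals."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def solve_brute_force(formula, num_vars):
    """Try all 2**num_vars assignments; exponential time in the worst case."""
    for values in product([False, True], repeat=num_vars):
        assignment = {v + 1: values[v] for v in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

if __name__ == "__main__":
    formula = [[1, -2], [2, 3], [-1, -3]]
    model = solve_brute_force(formula, num_vars=3)
    print("satisfying assignment:", model)
    print("verified in linear time:", verify(formula, model))
```

Whether some cleverer algorithm can always avoid the exponential search is exactly the question being asked here.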
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP.
These polls do not imply whether P = NP. Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
## NP-completeness
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
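The mechanics of such a transformation can be sketched with a deliberately simple example that is not one of the reductions discussed in this article: graph 3-colorability reduces to Boolean satisfiability. The hedged Python sketch below (hypothetical helper names, CNF encoded as lists of integer literals) builds, in polynomial time, a formula that is satisfiable exactly when the graph is 3-colorable.

```python
def three_coloring_to_cnf(num_vertices, edges):
    """Reduce 3-colorability to CNF-SAT in polynomial time.

    Boolean variable x(v, c), encoded as a positive integer, means
    "vertex v gets color c", for c in {0, 1, 2}.
    Returns a list of clauses; each clause is a list of nonzero ints.
    """
    def var(v, c):
        return 3 * v + c + 1  # distinct positive integer id for x(v, c)

    clauses = []
    for v in range(num_vertices):
        # Each vertex receives at least one color ...
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])
        # ... and at most one color.
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])
    # Adjacent vertices never share a color.
    for u, v in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

# A triangle is 3-colorable, so the resulting formula is satisfiable.
print(three_coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)]))
```

The output could be handed to any SAT solver (or to the brute-force checker sketched earlier), which is the sense in which a fast SAT algorithm would immediately yield a fast 3-coloring algorithm.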
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input.
Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tripartite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability.
So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial-time solution of satisfiability, which in turn can be used to solve any other NP problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
## Harder problems
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it.
A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least
$$
2^{2^{cn}}
$$
for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem.
They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells whether at least one solution exists (the count is greater than zero). Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example, linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many.
Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial-time solution to any of them would allow a polynomial-time solution to all other #P problems.
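A standard illustration of this gap, not specific to the article text, is disjunctive normal form: deciding whether a DNF formula is satisfiable only requires finding one term with no contradictory literals, which takes linear time, while counting its satisfying assignments (#DNF) is #P-complete. A minimal Python sketch under that encoding assumption:

```python
from itertools import product

# A DNF formula is a list of terms; each term is a list of nonzero ints
# (v means variable v is true, -v means variable v is false).

def dnf_satisfiable(terms):
    """Deciding satisfiability of a DNF formula is easy (linear time):
    it is satisfiable iff some term contains no contradictory literals."""
    return any(not any(-lit in term for lit in term) for term in terms)

def dnf_count(terms, num_vars):
    """Counting satisfying assignments (#DNF) is #P-complete in general;
    this brute-force counter takes 2**num_vars steps."""
    count = 0
    for values in product([False, True], repeat=num_vars):
        if any(all(values[abs(lit) - 1] == (lit > 0) for lit in term)
               for term in terms):
            count += 1
    return count

terms = [[1, 2], [-1, 3]]          # (x1 and x2) or (not x1 and x3)
print(dnf_satisfiable(terms))      # True, decided in linear time
print(dnf_count(terms, 3))         # 4, found only by exponential enumeration here
```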
## Problems in NP not known to be in P or NP-complete
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.
If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than a given bound k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP).
The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time
$$
O\left (\exp \left ( \left (\tfrac{64n}{9} \log(2) \right )^{\frac{1}{3}} \left ( \log(n\log(2)) \right )^{\frac{2}{3}} \right) \right )
$$
to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
## Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical.
For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than
$$
2 \uparrow \uparrow (2 \uparrow \uparrow (2 \uparrow \uparrow (h/2) ) )
$$
(using Knuth's up-arrow notation), where h is the number of vertices in H.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low.
An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
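The knapsack problem mentioned above is another good example: although it is NP-complete, the textbook dynamic-programming algorithm runs in O(n·W) time for n items and capacity W, which is fast whenever the capacity is a moderate number (pseudo-polynomial time, not polynomial in the input length). A minimal sketch with illustrative data:

```python
def knapsack_max_value(weights, values, capacity):
    """0/1 knapsack by dynamic programming: O(len(weights) * capacity) time.
    Pseudo-polynomial, so it is practical when the capacity is moderate even
    though the general problem is NP-complete."""
    best = [0] * (capacity + 1)  # best[c] = maximum value achievable with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # go downward so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Illustrative instance: the optimum is 220 (take the second and third items).
print(knapsack_max_value([10, 20, 30], [60, 100, 120], capacity=50))
```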
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
## Reasons to believe P ≠ NP or P = NP
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems).
These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, several such statements were made in 2002.
### DLIN vs NLIN
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known that DLIN ≠ NLIN.
## Consequences of solution
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
### P = NP
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice.
In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
- Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet.
- Symmetric ciphers such as AES or 3DES, used for the encryption of communications data.
- Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT.
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself.
Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics.
Stephen Cook, assuming not only a proof but a practically efficient algorithm, has made a similar point.
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a proof of "reasonable" size exists would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof.
### P ≠ NP
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems.
Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems.
The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
## Results about difficulty of proof
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP:
- Relativizing proofs: Imagine a world where every algorithm is allowed to make queries to some fixed subroutine called an oracle (which can answer a fixed set of questions in constant time, such as an oracle that solves any traveling salesman problem in one step), and the running time of the oracle is not counted against the running time of the algorithm. Most proofs (especially classical ones) apply uniformly in a world with oracles regardless of what the oracle does. These proofs are called relativizing. In 1975, Baker, Gill, and Solovay showed that P = NP with respect to some oracles, while P ≠ NP for other oracles. As relativizing proofs can only prove statements that are true for all possible oracles, these techniques cannot resolve P = NP.
- Natural proofs: In 1993, Alexander Razborov and Steven Rudich defined a general class of proof techniques for circuit complexity lower bounds, called natural proofs. At the time, all previously known circuit lower bounds were natural, and circuit complexity was considered a very promising approach for resolving P = NP. However, Razborov and Rudich showed that if one-way functions exist, P and NP are indistinguishable to natural proof methods. Although the existence of one-way functions is unproven, most mathematicians believe that they do exist, and a proof of their existence would be a much stronger statement than P ≠ NP. Thus it is unlikely that natural proofs alone can resolve P = NP.
- Algebrizing proofs: After the Baker–Gill–Solovay result, new non-relativizing proof techniques were successfully used to prove that IP = PSPACE. However, in 2008, Scott Aaronson and Avi Wigderson showed that the main technical tool used in the IP = PSPACE proof, known as arithmetization, was also insufficient to resolve P = NP. Arithmetization converts the operations of an algorithm to algebraic and basic arithmetic symbols and then uses those to analyze its workings. In the IP = PSPACE proof, the black box and the Boolean circuits are converted to an algebraic problem. As mentioned previously, it has been proven that this method is not viable for resolving P = NP and other time complexity questions.
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
## Logical characterizations
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic.
Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
## Polynomial-time algorithms
No known algorithm for an NP-complete problem runs in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical).
However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and it runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2^b − 1 other programs first.
## Formal definitions
### P and NP
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no".
If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. Meaning,
$$
\mathsf{P} = \{ L : L=L(M) \text{ for some deterministic polynomial-time Turing machine } M \}
$$
where
$$
L(M) = \{ w\in\Sigma^{*}: M \text{ accepts } w \}
$$
and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions:
1. M halts on all inputs w and
1. there exists $k \in \mathbb{N}$ such that $T_M(n) \in O(n^k)$, where O refers to the big O notation and
$$
T_M(n) = \max\{ t_M(w) : w\in\Sigma^{*}, |w| = n \}
$$
$$
t_M(w) = \text{ number of steps }M\text{ takes to halt on input }w.
$$
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages over a finite alphabet that have a verifier running in polynomial time. The following defines a "verifier":
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation
$$
R\subset\Sigma^{*}\times\Sigma^{*}
$$
and a positive integer k such that the following two conditions are satisfied:
1. for all $x \in \Sigma^{*}$, $x \in L$ if and only if there exists $y \in \Sigma^{*}$ such that $(x, y) \in R$ and $|y| \in O(|x|^{k})$; and
1. the language $L_{R} = \{ x\# y : (x, y) \in R \}$ over $\Sigma \cup \{\#\}$ is decidable by a deterministic Turing machine in polynomial time.
A Turing machine that decides $L_{R}$ is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
### Example
Let
$$
\mathrm{COMPOSITE} = \left \{x\in\mathbb{N} \mid x=pq \text{ for integers } p, q > 1 \right \}
$$
$$
R = \left \{(x,y)\in\mathbb{N} \times\mathbb{N} \mid 1<y \leq \sqrt x \text{ and } y \text{ divides } x \right \}.
$$
Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE.
It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
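A minimal sketch of the verifier implied by this relation R (illustrative Python, not part of the formal definition): given x and a proposed certificate y, checking that 1 < y ≤ √x and that y divides x takes time polynomial in the number of bits of x.

```python
def verify_composite(x: int, y: int) -> bool:
    """Check (x, y) ∈ R: y is a nontrivial divisor of x with 1 < y <= sqrt(x).
    Runs in time polynomial in the bit length of x."""
    return 1 < y and y * y <= x and x % y == 0

# 91 = 7 * 13, so y = 7 is a certificate that 91 ∈ COMPOSITE.
print(verify_composite(91, 7))    # True
print(verify_composite(97, 7))    # False (97 is prime; no certificate exists)
```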
### NP-completeness
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
1. L ∈ NP; and
1. any L′ in NP is polynomial-time-reducible to L (written as $L' \leq_{p} L$), where $L' \leq_{p} L$ if, and only if, the following two conditions are satisfied:
1. there exists f : Σ* → Σ* such that for all w in Σ* we have $(w \in L' \Leftrightarrow f(w) \in L)$; and
1. there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
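A deliberately small example of such a reduction function f, chosen for clarity rather than taken from this article: a graph with n vertices has an independent set of size k exactly when it has a vertex cover of size n − k, so mapping (G, k) to (G, n − k) is a polynomial-time many-one reduction from INDEPENDENT-SET to VERTEX-COVER. A minimal Python sketch:

```python
def reduce_independent_set_to_vertex_cover(num_vertices, edges, k):
    """A polynomial-time many-one reduction f: (G, k) -> (G, n - k).

    G has an independent set of size k iff G has a vertex cover of
    size n - k (the complement of an independent set is a vertex cover).
    The instance is merely copied and one number is changed, so f is
    computable in polynomial (here linear) time.
    """
    return num_vertices, list(edges), num_vertices - k

# Path on 3 vertices: {0, 2} is an independent set of size 2,
# and its complement {1} is a vertex cover of size 1 = 3 - 2.
print(reduce_independent_set_to_vertex_cover(3, [(0, 1), (1, 2)], 2))
```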
## Claimed solutions
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions.
Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted.
## Popular culture
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In the sixth episode of The Simpsons' seventh season, "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.
## Similar problems
- R vs. RE problem, where R is the analog of the class P, and RE is the analog of the class NP. These classes are not equal, because undecidable but verifiable problems do exist, for example, Hilbert's tenth problem, which is RE-complete.
- A similar problem exists in the theory of algebraic complexity: VP vs. VNP problem. Like P vs. NP, the answer is currently unknown.
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.
The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail.
Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, and not just the infrastructure cost, must be considered.
A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
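As a minimal, self-contained sketch of that idea (illustrative only, using Python's multiprocessing module as a stand-in for separate networked computers): a problem is divided into tasks, and worker processes receive tasks and return results purely by passing messages through queues, with no shared memory.

```python
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue) -> None:
    """Each worker keeps only local state and communicates by messages."""
    while True:
        task = tasks.get()
        if task is None:                      # sentinel: no more work
            break
        results.put((task, task * task))      # toy computation: square the number

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results)) for _ in range(3)]
    for p in workers:
        p.start()
    for n in range(10):                       # divide the problem into tasks
        tasks.put(n)
    for _ in workers:                         # one sentinel per worker
        tasks.put(None)
    answers = dict(results.get() for _ in range(10))
    for p in workers:
        p.join()
    print(answers)
```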
## Introduction
The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are commonly used:
- There are several autonomous computational entities (computers or nodes), each of which has its own local memory.
- The entities communicate with each other by message passing.
A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include the following:
- The system has to tolerate failures in individual computers.
- The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
- Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.
## Patterns
Here are common architectural patterns used for distributed computing:
- Saga interaction pattern
- Microservices
- Event driven architecture
## Events vs. Messages
In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself.
In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics.
Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one).
While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination.
Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements.
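The contrast can be sketched with a toy in-memory broker (hypothetical code, not any real messaging product): an event is fanned out to every subscriber, while a command message is consumed by exactly one handler.

```python
from collections import defaultdict, deque

class ToyBroker:
    """In-memory sketch: pub/sub fan-out for events, point-to-point queues for commands."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # event type -> list of handlers
        self.queues = defaultdict(deque)       # queue name -> pending commands

    # --- events: one-to-many, fire-and-forget ---
    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)                    # every subscriber sees the event

    # --- messages/commands: one-to-one ---
    def send(self, queue_name, command):
        self.queues[queue_name].append(command)

    def receive(self, queue_name):
        return self.queues[queue_name].popleft()  # exactly one consumer gets it

broker = ToyBroker()
broker.subscribe("OrderPlaced", lambda e: print("billing saw", e))
broker.subscribe("OrderPlaced", lambda e: print("shipping saw", e))
broker.publish("OrderPlaced", {"order_id": 42})           # broadcast to both subscribers
broker.send("payments", {"cmd": "ProcessPayment", "order_id": 42})
print("payment worker got", broker.receive("payments"))   # consumed once
```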
## Parallel and distributed computing
Distributed systems are groups of networked computers which share a common goal for their work.
The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing.
Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:
- In parallel computing, all processors may have access to a shared memory to exchange information between processors.
- In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
## History
The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application.
In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.
The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs.
## Architectures
Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
Whether these CPUs share resources or not determines a first distinction between three types of architecture:
- Shared memory
- Shared disk
- Shared nothing.
Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling.
- Client–server: architectures where smart clients contact the server for data, then format and display it to the users (a minimal sketch appears after this list). Input at the client is committed back to the server when it represents a permanent change.
- Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
- n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
- Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database.
### Cell-Based Architecture
Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability.
A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures.
Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational.
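To make the circuit-breaker idea concrete, the following Python sketch simulates request routing across cells; the class and function names (CircuitBreaker, Cell, route) are illustrative assumptions, not part of any particular framework.

```python
import random

class CircuitBreaker:
    """Very small circuit breaker: opens after `threshold` consecutive failures."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def record(self, success):
        self.failures = 0 if success else self.failures + 1

class Cell:
    """A self-contained unit of services; `healthy` simulates its current state."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.breaker = CircuitBreaker()

    def handle(self, request):
        ok = self.healthy and random.random() > 0.05   # occasional transient errors
        self.breaker.record(ok)
        return ok

def route(cells, request):
    """Send the request to the first cell whose breaker is closed; reroute on failure."""
    for cell in cells:
        if cell.breaker.open:
            continue                      # isolate the failing cell
        if cell.handle(request):
            return cell.name              # served successfully
    return None                           # every cell unavailable

cells = [Cell("cell-a", healthy=False), Cell("cell-b"), Cell("cell-c")]
served_by = [route(cells, f"req-{i}") for i in range(10)]
print(served_by)   # after a few failures, cell-a is skipped and cell-b serves requests
```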
Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives.
## Applications
Reasons for using distributed systems and distributed computing may include:
- The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.
- There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example:
- It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine.
- It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
- It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer.
Examples
Examples of distributed systems and applications of distributed computing include the following:
- telecommunications networks:
- telephone networks and cellular networks,
- computer networks such as the Internet,
- wireless sensor networks,
- routing algorithms;
- network applications:
- World Wide Web and peer-to-peer networks,
- massively multiplayer online games and virtual reality communities,
- distributed databases and distributed database management systems,
- network file systems,
- distributed cache such as burst buffers,
- distributed information processing systems such as banking systems and airline reservation systems;
- real-time process control:
- aircraft control systems,
- industrial control systems;
- parallel computation:
- scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects,
- distributed rendering in computer graphics.
- peer-to-peer
## Reactive distributed systems
According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic, and message-driven; as a result, they are more flexible, loosely coupled, and scalable. To make a system reactive, it is advised to apply the Reactive Principles.
The Reactive Principles are a set of principles and patterns that help make cloud-native as well as edge-native applications more reactive.
## Theoretical foundations
### Models
Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.
Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output.
|
https://en.wikipedia.org/wiki/Distributed_computing
|
Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.
The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?
The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
Parallel algorithms in shared-memory model
- All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor.
- One theoretical model is the parallel random-access machine (PRAM). However, the classical PRAM model assumes synchronous access to the shared memory.
- Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems.
- A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature.
Parallel algorithms in message-passing model
- The algorithm designer chooses the structure of the network, as well as the program executed by each computer.
- Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer.
Distributed algorithms in message-passing model
- The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.
- A commonly used model is a graph with one finite-state machine per node.
In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.
### An example
Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:
Centralized algorithms
- The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.
Parallel algorithms
- Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.
- The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel.
Distributed algorithms
- The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output.
- The main focus is on coordinating the operation of an arbitrary distributed system.
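As a rough illustration of this setting, the Python sketch below simulates a synchronous network in which each node sees only messages from its immediate neighbours and outputs its own colour. It is a toy greedy procedure under assumed unique node identifiers, not the Cole–Vishkin algorithm, and the graph is invented for the example.

```python
# Each node knows only its neighbours. In every synchronous round, a node that has
# the highest identifier among its still-uncoloured neighbours picks the smallest
# colour not used by any already-coloured neighbour, then announces it.
graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
colour = {}                        # node -> chosen colour (the output of that node)

while len(colour) < len(graph):
    # one communication round: every uncoloured node inspects its local neighbourhood only
    decisions = {}
    for v in graph:
        if v in colour:
            continue
        uncoloured_nbrs = [u for u in graph[v] if u not in colour]
        if all(v > u for u in uncoloured_nbrs):            # local symmetry breaking
            used = {colour[u] for u in graph[v] if u in colour}
            decisions[v] = min(c for c in range(len(graph[v]) + 1) if c not in used)
    colour.update(decisions)       # all decisions of this round take effect together

print(colour)   # e.g. {4: 0, 3: 1, 2: 2, 1: 0} – a proper colouring of the graph
```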
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).
### Complexity measures
In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa.
In the analysis of distributed algorithms, more attention is usually paid on communication operations than computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.
This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds).
On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with the running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model.
Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity).
The features of this concept are typically captured by the CONGEST(B) model, which is defined like the LOCAL model except that single messages can only contain B bits.
### Other problems
Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation.
Much research is also focused on understanding the asynchronous nature of distributed systems:
- Synchronizers can be used to run synchronous algorithms in asynchronous systems.
- Logical clocks provide a causal happened-before ordering of events.
- Clock synchronization algorithms provide globally consistent physical time stamps.
Note that in distributed systems, latency should be measured through "99th percentile" because "median" and "average" can be misleading.
### Election
Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.
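A minimal Python sketch of this highest-identifier rule, assuming a synchronous message-passing network whose diameter is known in advance; the topology and identifiers are invented for the example and no failures are modelled.

```python
# Toy synchronous flooding election: every node repeatedly forwards the highest
# identifier it has heard of; after a number of rounds equal to the network
# diameter, all nodes agree that the node with the maximum identifier is the
# coordinator. Real algorithms also handle termination detection and failures.
neighbours = {5: [12], 12: [5, 7], 7: [12, 20], 20: [7]}   # a path network
best = {v: v for v in neighbours}                          # highest id seen so far

diameter = 3
for _ in range(diameter):
    # collect the messages of this round before applying any updates (synchronous step)
    messages = {v: [best[u] for u in neighbours[v]] for v in neighbours}
    for v, received in messages.items():
        best[v] = max([best[v]] + received)

print(best)    # every node elects 20 as the coordinator
```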
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.
Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran.
In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.
### Properties of distributed systems
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.
The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.
However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete, i.e., it is decidable, but not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback such as negative or positive feedback, where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.
## Mathematical foundations
### Activation function
The two historically common activation functions are both sigmoids, and are described by
$$
y(v_i) = \tanh(v_i) ~~ \textrm{and} ~~ y(v_i) = (1+e^{-v_i})^{-1}
$$
.
The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here
$$
y_i
$$
is the output of the
$$
i
$$
th node (neuron) and
$$
v_i
$$
is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).
In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
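For reference, the two sigmoids and the ReLU mentioned above can be written directly, for example in Python:

```python
import math

def tanh_act(v):                       # hyperbolic tangent, ranges from -1 to 1
    return math.tanh(v)

def logistic(v):                       # logistic function, ranges from 0 to 1
    return 1.0 / (1.0 + math.exp(-v))

def relu(v):                           # rectified linear unit
    return max(0.0, v)

for v in (-2.0, 0.0, 2.0):
    print(v, tanh_act(v), logistic(v), relu(v))
```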
### Learning
Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation.
We can represent the degree of error in an output node
$$
j
$$
in the
$$
n
$$
th data point (training example) by
$$
e_j(n)=d_j(n)-y_j(n)
$$
, where
$$
d_j(n)
$$
is the desired target value for
$$
n
$$
th data point at node
$$
j
$$
, and
$$
y_j(n)
$$
is the value produced at node
$$
j
$$
when the
$$
n
$$
th data point is given as an input.
The node weights can then be adjusted based on corrections that minimize the error in the entire output for the
$$
n
$$
th data point, given by
$$
\mathcal{E}(n)=\frac{1}{2}\sum_{\text{output node }j} e_j^2(n)
$$
.
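As a small worked instance of these definitions (the target and output values below are made up for illustration):

```python
# Per-node error and total error for one training example, following the
# definitions above (d: desired outputs, y: produced outputs, indexed by node).
d = {0: 1.0, 1: 0.0}          # desired target values d_j(n)
y = {0: 0.8, 1: 0.3}          # values produced at the output nodes y_j(n)

e = {j: d[j] - y[j] for j in d}                     # e_j(n) = d_j(n) - y_j(n)
E = 0.5 * sum(err ** 2 for err in e.values())       # E(n) = 1/2 * sum_j e_j(n)^2
print(e, E)                                         # {0: 0.2, 1: -0.3} 0.065
```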
Using gradient descent, the change in each weight
$$
w_{ji}
$$
is
$$
\Delta w_{ji} (n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n)
$$
where
$$
y_i(n)
$$
is the output of the previous neuron
$$
i
$$
, and
$$
\eta
$$
is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression,
$$
\frac{\partial\mathcal{E}(n)}{\partial v_j(n)}
$$
denotes the partial derivative of the error
$$
\mathcal{E}(n)
$$
with respect to the weighted sum
$$
v_j(n)
$$
of the input connections of neuron
$$
j
$$
.
The derivative to be calculated depends on the induced local field
$$
v_j
$$
, which itself varies. It is easy to prove that for an output node this derivative can be simplified to
$$
-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\phi^\prime (v_j(n))
$$
where
$$
\phi^\prime
$$
is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is
$$
-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi^\prime (v_j(n))\sum_k -\frac{\partial\mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n)
$$
.
This depends on the change in weights of the
$$
k
$$
th nodes, which represent the output layer. So, to change the hidden-layer weights, the output-layer error terms are propagated backwards through the output weights and the derivative of the activation function; it is in this sense that the algorithm performs a backpropagation of the error.
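The following Python sketch applies these update rules once to a tiny network with one hidden layer and a logistic activation; the layer sizes, initial weights, and learning rate are arbitrary choices for illustration.

```python
import math

def phi(v):                 # logistic activation
    return 1.0 / (1.0 + math.exp(-v))

def phi_prime(v):           # derivative of the logistic activation
    s = phi(v)
    return s * (1.0 - s)

eta = 0.5                                    # learning rate
x = [1.0, 0.5]                               # inputs (outputs of the input layer)
w_hid = [[0.1, -0.2], [0.4, 0.3]]            # w_ji: hidden node j <- input i
w_out = [0.2, -0.1]                          # w_kj: output node k=0 <- hidden node j
d = 1.0                                      # desired output for this data point

# forward pass
v_hid = [sum(w_hid[j][i] * x[i] for i in range(2)) for j in range(2)]
y_hid = [phi(v) for v in v_hid]
v_out = sum(w_out[j] * y_hid[j] for j in range(2))
y_out = phi(v_out)

# output node: -dE/dv_k = e * phi'(v_k)
delta_out = (d - y_out) * phi_prime(v_out)

# hidden nodes: -dE/dv_j = phi'(v_j) * sum_k (-dE/dv_k) * w_kj  (using old weights)
delta_hid = [phi_prime(v_hid[j]) * delta_out * w_out[j] for j in range(2)]

# weight updates: delta_w_ji = eta * (-dE/dv_j) * y_i
w_out = [w_out[j] + eta * delta_out * y_hid[j] for j in range(2)]
w_hid = [[w_hid[j][i] + eta * delta_hid[j] * x[i] for i in range(2)]
         for j in range(2)]

print(round(y_out, 4), w_out, w_hid)
```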
## History
### Timeline
- Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network which consists of a single weight layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data.
- In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.
- In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
- In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
- In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971.
- In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearily separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.
- In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis. G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
- In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors.
### Linear regression
### Perceptron
If using a threshold, i.e. a linear activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1] despite the limited computational power of single unit with a linear threshold function.
Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
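A minimal sketch of the delta rule in Python, training a single linear threshold unit on the logical AND function; the learning rate and number of passes are arbitrary choices for illustration.

```python
# Minimal perceptron trained with the delta rule; step() is the threshold activation.
def step(v):
    return 1 if v >= 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
w = [0.0, 0.0]
b = 0.0
eta = 0.1

for _ in range(20):                      # a few passes over the training set
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out               # error between sample output and calculated output
        w[0] += eta * err * x1           # adjust weights in the direction of the error
        w[1] += eta * err * x2
        b += eta * err

print(w, b, [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])   # [0, 0, 0, 1]
```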
### Multilayer perceptron
A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the synonym sometimes used, fully connected network (FCN)), often with a nonlinear kind of activation function, organized in at least three layers, notable for being able to distinguish data that is not linearly separable.
## Other feedforward networks
Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
## Informal definition
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1:
The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.
This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not.
## Formal definition
A real number a is computable if it can be approximated by some computable function
$$
f:\mathbb{N}\to\mathbb{Z}
$$
in the following manner: given any positive integer n, the function produces an integer f(n) such that:
$$
{f(n)-1\over n} \leq a \leq {f(n)+1\over n}.
$$
A complex number is called computable if its real and imaginary parts are computable.
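As a concrete instance of this definition, the square root of 2 is computable: the Python sketch below returns an integer f(n) with (f(n) − 1)/n ≤ √2 ≤ (f(n) + 1)/n, using only exact integer arithmetic.

```python
from math import isqrt

# f(n) = floor(n * sqrt(2)) is an integer satisfying
# (f(n) - 1) / n <= sqrt(2) <= (f(n) + 1) / n.
def f(n: int) -> int:
    return isqrt(2 * n * n)

for n in (10, 100, 10**6):
    print(f(n) / n)        # 1.4, 1.41, 1.414213 – increasingly precise approximations
```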
Equivalent definitions
There are two similar definitions that are equivalent:
- There exists a computable function which, given any positive rational error bound
$$
\varepsilon
$$
, produces a rational number r such that
$$
|r - a| \leq \varepsilon.
$$
- There is a computable sequence of rational numbers
$$
q_i
$$
converging to
$$
a
$$
such that
$$
|q_i - q_{i+1}| < 2^{-i}\,
$$
for each i.
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function
$$
D\;
$$
which when provided with a rational number
$$
r
$$
as input returns
$$
D(r)=\mathrm{true}\;
$$
or
$$
D(r)=\mathrm{false}\;
$$
, satisfying the following conditions:
$$
\exists r D(r)=\mathrm{true}\;
$$
$$
\exists r D(r)=\mathrm{false}\;
$$
$$
(D(r)=\mathrm{true}) \wedge (D(s)=\mathrm{false}) \Rightarrow r<s\;
$$
$$
D(r)=\mathrm{true} \Rightarrow \exist s>r, D(s)=\mathrm{true}.\;
$$
An example is given by a program D that defines the cube root of 3. Assuming
$$
q>0\;
$$
this is defined by:
$$
p^3<3 q^3 \Rightarrow D(p/q)=\mathrm{true}\;
$$
$$
p^3>3 q^3 \Rightarrow D(p/q)=\mathrm{false}.\;
$$
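Written as a program (a sketch; the function name D and the representation of the rational as an integer pair p, q are just one possible choice):

```python
# D(p, q) decides whether the rational p/q (with q > 0) lies below the cut for
# the cube root of 3, using only integer arithmetic; equality p**3 == 3 * q**3
# can never occur because the cube root of 3 is irrational.
def D(p: int, q: int) -> bool:
    assert q > 0
    return p ** 3 < 3 * q ** 3

print(D(1, 1), D(36, 25), D(3, 2))   # True, True, False  (cube root of 3 is about 1.4422)
```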
A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
## Properties
### Not computably enumerable
Assigning a Gödel number to each Turing machine definition produces a subset
$$
S
$$
of the natural numbers corresponding to the computable numbers and identifies a surjection from
$$
S
$$
to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set
$$
S
$$
of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of
$$
S
$$
that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′.
Consequently, there is no surjective computable function from the natural numbers to the set
$$
S
$$
of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number
$$
x,
$$
the well ordering principle provides that there is a minimal element in
$$
S
$$
which corresponds to
$$
x
$$
, and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
### Properties as a field
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero.
These operations are actually uniformly computable; for example, there is a Turing machine which on input (A,B,
$$
\epsilon
$$
) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an
$$
\epsilon
$$
approximation of a + b.
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
### Non-computability of the ordering
The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number
$$
a
$$
. Then there is no Turing machine which on input A outputs "YES" if
$$
a > 0
$$
and "NO" if
$$
a \le 0.
$$
To see why, suppose the machine described by A keeps outputting 0 as
$$
\epsilon
$$
approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers
$$
a
$$
and
$$
b
$$
, where
$$
a \ne b
$$
, and outputs whether
$$
a < b
$$
or
$$
a > b.
$$
It is sufficient to use
$$
\epsilon
$$
-approximations where
$$
\epsilon < |b-a|/2,
$$
so by taking increasingly small
$$
\epsilon
$$
(approaching 0), one eventually can decide whether
$$
a < b
$$
or
$$
a > b.
$$
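A sketch of this procedure in Python, assuming each computable real is given as a function returning a rational ε-approximation; the helper approximation programs shown here are invented for the example.

```python
from fractions import Fraction
from math import isqrt

def less_than(approx_a, approx_b) -> bool:
    """Decide a < b for two *unequal* computable reals given eps-approximation programs.

    Keep halving eps until the two approximations are more than 2*eps apart;
    if a == b this loop never terminates, matching the discussion above.
    """
    eps = Fraction(1, 2)
    while True:
        ra, rb = approx_a(eps), approx_b(eps)
        if abs(ra - rb) > 2 * eps:
            return ra < rb
        eps /= 2

def sqrt2(eps: Fraction) -> Fraction:       # returns a rational within eps of sqrt(2)
    n = 1
    while Fraction(1, n) > eps:
        n *= 2
    return Fraction(isqrt(2 * n * n), n)

def three_halves(eps: Fraction) -> Fraction:
    return Fraction(3, 2)                   # already exact for every eps

print(less_than(sqrt2, three_halves))       # True, since sqrt(2) < 3/2
```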
### Other properties
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including:
- any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme.
- Chaitin's constant,
$$
\Omega
$$
, which is a type of real number that is Turing equivalent to the halting problem.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine.
A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
## Digit strings and the Cantor and Baire spaces
Turing's original paper defined computable numbers as follows:
(The decimal expansion of a only refers to the digits following the decimal point.)
Turing was aware that this definition is equivalent to the
$$
\epsilon
$$
-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the
$$
\epsilon
$$
sense: if
$$
n > \log_{10} (1/\epsilon)
$$
, then the first n digits of the decimal expansion for a provide an
$$
\epsilon
$$
approximation of a. For the converse, we pick an
$$
\epsilon
$$
computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a but it may improperly end in an infinite sequence of 9's in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of
$$
2^{\omega}
$$
(total 0,1 valued functions) instead of real numbers in
$$
[0,1]
$$
. The members of
$$
2^{\omega}
$$
can be identified with binary decimal expansions, but since the decimal expansions
$$
.d_1d_2\ldots d_n0111\ldots
$$
and
$$
.d_1d_2\ldots d_n10
$$
denote the same real number, the interval
$$
[0,1]
$$
can only be bijectively (and homeomorphically under the subset topology) identified with the subset of
$$
2^{\omega}
$$
not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the
$$
\epsilon
$$
approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces
$$
\epsilon
$$
approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location.
This lack of uniformity is one reason why the contemporary definition of computable numbers uses
$$
\epsilon
$$
approximations rather than decimal expansions.
However, from a computability theoretic or measure theoretic perspective, the two structures
$$
2^{\omega}
$$
and
$$
[0,1]
$$
are essentially identical. Thus, computability theorists often refer to members of
$$
2^{\omega}
$$
as reals. While
$$
2^{\omega}
$$
is totally disconnected, for questions about
$$
\Pi^0_1
$$
classes or randomness it is easier to work in
$$
2^{\omega}
$$
.
Elements of
$$
\omega^{\omega}
$$
are sometimes called reals as well and though containing a homeomorphic image of
$$
\mathbb{R}
$$
,
$$
\omega^{\omega}
$$
isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the
$$
x \in \mathbb{R}
$$
satisfying
$$
\forall(n \in \omega)\phi(x,n)
$$
, with
$$
\phi(x,n)
$$
quantifier free, must be computable while the unique
$$
x \in \omega^{\omega}
$$
satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
## Use in place of the reals
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics.
This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
## Implementations of exact arithmetic
Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq), and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the package.
In computer science, a red–black tree is a self-balancing binary search tree data structure noted for fast storage and retrieval of ordered information. The nodes in a red–black tree hold an extra "color" bit, often drawn as red and black, which helps ensure that the tree is always approximately balanced.
When the tree is modified, the new tree is rearranged and "repainted" to restore the coloring properties that constrain how unbalanced the tree can become in the worst case. The properties are designed such that this rearranging and recoloring can be performed efficiently.
The (re-)balancing is not perfect, but guarantees searching in
$$
O(\log n)
$$
time, where
$$
n
$$
is the number of entries in the tree. The insert and delete operations, along with tree rearrangement and recoloring, also execute in
$$
O(\log n)
$$
time.
Tracking the color of each node requires only one bit of information per node because there are only two colors (due to memory alignment present in some programming languages, the real memory consumption may differ). The tree does not contain any other data specific to it being a red–black tree, so its memory footprint is almost identical to that of a classic (uncolored) binary search tree. In some cases, the added bit of information can be stored at no added memory cost.
## History
In 1972, Rudolf Bayer invented a data structure that was a special order-4 case of a B-tree. These trees maintained all paths from root to leaf with the same number of nodes, creating perfectly balanced trees. However, they were not binary search trees. Bayer called them a "symmetric binary B-tree" in his paper and later they became popular as 2–3–4 trees or even 2–3 trees.
In a 1978 paper, "A Dichromatic Framework for Balanced Trees", Leonidas J. Guibas and Robert Sedgewick derived the red–black tree from the symmetric binary B-tree. The color "red" was chosen because it was the best-looking color produced by the color laser printer available to the authors while working at Xerox PARC. Another response from Guibas states that it was because of the red and black pens available to them to draw the trees.
In 1993, Arne Andersson introduced the idea of a right leaning tree to simplify insert and delete operations.
In 1999, Chris Okasaki showed how to make the insert operation purely functional.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
In 1993, Arne Andersson introduced the idea of a right leaning tree to simplify insert and delete operations.
In 1999, Chris Okasaki showed how to make the insert operation purely functional. Its balance function needed to take care of only 4 unbalanced cases and one default balanced case.
The original algorithm used 8 unbalanced cases, which was later reduced to 6. Sedgewick showed that the insert operation can be implemented in just 46 lines of Java.
In 2008, Sedgewick proposed the left-leaning red–black tree, leveraging Andersson’s idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later the restriction that both children of a node may not be red was added, making the new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code.
## Terminology
The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node).
The black height of a node is the black height of the subtree rooted by it.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node).
The black height of a node is the black height of the subtree rooted by it. In this article, the black height of a null node shall be set to 0, because its subtree is empty as suggested by the example figure, and its tree height is also 0.
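As a small sketch of these definitions (the `Node` type and its field names are assumptions for illustration, not part of the article), black height can be computed recursively, with null subtrees contributing 0:

```java
// Hypothetical node type: one color bit per node; null children act as (black) leaves.
final class Node {
    int key;
    boolean isRed;
    Node left, right;
}

final class BlackHeightSketch {
    // Black height of the subtree rooted at n: the number of black nodes on a path
    // from n down to a null node, counting n itself if it is black, and taking the
    // black height of a null node to be 0 (as in this article). Assuming the subtree
    // satisfies requirement 4, every path gives the same count, so following the
    // left spine suffices.
    static int blackHeight(Node n) {
        if (n == null) return 0;
        return blackHeight(n.left) + (n.isRed ? 0 : 1);
    }
}
```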
## Properties
In addition to the requirements imposed on a binary search tree, the following must be satisfied by a red–black tree:
1. Every node is either red or black.
2. All null nodes are considered black.
3. A red node does not have a red child.
4. Every path from a given node to any of its leaf nodes goes through the same number of black nodes.
5. (Conclusion) If a node N has exactly one child, the child must be red. If the child were black, its leaves would sit at a different black depth than N's null node (which is considered black by rule 2), violating requirement 4.
Some authors, e.g. Cormen et al., add "the root is black" as a fifth requirement, but not Mehlhorn & Sanders or Sedgewick & Wayne.
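As an illustration, a minimal check of requirements 3 and 4 could look as follows (a sketch with an assumed `Node` type and field names, not the article's code; the binary-search-tree ordering of the keys is assumed to be verified separately):

```java
// Hypothetical node type: one color bit per node; null children act as (black) leaves.
final class Node {
    int key;
    boolean isRed;
    Node left, right;
}

final class RedBlackValidation {
    // Returns the black height of the subtree rooted at n, or -1 if the subtree
    // contains a red-violation (requirement 3) or a black-violation (requirement 4).
    static int checkedBlackHeight(Node n) {
        if (n == null) return 0;                               // requirement 2: null nodes are black
        if (n.isRed && ((n.left  != null && n.left.isRed)
                     || (n.right != null && n.right.isRed)))
            return -1;                                         // red node with a red child
        int left  = checkedBlackHeight(n.left);
        int right = checkedBlackHeight(n.right);
        if (left < 0 || right < 0 || left != right)
            return -1;                                         // paths with differing black counts
        return left + (n.isRed ? 0 : 1);
    }

    static boolean satisfiesRedBlackProperties(Node root) {
        return checkedBlackHeight(root) >= 0;
    }
}
```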
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
If the child were black, its leaves would sit at a different black depth than N's null node (which is considered black by rule 2), violating requirement 4.
Some authors, e.g. Cormen et al., add "the root is black" as a fifth requirement, but not Mehlhorn & Sanders or Sedgewick & Wayne. Since the root can always be changed from red to black, this rule has little effect on analysis.
This article also omits it, because it slightly disturbs the recursive algorithms and proofs.
As an example, every perfect binary tree that consists only of black nodes is a red–black tree.
The read-only operations, such as search or tree traversal, do not affect any of the requirements. In contrast, the modifying operations insert and delete easily maintain requirements 1 and 2, but some extra effort must be made with respect to the other requirements, to avoid introducing a violation of requirement 3, called a red-violation, or of requirement 4, called a black-violation.
The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is height-balanced.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is height-balanced. Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height
$$
h
$$
of the tree, this upper bound on the height allows red–black trees to be efficient in the worst case, namely logarithmic in the number
$$
n
$$
of entries, i.e. in
$$
O(\log n)
$$
time (a property which is shared by all self-balancing trees, e.g. the AVL tree or the B-tree, but not by ordinary binary search trees). For a mathematical proof, see the section Proof of bounds.
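In brief, a sketch of the argument (not the article's full proof): by requirement 4 every path from the root to a null node contains the same number
$$
b
$$
of black nodes, and by requirement 3 no two consecutive nodes on such a path are red, so the height satisfies
$$
h \le 2b + 1 .
$$
On the other hand, the black nodes alone force the tree to contain at least
$$
2^b - 1
$$
entries, so
$$
b \le \log_2(n+1) \quad\text{and hence}\quad h \le 2\log_2(n+1) + 1 .
$$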
Red–black trees, like all binary search trees, allow quite efficient sequential access (e.g. in-order traversal, that is: in the order Left–Root–Right) of their elements. But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in
$$
O(\log n)
$$
search time.
## Analogy to 2–3–4 trees
Red–black trees are similar in structure to 2–3–4 trees, which are B-trees of order 4.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in
$$
O(\log n)
$$
search time.
## Analogy to 2–3–4 trees
Red–black trees are similar in structure to 2–3–4 trees, which are B-trees of order 4. In 2–3–4 trees, each node can contain between 1 and 3 values and have between 2 and 4 children. These 2–3–4 nodes correspond to black node – red children groups in red-black trees, as shown in figure 1. It is not a 1-to-1 correspondence, because 3-nodes have two equivalent representations: the red child may lie either to the left or right. The left-leaning red-black tree variant makes this relationship exactly 1-to-1, by only allowing the left child representation. Since every 2–3–4 node has a corresponding black node, invariant 4 of red-black trees is equivalent to saying that the leaves of a 2–3–4 tree all lie at the same level.
Despite structural similarities, operations on red–black trees are more economical than on B-trees. B-trees require management of vectors of variable length, whereas red–black trees are simply binary trees.
## Applications and related data structures
Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
B-trees require management of vectors of variable length, whereas red-black trees are simply binary trees.
## Applications and related data structures
Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time. Not only does this make them valuable in time-sensitive applications such as real-time applications, but it also makes them useful building blocks in other data structures that provide worst-case guarantees. For example, many data structures used in computational geometry are based on red–black trees, and the Completely Fair Scheduler and epoll system call of the Linux kernel use red–black trees.
The AVL tree is another structure supporting
$$
O(\log n)
$$
search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of red–black trees. The worst-case height of AVL trees is 0.720 times the worst-case height of red–black trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. The performance of WAVL trees lies between that of AVL trees and red–black trees.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. The performance of WAVL trees lies between that of AVL trees and red–black trees.
Red–black trees are also particularly valuable in functional programming, where they are one of the most common persistent data structures, used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires
$$
O(\log n)
$$
space for each insertion or deletion, in addition to time.
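A rough illustration of where that space bound comes from (a simplified sketch that omits the rebalancing step; the `PNode` type and names are assumptions, not the article's code): a persistent insert copies only the nodes on the search path and shares every other subtree with the previous version, so each update allocates O(log n) new nodes.

```java
// Immutable nodes: an update never modifies an existing node, it allocates replacements.
final class PNode {
    final int key;
    final boolean isRed;
    final PNode left, right;
    PNode(int key, boolean isRed, PNode left, PNode right) {
        this.key = key; this.isRed = isRed; this.left = left; this.right = right;
    }
}

final class PersistentInsertSketch {
    // Returns the root of a new version containing `key`; the old root stays valid.
    // Only the nodes on the search path are copied; all other subtrees are shared.
    // Recoloring and rotations are omitted here for brevity.
    static PNode insert(PNode n, int key) {
        if (n == null) return new PNode(key, true, null, null);   // new nodes start red
        if (key < n.key)
            return new PNode(n.key, n.isRed, insert(n.left, key), n.right);
        if (key > n.key)
            return new PNode(n.key, n.isRed, n.left, insert(n.right, key));
        return n;                                                  // key present: share as-is
    }
}
```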
For every 2–3–4 tree, there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–3–4 trees are also equivalent to color-flipping and rotations in red–black trees. This makes 2–3–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–3–4 trees just before red–black trees, even though 2–3–4 trees are not often used in practice.
In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree by eliminating a previously unspecified degree of freedom in the implementation.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
This makes 2–3–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–3–4 trees just before red–black trees, even though 2–3–4 trees are not often used in practice.
In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree by eliminating a previously unspecified degree of freedom in the implementation. The LLRB maintains an additional invariant that all red links must lean left except during inserts and deletes. Red–black trees can be made isometric to either 2–3 trees, or 2–3–4 trees, for any sequence of operations. The 2–3–4 tree isometry was described in 1978 by Sedgewick. With 2–3–4 trees, the isometry is resolved by a "color flip", corresponding to a split, in which the red color of two child nodes leaves the children and moves to the parent node.
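That color flip takes only a few lines; as a sketch (assumed `Node` type and fields, not the article's or Sedgewick's code), for a black node whose two children are both red:

```java
final class Node {
    int key;
    boolean isRed;
    Node left, right;
}

final class ColorFlipSketch {
    // Corresponds to splitting a temporary 4-node in the 2–3–4 view: the red color
    // of the two children leaves the children and moves to the parent.
    // Assumes n is black and both of its children exist and are red.
    static void colorFlip(Node n) {
        n.isRed = true;        // the parent is pushed up into its own parent's group
        n.left.isRed = false;  // both children become black
        n.right.isRed = false;
    }
}
```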
The original description of the tango tree, a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure.
As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes, a red–black tree is used.
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
The original description of the tango tree, a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure.
As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes, a red–black tree is used. This improves the time complexity of searching for such an element from
$$
O(m)
$$
to
$$
O(\log m)
$$
where
$$
m
$$
is the number of elements with colliding hashes.
## Implementation
The read-only operations, such as search or tree traversal, on a red–black tree require no modification from those used for binary search trees, because every red–black tree is a special case of a simple binary search tree. However, the immediate result of an insertion or removal may violate the properties of a red–black tree; restoring them is called rebalancing, and it is what makes red–black trees self-balancing.
Rebalancing (i.e. color changes) has a worst-case time complexity of
$$
O(\log n)
$$
and an average of
$$
O(1)
$$
, though these are very quick in practice. Additionally, rebalancing takes no more than three tree rotations (two for insertion).
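Each such rotation relinks only a constant number of pointers; as a minimal sketch of a left rotation (assumed `Node` type; parent pointers and color updates omitted, not the article's code):

```java
final class Node {
    int key;
    boolean isRed;
    Node left, right;
}

final class RotationSketch {
    // Left rotation around n: n's right child x becomes the new subtree root,
    // n becomes x's left child, and x's former left subtree becomes n's right subtree.
    // The in-order sequence of keys is unchanged. Returns the new subtree root,
    // which the caller must link back into the parent (or make the tree root).
    static Node rotateLeft(Node n) {
        Node x = n.right;      // assumed non-null when a left rotation is applied
        n.right = x.left;
        x.left = n;
        return x;
    }
}
```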
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|