Leaf language is a method in computational complexity theory for characterizing a complexity class by formalizing what it means for a machine to "accept" an input.[1]
Complexity classes are typically defined in terms of a polynomial-time nondeterministic Turing machine (NTM). These machines possess multiple computational paths, and the outcomes of these paths determine whether an input is accepted or rejected.[1] Traditionally, an NTM accepts an input if at least one path accepts it, and rejects it only if all paths reject it. In contrast, a co-nondeterministic Turing machine (co-NTM) accepts an input only if all paths accept it, and rejects it if any path rejects it. Moreover, more complex notions of acceptance can also be defined.
To formalize the characterization of a complexity class, one can examine the formal language associated with each acceptance condition. This involves assuming an ordered computation tree and reading the accept/reject outcomes of its leaves, left to right, as a string. An NTM accepts if the leaf string is in the language 0*1{0, 1}*, and rejects if the leaf string is in the language 0*.[2]
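As an illustrative sketch (not part of the source article), the two leaf languages above can be checked with regular expressions: acceptance means some leaf accepts (the leaf string contains a 1), and rejection means every leaf rejects (the leaf string is all 0s).

    import re

    # Leaf-language pair characterizing NP: accept if the leaf string lies in
    # 0*1{0,1}*, reject if it lies in 0*. Names here are illustrative.
    ACCEPT = re.compile(r"0*1[01]*")
    REJECT = re.compile(r"0*")

    def classify(leaf_string):
        if ACCEPT.fullmatch(leaf_string):
            return "accept"   # at least one computation path accepted
        if REJECT.fullmatch(leaf_string):
            return "reject"   # every computation path rejected
        # For this pair every binary string is covered, but in general the
        # two leaf languages of a pair need not be exhaustive.
        return "undefined"

    print(classify("00100"))  # accept
    print(classify("00000"))  # reject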
Source: https://en.wikipedia.org/wiki/Leaf_language

The limits of computation are governed by a number of different factors. In particular, there are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy.
Several methods have been proposed for producing computing devices or data storage devices that approach physical and practical limits:
In the field of theoretical computer science, the computability and complexity of computational problems are often sought after. Computability theory describes the degree to which problems are computable, whereas complexity theory describes the asymptotic degree of resource consumption. Computational problems are therefore confined into complexity classes. The arithmetical hierarchy and polynomial hierarchy classify the degree to which problems are respectively computable and computable in polynomial time. For instance, the level $\Sigma_0^0 = \Pi_0^0 = \Delta_0^0$ of the arithmetical hierarchy classifies computable partial functions. Moreover, this hierarchy is strict: every other class in the arithmetical hierarchy classifies strictly uncomputable functions.
Many limits derived in terms of physical constants and abstract models of computation in computer science are loose.[12] Very few known limits directly obstruct leading-edge technologies, but many engineering obstacles currently cannot be explained by closed-form limits.
Source: https://en.wikipedia.org/wiki/Limits_of_computation

This is a list of complexity classes in computational complexity theory. For other computational and complexity subjects, see list of computability and complexity topics.
Many of these classes have a 'co' partner which consists of the complements of all languages in the original class. For example, if a language L is in NP then the complement of L is in co-NP. (This does not mean that co-NP is the complement of NP; there are languages which are known to be in both, and other languages which are known to be in neither.)
"The hardest problems" of a class refer to problems which belong to the class such that every other problem of that class can be reduced to it.
Source: https://en.wikipedia.org/wiki/List_of_complexity_classes

This is a list of computability and complexity topics, by Wikipedia page.
Computability theory is the part of the theory of computation that deals with what can be computed, in principle. Computational complexity theory deals with how hard computations are, in quantitative terms, both from above (algorithms whose worst-case complexity, in terms of use of computing resources, can be estimated) and from below (proofs that no procedure to carry out some task can be very fast).
For more abstract foundational matters, see the list of mathematical logic topics. See also the list of algorithms and the list of algorithm general topics.
See the list of complexity classes.
Source: https://en.wikipedia.org/wiki/List_of_computability_and_complexity_topics

This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known or when experts in the field disagree about proposed solutions.
The graph isomorphism problem involves determining whether two finite graphs are isomorphic, meaning there is a one-to-one correspondence between their vertices and edges that preserves adjacency. While the problem is known to be in NP, it is not known whether it is NP-complete or solvable in polynomial time. This uncertainty places it in a unique complexity class, making it a significant open problem in computer science.[2]
Source: https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
Under the assumption that P ≠ NP, there exist many natural problems that require super-polynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small, then such problems can still be considered "tractable" despite their traditional classification as "intractable".
The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (so in particular super-polynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (FPT) algorithm, because the problem can be solved efficiently (i.e., in polynomial time) for constant values of the fixed parameter.
Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an FPT algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability.
Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential only in k, and not in the input size.
In this way, parameterized complexity can be seen as two-dimensional complexity theory. This concept is formalized as follows:
For example, there is an algorithm that solves the vertex cover problem in $O(kn + 1.274^k)$ time,[1] where n is the number of vertices and k is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
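A minimal sketch of a simpler FPT algorithm for vertex cover (my own illustration, not the $O(kn + 1.274^k)$ algorithm cited above): pick any uncovered edge and branch on which endpoint joins the cover, giving a search tree of depth at most k and a running time exponential only in k.

    # Bounded search tree for vertex cover: O(2^k * poly(n)) time.
    def vertex_cover(edges, k):
        """Return a vertex cover of size <= k as a set, or None if none exists."""
        if not edges:
            return set()
        if k == 0:
            return None  # uncovered edges remain but the budget is spent
        u, v = edges[0]
        # Any cover must contain u or v; branch on both choices.
        for chosen in (u, v):
            rest = [(a, b) for (a, b) in edges if chosen not in (a, b)]
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {chosen}
        return None

    print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))  # e.g. {1, 3}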
FPT contains the fixed-parameter tractable problems, which are those that can be solved in time $f(k)\cdot|x|^{O(1)}$ for some computable function f. Typically, this function is thought of as single exponential, such as $2^{O(k)}$, but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form $f(n,k)$, such as $k^n$.
The class FPL (fixed-parameter linear) is the class of problems solvable in time $f(k)\cdot|x|$ for some computable function f.[2] FPL is thus a subclass of FPT. An example is the Boolean satisfiability problem, parameterised by the number of variables. A given formula of size m with k variables can be checked by brute force in time $O(2^k m)$. A vertex cover of size k in a graph of order n can be found in time $O(2^k n)$, so the vertex cover problem is also in FPL.
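A sketch of the brute-force SAT check described above (illustrative, not from the source; the clause encoding as signed 1-based integers is my own choice): all $2^k$ assignments are tried, and each is checked in time linear in the formula size.

    from itertools import product

    # Brute-force SAT in O(2^k * m): try all 2^k assignments to k variables.
    # A clause is a list of signed variable indices (negative = negated).
    def sat_brute_force(num_vars, clauses):
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                return bits  # satisfying assignment found
        return None

    # (x1 or not x2) and (x2 or x3)
    print(sat_brute_force(3, [[1, -2], [2, 3]]))  # (False, False, True)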
An example of a problem that is thought not to be in FPT is graph coloring parameterised by the number of colors. It is known that 3-coloring is NP-hard, and an algorithm for graph k-coloring in time $f(k)n^{O(1)}$ for $k = 3$ would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then P = NP.
There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by $f(k) + |x|^{O(1)}$. Also, a parameterised problem is in FPT if it has a so-called kernel. Kernelization is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but has a size that is bounded by a function in the parameter.
FPT is closed under a parameterised notion of reductions called fpt-reductions. Such reductions transform an instance $(x,k)$ of some problem into an equivalent instance $(x',k')$ of another problem (with $k' \leq g(k)$) and can be computed in time $f(k)\cdot p(|x|)$ where $p$ is a polynomial.
Obviously, FPT contains all polynomial-time computable problems. Moreover, it contains all optimisation problems in NP that allow an efficient polynomial-time approximation scheme (EPTAS).
The W hierarchy is a collection of computational complexity classes. A parameterized problem is in the class W[i] if every instance $(x,k)$ can be transformed (in fpt-time) to a combinatorial circuit that has weft at most i, such that $(x,k) \in L$ if and only if there is a satisfying assignment to the inputs that assigns 1 to exactly k inputs. The weft is the largest number of logical units with fan-in greater than two on any path from an input to the output. The total number of logical units on the paths (known as depth) must be limited by a constant that holds for all instances of the problem.
Note that $\mathrm{FPT} = W[0]$ and $W[i] \subseteq W[j]$ for all $i \leq j$. The classes in the W hierarchy are also closed under fpt-reduction.
A complete problem for W[i] is Weighted i-Normalized Satisfiability:[3] given a Boolean formula written as an AND of ORs of ANDs of ... of possibly negated variables, with $i+1$ layers of ANDs or ORs (and i alternations between AND and OR), can it be satisfied by setting exactly k variables to 1?
Many natural computational problems occupy the lower levels, W[1] and W[2].
Examples of W[1]-complete problems include the clique problem and the independent set problem, parameterized by the size of the solution.
Examples of W[2]-complete problems include the dominating set problem, parameterized by the size of the solution.
$W[t]$ can be defined using the family of Weighted Weft-t-Depth-d SAT problems for $d \geq t$: $W[t,d]$ is the class of parameterized problems that fpt-reduce to this problem, and $W[t] = \bigcup_{d \geq t} W[t,d]$.
Here, Weighted Weft-t-Depth-d SAT is the following problem:
It can be shown that for $t \geq 2$ the problem Weighted t-Normalize SAT is complete for $W[t]$ under fpt-reductions.[4] Here, Weighted t-Normalize SAT is the following problem:
W[P] is the class of problems that can be decided by a nondeterministic $h(k)\cdot|x|^{O(1)}$-time Turing machine that makes at most $O(f(k)\cdot\log n)$ nondeterministic choices in the computation on $(x,k)$ (a k-restricted Turing machine) (Flum & Grohe 2006).
It is known that FPT is contained in W[P], and the inclusion is believed to be strict. However, resolving this issue would imply a solution to the P versus NP problem.
Other connections to unparameterised computational complexity are that FPT equals W[P] if and only if circuit satisfiability can be decided in time $\exp(o(n))m^{O(1)}$, or if and only if there is a computable, nondecreasing, unbounded function f such that all languages recognised by a nondeterministic polynomial-time Turing machine using $f(n)\log n$ nondeterministic choices are in P.
W[P] can be loosely thought of as the class of problems where we have a set S of n items, and we want to find a subset $T \subset S$ of size k such that a certain property holds. We can encode a choice as a list of k integers, stored in binary. Since the highest any of these numbers can be is n, $\lceil \log_2 n \rceil$ bits are needed for each number. Therefore $k \cdot \lceil \log_2 n \rceil$ total bits are needed to encode a choice. Therefore we can select a subset $T \subset S$ with $O(k \cdot \log n)$ nondeterministic choices.
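A small sketch (illustration only) of the encoding argument above: a choice of k item indices out of n is written as k binary numbers of $\lceil \log_2 n \rceil$ bits each.

    from math import ceil, log2

    # Encode a choice of k items out of n as k fixed-width binary numbers.
    def encode_choice(indices, n):
        bits_per_index = ceil(log2(n))
        return "".join(format(i, f"0{bits_per_index}b") for i in indices)

    # Choosing items 3, 5, 12 out of n = 16 costs 3 * 4 = 12 bits total.
    print(encode_choice([3, 5, 12], 16))  # '001101011100'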
XP is the class of parameterized problems that can be solved in time $n^{f(k)}$ for some computable function f. These problems are called slicewise polynomial, in the sense that each "slice" of fixed k has a polynomial algorithm, although possibly with a different exponent for each k. Compare this with FPT, which merely allows a different constant prefactor for each value of k. XP contains FPT, and it is known that this containment is strict by diagonalization.
para-NP is the class of parameterized problems that can be solved by a nondeterministic algorithm in time $f(k)\cdot|x|^{O(1)}$ for some computable function f. It is known that FPT = para-NP if and only if P = NP.[5]
A problem is para-NP-hard if it is NP-hard already for a constant value of the parameter. That is, there is a "slice" of fixed k that is NP-hard. A parameterized problem that is para-NP-hard cannot belong to the class XP unless P = NP. A classic example of a para-NP-hard parameterized problem is graph coloring, parameterized by the number k of colors, which is already NP-hard for $k = 3$ (see Graph coloring § Computational complexity).
The A hierarchy is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the polynomial-time hierarchy from classical complexity. It is known that A[1] = W[1] holds.
Source: https://en.wikipedia.org/wiki/Parameterized_complexity

In logic and theoretical computer science, and specifically proof theory and computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the Frege system, the usual propositional calculus, does not admit polynomial-size proofs of all tautologies. Here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.
Systematic study of proof complexity began with the work of Stephen Cook and Robert Reckhow (1979), who provided the basic definition of a propositional proof system from the perspective of computational complexity. Specifically, Cook and Reckhow observed that proving proof-size lower bounds on stronger and stronger propositional proof systems can be viewed as a step towards separating NP from coNP (and thus P from NP), since the existence of a propositional proof system that admits polynomial-size proofs for all tautologies is equivalent to NP = coNP.
Contemporary proof complexity research draws ideas and methods from many areas in computational complexity, algorithms, and mathematics. Since many important algorithms and algorithmic techniques can be cast as proof search algorithms for certain proof systems, proving lower bounds on proof sizes in these systems implies run-time lower bounds on the corresponding algorithms. This connects proof complexity to more applied areas such as SAT solving.
Mathematical logic can also serve as a framework to study propositional proof sizes. First-order theories and, in particular, weak fragments of Peano arithmetic, which come under the name of bounded arithmetic, serve as uniform versions of propositional proof systems and provide further background for interpreting short propositional proofs in terms of various levels of feasible reasoning.
A propositional proof system is given as a proof-verification algorithm P(A, x) with two inputs. If P accepts the pair (A, x) we say that x is a P-proof of A. P is required to run in polynomial time, and moreover, it must hold that A has a P-proof if and only if A is a tautology.
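As a toy illustration of this definition (my own, not from the source), consider the trivial "truth-table" proof system: a proof of a tautology is the full list of assignments, each of which must satisfy the formula. The verifier runs in time polynomial in its input, but proofs have size $2^n$, so the system is not polynomially bounded.

    from itertools import product

    # Toy "truth-table" proof system P(A, x): A is a formula given as a Python
    # callable over num_vars Boolean arguments; the proof x is the full list of
    # assignments. Verification is polynomial in |x|, but |x| = 2^n.
    def verify(formula, num_vars, proof):
        expected = set(product([False, True], repeat=num_vars))
        return set(proof) == expected and all(formula(*row) for row in proof)

    tautology = lambda x: x or not x   # A = x or not x
    proof = [(False,), (True,)]
    print(verify(tautology, 1, proof))  # True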
Examples of propositional proof systems include sequent calculus, resolution, cutting planes, and Frege systems. Strong mathematical theories such as ZFC induce propositional proof systems as well: a proof of a tautology $\tau$ in a propositional interpretation of ZFC is a ZFC-proof of a formalized statement '$\tau$ is a tautology'.
Proof complexity measures the efficiency of a proof system usually in terms of the minimal size of proofs possible in the system for a given tautology. The size of a proof (respectively, formula) is the number of symbols needed to represent the proof (respectively, formula). A propositional proof system P is polynomially bounded if there exists a constant $c$ such that every tautology of size $n$ has a P-proof of size $(n+c)^c$. A central question of proof complexity is to understand whether tautologies admit polynomial-size proofs. Formally,
Problem (NP vs. coNP)
Does there exist a polynomially bounded propositional proof system?
Cook and Reckhow (1979) observed that there exists a polynomially bounded proof system if and only if NP = coNP. Therefore, proving that specific proof systems do not admit polynomial-size proofs can be seen as partial progress towards separating NP and coNP (and thus P and NP).[1]
Proof complexity compares the strength of proof systems using the notion of simulation. A proof system P p-simulates a proof system Q if there is a polynomial-time function that, given a Q-proof of a tautology, outputs a P-proof of the same tautology. If P p-simulates Q and Q p-simulates P, the proof systems P and Q are p-equivalent. There is also a weaker notion of simulation: a proof system P simulates a proof system Q if there is a polynomial p such that for every Q-proof x of a tautology A, there is a P-proof y of A such that the length of y, |y|, is at most p(|x|).
For example, sequent calculus is p-equivalent to (every) Frege system.[2]
A proof system is p-optimal if it p-simulates all other proof systems, and it is optimal if it simulates all other proof systems. It is an open problem whether such proof systems exist:
Problem (Optimality)
Does there exist a p-optimal or optimal propositional proof system?
Every propositional proof system P can be simulated by Extended Frege extended with axioms postulating the soundness of P.[3] The existence of an optimal (respectively, p-optimal) proof system is known to follow from the assumption that NE = coNE (respectively, E = NE).[4]
For many weak proof systems it is known that they do not simulate certain stronger systems (see below). However, the question remains open if the notion of simulation is relaxed. For example, it is open whether Resolution effectively polynomially simulates Extended Frege.[5]
An important question in proof complexity is to understand the complexity of searching for proofs in proof systems.
Problem (Automatability)
Are there efficient algorithms searching for proofs in standard proof systems such as Resolution or the Frege system?
The question can be formalized by the notion of automatability (also known as automatizability).[6]
A proof system P is automatable if there is an algorithm that, given a tautology $\tau$, outputs a P-proof of $\tau$ in time polynomial in the size of $\tau$ and the length of the shortest P-proof of $\tau$. Note that if a proof system is not polynomially bounded, it can still be automatable. A proof system P is weakly automatable if there is a proof system R and an algorithm that, given a tautology $\tau$, outputs an R-proof of $\tau$ in time polynomial in the size of $\tau$ and the length of the shortest P-proof of $\tau$.
Many proof systems of interest are believed to be non-automatable. However, currently only conditional negative results are known.
It is not known if the weak automatability of Resolution would break any standard complexity-theoretic hardness assumptions.
On the positive side,
Propositional proof systems can be interpreted as nonuniform equivalents of theories of higher order. The equivalence is most often studied in the context of theories of bounded arithmetic. For example, the Extended Frege system corresponds to Cook's theory $\mathrm{PV}_1$ formalizing polynomial-time reasoning, and the Frege system corresponds to the theory $\mathrm{VNC}^1$ formalizing $\mathsf{NC}^1$ reasoning.
The correspondence was introduced by Stephen Cook (1975), who showed that coNP theorems, formally $\Pi_1^b$ formulas, of the theory $\mathrm{PV}_1$ translate to sequences of tautologies with polynomial-size proofs in Extended Frege. Moreover, Extended Frege is the weakest such system: if another proof system P has this property, then P simulates Extended Frege.[19]
An alternative translation between second-order statements and propositional formulas, given by Jeff Paris and Alex Wilkie (1985), has been more practical for capturing subsystems of Extended Frege such as Frege or constant-depth Frege.[20][21]
While the above-mentioned correspondence says that proofs in a theory translate to sequences of short proofs in the corresponding proof system, a form of the opposite implication holds as well. It is possible to derive lower bounds on the size of proofs in a proof system P by constructing suitable models of a theory T corresponding to the system P. This makes it possible to prove complexity lower bounds via model-theoretic constructions, an approach known as Ajtai's method.[22]
Propositional proof systems can be interpreted as nondeterministic algorithms for recognizing tautologies. Proving a superpolynomial lower bound on a proof system P thus rules out the existence of a polynomial-time algorithm for SAT based on P. For example, runs of the DPLL algorithm on unsatisfiable instances correspond to tree-like Resolution refutations. Therefore, exponential lower bounds for tree-like Resolution (see below) rule out the existence of efficient DPLL algorithms for SAT. Similarly, exponential Resolution lower bounds imply that SAT solvers based on Resolution, such as CDCL algorithms, cannot solve SAT efficiently in the worst case.
Proving lower bounds on lengths of propositional proofs is generally very difficult. Nevertheless, several methods for proving lower bounds for weak proof systems have been discovered.
It is a long-standing open problem to derive a nontrivial lower bound for the Frege system.
Consider a tautology of the form $A(x,y) \rightarrow B(y,z)$. The tautology is true for every choice of $y$, and after fixing $y$ the evaluations of $A$ and $B$ are independent because they are defined on disjoint sets of variables. This means that it is possible to define an interpolant circuit $C(y)$ such that both $A(x,y) \rightarrow C(y)$ and $C(y) \rightarrow B(y,z)$ hold. The interpolant circuit decides either if $A(x,y)$ is false or if $B(y,z)$ is true, by only considering $y$. The nature of the interpolant circuit can be arbitrary. Nevertheless, it is possible to use a proof of the initial tautology $A(x,y) \rightarrow B(y,z)$ as a hint on how to construct $C$. A proof system P is said to have feasible interpolation if the interpolant $C(y)$ is efficiently computable from any proof of the tautology $A(x,y) \rightarrow B(y,z)$ in P. The efficiency is measured with respect to the length of the proof: it is easier to compute interpolants for longer proofs, so this property seems to be anti-monotone in the strength of the proof system.
The following three statements cannot be simultaneously true: (a) $A(x,y) \rightarrow B(y,z)$ has a short proof in some proof system; (b) that proof system has feasible interpolation; (c) the interpolant circuit solves a computationally hard problem. It is clear that (a) and (b) imply that there is a small interpolant circuit, which contradicts (c). Such a relation allows the conversion of proof-length upper bounds into lower bounds on computations, and dually allows efficient interpolation algorithms to be turned into lower bounds on proof length.
Some proof systems such as Resolution and Cutting Planes admit feasible interpolation or its variants.[28][29]
Feasible interpolation can be seen as a weak form of automatability. In fact, for many proof systems, such as Extended Frege, feasible interpolation is equivalent to weak automatability. Specifically, many proof systems P are able to prove their own soundness, which is a tautology $\mathrm{Ref}_P(\pi,\phi,x)$ stating that 'if $\pi$ is a P-proof of a formula $\phi(x)$ then $\phi(x)$ holds'. Here, $\pi, \phi, x$ are encoded by free variables. Moreover, it is possible to generate P-proofs of $\mathrm{Ref}_P(\pi,\phi,x)$ in polynomial time given the lengths of $\pi$ and $\phi$. Therefore, an efficient interpolant resulting from short P-proofs of the soundness of P would decide whether a given formula $\phi$ admits a short P-proof $\pi$. Such an interpolant can be used to define a proof system R witnessing that P is weakly automatable.[30] On the other hand, weak automatability of a proof system P implies that P admits feasible interpolation. However, if a proof system P does not efficiently prove its own soundness, then it might not be weakly automatable even if it admits feasible interpolation.
Many non-automatability results provide evidence against feasible interpolation in the respective systems.
The idea of comparing the size of proofs can be used for any automated reasoning procedure that generates a proof. Some research has been done about the size of proofs for propositional non-classical logics, in particular, intuitionistic, modal, and non-monotonic logics.
Hrubeš (2007–2009) proved exponential lower bounds on the size of proofs in the Extended Frege system in some modal logics and in intuitionistic logic, using a version of monotone feasible interpolation.[34][35][36]
Source: https://en.wikipedia.org/wiki/Proof_complexity

Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical (i.e., non-quantum) complexity classes.
Two important quantum complexity classes are BQP and QMA.
A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
One of the reasons quantum complexity theory is studied is its implications for the modern Church–Turing thesis. In short, the modern Church–Turing thesis states that any computational model can be simulated in polynomial time with a probabilistic Turing machine.[1][2] However, questions around the thesis arise in the context of quantum computing: it is unclear whether it holds for the quantum computation model, and there is much evidence that it does not. It may not be possible for a probabilistic Turing machine to simulate quantum computation models in polynomial time.[1]
Both the quantum computational complexity of functions and the classical computational complexity of functions are often expressed with asymptotic notation. Some common forms are $O(T(n))$, $\Omega(T(n))$, and $\Theta(T(n))$. $O(T(n))$ expresses that something is bounded above by $cT(n)$, where $c > 0$ is a constant and $T(n)$ is a function of $n$; $\Omega(T(n))$ expresses that something is bounded below by $cT(n)$, where $c > 0$ is a constant and $T(n)$ is a function of $n$; and $\Theta(T(n))$ expresses both $O(T(n))$ and $\Omega(T(n))$.[3] These notations also have their own names: $O(T(n))$ is called Big O notation, $\Omega(T(n))$ is called Big Omega notation, and $\Theta(T(n))$ is called Big Theta notation.
The important complexity classes P, BPP, BQP, PP, and PSPACE can be compared based on promise problems. A promise problem is a decision problem which has an input assumed to be selected from the set of all possible input strings. A promise problem is a pair $A = (A_{\text{yes}}, A_{\text{no}})$, where $A_{\text{yes}}$ is the set of yes instances and $A_{\text{no}}$ is the set of no instances, and the intersection of these sets is empty: $A_{\text{yes}} \cap A_{\text{no}} = \varnothing$. All of the previous complexity classes contain promise problems.[4]
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP ("bounded error, quantum, polynomial time"). More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with error probability of at most 1/3.
As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be efficiently solved by probabilistic Turing machines with bounded error.[6] It is known that $\mathsf{BPP} \subseteq \mathsf{BQP}$ and widely suspected, but not proven, that $\mathsf{BQP} \not\subseteq \mathsf{BPP}$, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.[7] BQP is a subset of PP.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that $\mathsf{P} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE}$; that is, the class of problems that can be efficiently solved by quantum computers includes all problems that can be efficiently solved by deterministic classical computers but does not include any problems that cannot be solved by classical computers with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems are in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that $\mathsf{NP} \not\subseteq \mathsf{BQP}$; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if any NP-complete problem were in BQP, then it follows from NP-hardness that all problems in NP are in BQP).[8]
The relationship of BQP to the essential classical complexity classes can be summarized as $\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE}$.
It is also known that BQP is contained in the complexity class $\#\mathsf{P}$ (or more precisely in the associated class of decision problems $\mathsf{P}^{\#\mathsf{P}}$),[8] which is a subset of PSPACE.
There is no known way to efficiently simulate a quantum computational model with a classical computer. This means that a classical computer cannot simulate a quantum computational model in polynomial time. However, a quantum circuit of $S(n)$ qubits with $T(n)$ quantum gates can be simulated by a classical circuit with $O(2^{S(n)}T(n)^3)$ classical gates.[3] This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. In order to do this, first the amplitudes associated with the $S(n)$ qubits must be accounted for. Each of the states of the $S(n)$ qubits can be described by a two-dimensional complex vector, or state vector. These state vectors can also be described as a linear combination of their component vectors, with coefficients called amplitudes. These amplitudes are complex numbers which are normalized to one, meaning the sum of the squares of the absolute values of the amplitudes must be one.[3] The entries of the state vector are these amplitudes, and the entry each amplitude occupies corresponds to the non-zero component of the component vector of which it is the coefficient in the linear combination description. As an equation this is described as $\alpha \begin{bmatrix}1\\0\end{bmatrix} + \beta \begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}\alpha \\ \beta \end{bmatrix}$ or $\alpha|0\rangle + \beta|1\rangle = \begin{bmatrix}\alpha \\ \beta \end{bmatrix}$ using Dirac notation. The state of the entire $S(n)$-qubit system can be described by a single state vector, namely the tensor product of the state vectors describing the individual qubits in the system. The result of the tensor products of the $S(n)$ qubits is a single state vector with $2^{S(n)}$ dimensions, whose entries are the amplitudes associated with each basis state or component vector. Therefore, $2^{S(n)}$ amplitudes must be accounted for with a $2^{S(n)}$-dimensional complex vector, the state vector for the $S(n)$-qubit system.[9] In order to obtain an upper bound for the number of gates required to simulate a quantum circuit, we need a sufficient upper bound for the amount of data used to specify the information about each of the $2^{S(n)}$ amplitudes. To do this, $O(T(n))$ bits of precision are sufficient for encoding each amplitude.[3] So it takes $O(2^{S(n)}T(n))$ classical bits to account for the state vector of the $S(n)$-qubit system. Next, the application of the $T(n)$ quantum gates on the $2^{S(n)}$ amplitudes must be accounted for. The quantum gates can be represented as $2^{S(n)} \times 2^{S(n)}$ sparse matrices.[3] So to account for the application of each of the $T(n)$ quantum gates, the state vector must be multiplied by a $2^{S(n)} \times 2^{S(n)}$ sparse matrix for each of the $T(n)$ quantum gates.
Every time the state vector is multiplied by a $2^{S(n)} \times 2^{S(n)}$ sparse matrix, $O(2^{S(n)})$ arithmetic operations must be performed.[3] Therefore, there are $O(2^{S(n)}T(n)^2)$ bit operations for every quantum gate applied to the state vector, so $O(2^{S(n)}T(n)^2)$ classical gates are needed to simulate an $S(n)$-qubit circuit with just one quantum gate. Therefore, $O(2^{S(n)}T(n)^3)$ classical gates are needed to simulate a quantum circuit of $S(n)$ qubits with $T(n)$ quantum gates.[3] While there is no known way to efficiently simulate a quantum computer with a classical computer, it is possible to efficiently simulate a classical computer with a quantum computer. This is evident from the fact that $\mathsf{BPP} \subseteq \mathsf{BQP}$.[4]
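A minimal sketch of this bookkeeping in Python with NumPy (my own illustration, not from the source): the state of S qubits is a $2^S$-dimensional complex vector, and each gate application is one matrix-vector multiplication.

    import numpy as np

    # State-vector simulation: S qubits -> 2^S complex amplitudes.
    S = 2
    state = np.zeros(2**S, dtype=complex)
    state[0] = 1.0  # start in |00>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)

    # Applying H to the first qubit multiplies the state vector by the
    # 2^S x 2^S matrix H tensor I.
    state = np.kron(H, I) @ state

    print(state)                       # (|00> + |10>)/sqrt(2)
    print(np.sum(np.abs(state)**2))    # amplitudes stay normalized: 1.0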
One major advantage of using a quantum computational system instead of a classical one is that a quantum computer may be able to give a polynomial-time algorithm for some problem for which no classical polynomial-time algorithm exists; more importantly, a quantum computer may significantly decrease the calculation time for a problem that a classical computer can already solve efficiently. Quantum query complexity refers to how many queries to the object associated with a particular problem (for example, a graph or an oracle) are required to solve the problem. Before we delve further into query complexity, let us consider some background regarding graphing solutions to particular problems, and the queries associated with these solutions.
One type of problem that quantum computing can make easier to solve is graph problems. If we are to consider the number of queries to a graph that are required to solve a given problem, let us first consider the most common type of graph, the directed graph, associated with this type of computational modelling. In brief, directed graphs are graphs where all edges between vertices are unidirectional. Directed graphs are formally defined as $G = (N, E)$, where N is the set of vertices, or nodes, and E is the set of edges.[10]
When considering quantum computation of the solution to directed graph problems, there are two important query models to understand. First, there is the adjacency matrix model, where the graph of the solution is given by the adjacency matrix $M \in \{0,1\}^{n \times n}$, with $M_{ij} = 1$ if and only if $(v_i, v_j) \in E$.[11]
Next, there is the slightly more complicated adjacency array model, built on the idea of adjacency lists, where every vertex $u$ is associated with an array of neighboring vertices, such that $f_i : [d_i^+] \rightarrow [n]$ for the out-degrees of vertices $d_1^+, \ldots, d_n^+$, where $n$ is the minimum value of the upper bound of this model, and $f_i(j)$ returns the $j$-th vertex adjacent to $i$. Additionally, the adjacency array model satisfies the simple graph condition $\forall i \in [n],\ j, j' \in [k],\ j \neq j' : f_i(j) \neq f_i(j')$, meaning that there is only one edge between any pair of vertices, and the number of edges is minimized throughout the entire model (see the spanning tree model for more background).[11]
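A sketch of the two query models in code (my own encoding; the class and method names are illustrative): the matrix oracle answers "is (u, v) an edge?", while the array oracle answers "what is the j-th out-neighbor of u?".

    # Two query models for a directed graph on n vertices.
    class AdjacencyMatrixOracle:
        def __init__(self, n, edges):
            self.M = [[0] * n for _ in range(n)]
            for u, v in edges:
                self.M[u][v] = 1

        def query(self, u, v):
            return self.M[u][v]  # one query: "is (u, v) an edge?"

    class AdjacencyArrayOracle:
        def __init__(self, n, edges):
            self.neighbors = [[] for _ in range(n)]
            for u, v in edges:
                self.neighbors[u].append(v)

        def query(self, u, j):
            return self.neighbors[u][j]  # one query: "j-th out-neighbor of u"

    g = AdjacencyArrayOracle(3, [(0, 1), (0, 2), (1, 2)])
    print(g.query(0, 1))  # 2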
Both of the above models can be used to determine the query complexity of particular types of graphing problems, including the connectivity, strong connectivity (a directed graph version of the connectivity model), minimum spanning tree, and single source shortest path models of graphs. An important caveat to consider is that the quantum complexity of a particular type of graphing problem can change based on the query model (namely either matrix or array) used to determine the solution. The following table showing the quantum query complexities of these types of graphing problems illustrates this point well.
Notice the discrepancy between the quantum query complexities associated with a particular type of problem, depending on which query model was used to determine the complexity. For example, when the matrix model is used, the quantum complexity of the connectivity model in Big O notation is $\Theta(n^{3/2})$, but when the array model is used, the complexity is $\Theta(n)$. Additionally, for brevity, we use the shorthand $m$ in certain cases, where $m = \Theta(n^2)$.[11] The important implication here is that the efficiency of the algorithm used to solve a graphing problem is dependent on the type of query model used to model the graph.
In the query complexity model, the input can also be given as an oracle (black box). The algorithm gets information about the input only by querying the oracle. The algorithm starts in some fixed quantum state and the state evolves as it queries the oracle.
Similar to the case of graphing problems, the quantum query complexity of a black-box problem is the smallest number of queries to the oracle that are required in order to calculate the function. This makes the quantum query complexity a lower bound on the overall time complexity of a function.
An example depicting the power of quantum computing is Grover's algorithm for searching unstructured databases. The algorithm's quantum query complexity is $O(\sqrt{N})$, a quadratic improvement over the best possible classical query complexity $O(N)$, which is a linear search. Grover's algorithm is asymptotically optimal; in fact, it uses at most a $1 + o(1)$ fraction more queries than the best possible algorithm.[12]
The Deutsch–Jozsa algorithm is a quantum algorithm designed to solve a toy problem with a smaller query complexity than is possible with a classical algorithm. The toy problem asks whether a function $f : \{0,1\}^n \rightarrow \{0,1\}$ is constant or balanced, those being the only two possibilities.[2] The only way to evaluate the function $f$ is to consult a black box or oracle. A classical deterministic algorithm will have to check more than half of the possible inputs to be sure whether the function is constant or balanced. With $2^n$ possible inputs, the query complexity of the most efficient classical deterministic algorithm is $2^{n-1} + 1$.[2] The Deutsch–Jozsa algorithm takes advantage of quantum parallelism to check all of the elements of the domain at once and only needs to query the oracle once, making its query complexity 1.[2]
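A sketch of the classical deterministic strategy (illustration only, under the promise that the oracle is constant or balanced): query inputs until two distinct outputs are seen, or until $2^{n-1} + 1$ inputs have been queried, at which point the function must be constant.

    from itertools import islice, product

    # Classical deterministic test for the Deutsch-Jozsa toy problem:
    # worst case needs 2^(n-1) + 1 oracle queries.
    def classify_classically(oracle, n):
        queries = 0
        seen = set()
        for x in islice(product([0, 1], repeat=n), 2**(n - 1) + 1):
            seen.add(oracle(x))
            queries += 1
            if len(seen) == 2:
                return "balanced", queries
        return "constant", queries

    balanced = lambda x: x[0]  # balanced: half of all inputs map to 1
    print(classify_classically(balanced, 3))  # ('balanced', 5)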
It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most $O(\sqrt[3]{N})$ steps, a slight speedup over Grover's algorithm, which runs in $O(\sqrt{N})$ steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time.[13] Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.[14][15]
Source: https://en.wikipedia.org/wiki/Quantum_complexity_theory

In the computational complexity theory of computer science, structural complexity theory, or simply structural complexity, is the study of complexity classes, rather than the computational complexity of individual problems and algorithms. It involves the research of both the internal structures of various complexity classes and the relations between different complexity classes.[1]
The theory has emerged as a result of (so far unsuccessful) attempts to resolve the first and still the most important question of this kind, the P = NP problem. Most of the research is done based on the assumption that P is not equal to NP and on a more far-reaching conjecture that the polynomial time hierarchy of complexity classes is infinite.[1]
The compression theorem is an important theorem about the complexity of computable functions.
The theorem states that there exists no largestcomplexity class, with computable boundary, which contains all computable functions.
The space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space $n \log n$ than in space $n$. The somewhat weaker analogous theorems for time are the time hierarchy theorems.
The time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved in $n^2$ time but not in $n$ time.
The Valiant–Vazirani theorem is a theorem in computational complexity theory. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions, published in 1986.[2] The theorem states that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP.
The proof is based on the Mulmuley–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.
The Sipser–Lautemann theorem, or Sipser–Gács–Lautemann theorem, states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically in $\Sigma_2 \cap \Pi_2$.
Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function $f \in \Omega(\log(n))$, $\mathrm{NSPACE}(f(n)) \subseteq \mathrm{DSPACE}(f(n)^2)$.
Toda's theorem is a result that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" (1991) and was given the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in $\mathrm{P^{PP}}$; this implies a closely related statement, that PH is contained in $\mathrm{P^{\#P}}$.
The Immerman–Szelepcsényi theorem was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function $s(n) \geq \log n$. The result is equivalently stated as NL = co-NL; although this is the special case when $s(n) = \log n$, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.
Major directions of research in this area include:[1]
Source: https://en.wikipedia.org/wiki/Structural_complexity_theory

In computational complexity theory, a transcomputational problem is a problem that requires processing of more than $10^{93}$ bits of information.[1] Any number greater than $10^{93}$ is called a transcomputational number. The number $10^{93}$, called Bremermann's limit, is, according to Hans-Joachim Bremermann, the total number of bits processed by a hypothetical computer the size of the Earth within a time period equal to the estimated age of the Earth.[1][2] The term transcomputational was coined by Bremermann.[3]
Exhaustively testing all combinations of an integrated circuit with 309 Boolean inputs and 1 output requires testing a total of $2^{309}$ combinations of inputs. Since $2^{309}$ is a transcomputational number (that is, a number greater than $10^{93}$), the problem of testing such a system of integrated circuits is a transcomputational problem. This means that there is no way to verify the correctness of the circuit for all combinations of inputs through brute force alone.[1][4]
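As a quick sanity check (illustration only), arbitrary-precision integer arithmetic confirms the comparison directly:

    # Check that 2^309 exceeds Bremermann's limit of 10^93.
    print(2**309 > 10**93)   # True
    print(len(str(2**309)))  # 94 (2^309 is a 94-digit number)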
Consider a $q \times q$ array of the chessboard type, each square of which can have one of $k$ colors. Altogether there are $k^n$ color patterns, where $n = q^2$. The problem of determining the best classification of the patterns, according to some chosen criterion, may be solved by a search through all possible color patterns. For two colors, such a search becomes transcomputational when the array is 18×18 or larger. For a 10×10 array, the problem becomes transcomputational when there are 9 or more colors.[1]
This has some relevance in the physiological studies of the retina. The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state), processing the retina as a whole requires processing of more than $10^{300{,}000}$ bits of information. This is far beyond Bremermann's limit.[1]
A system of $n$ variables, each of which can take $k$ different states, can have $k^n$ possible system states. To analyze such a system, a minimum of $k^n$ bits of information are to be processed. The problem becomes transcomputational when $k^n > 10^{93}$. This happens for the following values of $k$ and $n$:[1]
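A small sketch (illustration only; the thresholds follow directly from the stated criterion $k^n > 10^{93}$) that recomputes, for each k, the smallest n at which the problem becomes transcomputational:

    from math import ceil, log

    # Smallest n with k^n > 10^93 for a given number of states k.
    def transcomputational_threshold(k):
        n = ceil(93 / log(k, 10))
        if k**n <= 10**93:  # adjust for boundary cases
            n += 1
        return n

    for k in (2, 3, 4, 5, 10):
        print(k, transcomputational_threshold(k))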
The existence of real-world transcomputational problems implies the limitations of computers as data processing tools. This point is best summarized in Bremermann's own words:[2]
Source: https://en.wikipedia.org/wiki/Transcomputational_problem

The following tables list the computational complexity of various algorithms for common mathematical operations.
Here, complexity refers to the time complexity of performing computations on a multitape Turing machine.[1] See big O notation for an explanation of the notation used.
Note: Due to the variety of multiplication algorithms, $M(n)$ below stands in for the complexity of the chosen multiplication algorithm.
This table lists the complexity of mathematical operations on integers.
On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine, it is possible to multiply two n-bit numbers in time O(n).[6]
Here we consider operations over polynomials, and n denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers.
Many of the methods in this section are given in Borwein & Borwein.[7]
The elementary functions are constructed by composing arithmetic operations, the exponential function ($\exp$), the natural logarithm ($\log$), trigonometric functions ($\sin$, $\cos$), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either $\exp$ or $\log$ in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.
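A minimal sketch of the inversion idea (my own illustration, not from the source): Newton's method applied to $f(y) = \exp(y) - x$ computes $\log x$ using only evaluations of exp.

    from math import exp

    # Invert exp via Newton's method to obtain log, illustrating that an
    # elementary function and its inverse have equivalent complexity.
    def log_via_newton(x, iterations=50):
        y = 1.0  # initial guess
        for _ in range(iterations):
            y = y - 1.0 + x * exp(-y)  # Newton step for f(y) = exp(y) - x
        return y

    print(log_via_newton(10.0))  # approx 2.302585...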
Below, the size $n$ refers to the number of digits of precision at which the function is to be evaluated.
It is not known whether $O(M(n)\log n)$ is the optimal complexity for elementary functions. The best known lower bound is the trivial bound $\Omega(M(n))$.
This table gives the complexity of computing approximations to the given constants to $n$ correct digits.
Algorithms for number theoretical calculations are studied in computational number theory.
The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field.
In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2.[34]
Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing.
Source: https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations

Taher Elgamal (Arabic: طاهر الجمل) (born 18 August 1955) is an Egyptian-American cryptographer and tech executive.[1] Since January 2023, he has been a partner at venture capital firm Evolution Equity Partners.[2] Prior to that, he was the founder and CEO of Securify and the director of engineering at RSA Security. From 1995 to 1998, he was the chief scientist at Netscape Communications. From 2013 to 2023, he served as the Chief Technology Officer (CTO) of Security at Salesforce.[3][4]
Elgamal's 1985 paper entitled "A Public Key Cryptosystem and A Signature Scheme Based on Discrete Logarithms" proposed the design of the ElGamal discrete log cryptosystem and of the ElGamal signature scheme.[5] The latter scheme became the basis for the Digital Signature Algorithm (DSA) adopted by the National Institute of Standards and Technology (NIST) as the Digital Signature Standard (DSS). His development of the Secure Sockets Layer (SSL) cryptographic protocol at Netscape in the 1990s was also the basis for the Transport Layer Security (TLS) and HTTPS Internet protocols.[6][7][8]
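A toy sketch of the ElGamal cryptosystem over $\mathbb{Z}_p^*$ (tiny, insecure parameters chosen purely for illustration; real deployments use large, carefully generated groups):

    import random

    # Textbook ElGamal over Z_p* with toy parameters.
    p, g = 467, 2                    # public prime modulus and base
    x = random.randrange(2, p - 1)   # secret key
    h = pow(g, x, p)                 # public key component h = g^x mod p

    def encrypt(m):
        y = random.randrange(2, p - 1)        # fresh ephemeral randomness
        return pow(g, y, p), (m * pow(h, y, p)) % p

    def decrypt(c1, c2):
        s = pow(c1, x, p)                     # shared secret s = g^(xy)
        return (c2 * pow(s, p - 2, p)) % p    # divide by s (Fermat inverse)

    c1, c2 = encrypt(123)
    print(decrypt(c1, c2))  # 123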
According to an article on Medium,[9] Elgamal's first love was mathematics. Although he came to the United States to pursue a PhD in Electrical Engineering at Stanford University, he said that "cryptography was the most beautiful use of math he'd ever seen".
Elgamal earned a BSc from Cairo University in 1977, and MS and PhD degrees in Electrical Engineering from Stanford University in 1981 and 1984, respectively. Martin Hellman was his dissertation advisor.[10]
Elgamal joined the technical staff at HP Labs in 1984. He served as chief scientist at Netscape Communications from 1995 to 1998,[11] where he was a driving force behind Secure Sockets Layer.[12] Network World described him as the "father of SSL."[6] SSL was the basis for the Transport Layer Security (TLS)[7] and HTTPS Internet protocols.[8][13]
He also was the director of engineering at RSA Security Inc.[14] before founding Securify in 1998 and becoming its chief executive officer. According to an interview with Elgamal,[15] when Securify was acquired by Kroll-O'Gara,[16] he became the president of its information security group. After helping Securify spin out from Kroll-O'Gara,[17] Taher served as the company's chief technology officer (CTO) from 2001 to 2004.[18] In late 2008, Securify was acquired by Secure Computing[19] and is now part of McAfee.[20] In October 2006, he joined Tumbleweed Communications as CTO.[21] Tumbleweed was acquired in 2008 by Axway Inc. Until 2023, Elgamal was CTO for security at Salesforce.com.[4][9][22] He now works as a partner at Evolution Equity Partners.[2]
Elgamal is a co-founder of NokNok Labs[23]and InfoSec Global.[citation needed]He serves as a director of Vindicia, Inc.,[24]which provides online payment services,ZixCorporation, which provides email encryption services, and Bay Dynamics.[25]He has served as an adviser to Cyphort, Bitglass, Onset Ventures, Glenbrook Partners, PGP corporation, Arcot Systems, Finjan, Actiance, Symplified, and Zetta. He served as Chief Security Officer ofAxway, Inc. He is vice chairman of SecureMisr.
Elgamal has also held executive roles at technology and security companies, including
As a scholar, Elgamal published 4 articles:
|
https://en.wikipedia.org/wiki/Taher_Elgamal
|
Homomorphic encryptionis a form ofencryptionthat allows computations to be performed on encrypted data without first having to decrypt it.[1]The resulting computations are left in an encrypted form which, when decrypted, result in an output that is identical to that of the operations performed on the unencrypted data. While homomorphic encryption does not protect against side-channel attacks that observe behavior, it can be used for privacy-preserving outsourcedstorageandcomputation. This allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted.
As an example of a practical application of homomorphic encryption: encrypted photographs can be scanned for points of interest without revealing the contents of a photo. However, observation of side channels can still reveal that a photograph was sent to a point-of-interest lookup service, and hence that photographs were taken.
Thus, homomorphic encryption eliminates the need for processing data in the clear, thereby preventing attacks that would enable an attacker to access that data while it is being processed, usingprivilege escalation.[2]
For sensitive data, such as healthcare information, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing or increasing security to existing services. For example,predictive analyticsin healthcare can be hard to apply via a third-party service provider due tomedical data privacyconcerns. But if the predictive-analytics service provider could operate on encrypted data instead, without having the decryption keys, these privacy concerns are diminished. Moreover, even if the service provider's system is compromised, the data would remain secure.[3]
Homomorphic encryption is a form of encryption with an additional evaluation capability for computing over encrypted data without access to the secret key. The result of such a computation remains encrypted. Homomorphic encryption can be viewed as an extension of public-key cryptography. Homomorphic refers to homomorphism in algebra: the encryption and decryption functions can be thought of as homomorphisms between plaintext and ciphertext spaces.
Homomorphic encryption includes multiple types of encryption schemes that can perform different classes of computations over encrypted data.[4] The computations are represented as either Boolean or arithmetic circuits. Some common types of homomorphic encryption are partially homomorphic, somewhat homomorphic, leveled fully homomorphic, and fully homomorphic encryption:

Partially homomorphic encryption encompasses schemes that support the evaluation of circuits consisting of only one type of gate, e.g., addition or multiplication.
Somewhat homomorphic encryption schemes can evaluate two types of gates, but only for a subset of circuits.
Leveled fully homomorphic encryption supports the evaluation of arbitrary circuits composed of multiple types of gates of bounded (pre-determined) depth.
Fully homomorphic encryption (FHE) allows the evaluation of arbitrary circuits composed of multiple types of gates of unbounded depth, and is the strongest notion of homomorphic encryption.
For the majority of homomorphic encryption schemes, the multiplicative depth of circuits is the main practical limitation in performing computations over encrypted data. Homomorphic encryption schemes are inherentlymalleable. In terms of malleability, homomorphic encryption schemes have weaker security properties than non-homomorphic schemes.
Homomorphic encryption schemes have been developed using different approaches. Specifically, fully homomorphic encryption schemes are often grouped into generations corresponding to the underlying approach.[5]
The problem of constructing a fully homomorphic encryption scheme was first proposed in 1978, within a year of publishing of the RSA scheme.[6]For more than 30 years, it was unclear whether a solution existed. During that period, partial results included the following schemes:
Craig Gentry, usinglattice-based cryptography, described the first plausible construction for a fully homomorphic encryption scheme in 2009.[10]Gentry's scheme supports both addition and multiplication operations on ciphertexts, from which it is possible to construct circuits for performing arbitrary computation. The construction starts from asomewhat homomorphicencryption scheme, which is limited to evaluating low-degree polynomials over encrypted data; it is limited because each ciphertext is noisy in some sense, and this noise grows as one adds and multiplies ciphertexts, until ultimately the noise makes the resulting ciphertext indecipherable.
Gentry then shows how to slightly modify this scheme to make itbootstrappable, i.e., capable of evaluating its own decryption circuit and then at least one more operation. Finally, he shows that any bootstrappable somewhat homomorphic encryption scheme can be converted into a fully homomorphic encryption through a recursive self-embedding. For Gentry's "noisy" scheme, the bootstrapping procedure effectively "refreshes" the ciphertext by applying to it the decryption procedure homomorphically, thereby obtaining a new ciphertext that encrypts the same value as before but has lower noise. By "refreshing" the ciphertext periodically whenever the noise grows too large, it is possible to compute an arbitrary number of additions and multiplications without increasing the noise too much.
Gentry based the security of his scheme on the assumed hardness of two problems: certain worst-case problems overideal lattices, and the sparse (or low-weight) subset sum problem. Gentry's Ph.D. thesis[11]provides additional details. The Gentry-Halevi implementation of Gentry's original cryptosystem reported a timing of about 30 minutes per basic bit operation.[12]Extensive design and implementation work in subsequent years have improved upon these early implementations by many orders of magnitude runtime performance.
In 2010, Marten van Dijk,Craig Gentry,Shai Haleviand Vinod Vaikuntanathan presented a second fully homomorphic encryption scheme,[13]which uses many of the tools of Gentry's construction, but which does not requireideal lattices. Instead, they show that the somewhat homomorphic component of Gentry's ideal lattice-based scheme can be replaced with a very simple somewhat homomorphic scheme that uses integers. The scheme is therefore conceptually simpler than Gentry's ideal lattice scheme, but has similar properties with regards to homomorphic operations and efficiency. The somewhat homomorphic component in the work of Van Dijk et al. is similar to an encryption scheme proposed by Levieil andNaccachein 2008,[14]and also to one that was proposed byBram Cohenin 1998.[15]
Cohen's method is not even additively homomorphic, however. The Levieil–Naccache scheme supports only additions, but it can be modified to also support a small number of multiplications. Many refinements and optimizations of the scheme of Van Dijk et al. were proposed in a sequence of works by Jean-Sébastien Coron, Tancrède Lepoint, Avradip Mandal, David Naccache, and Mehdi Tibouchi.[16][17][18][19] Some of these works also included implementations of the resulting schemes.
The homomorphic cryptosystems of this generation are derived from techniques that were developed starting in 2011–2012 byZvika Brakerski,Craig Gentry,Vinod Vaikuntanathan, and others. These innovations led to the development of much more efficient somewhat and fully homomorphic cryptosystems. These include:
The security of most of these schemes is based on the hardness of the(Ring) Learning With Errors(RLWE) problem, except for the LTV and BLLN schemes that rely on anoverstretched[26]variant of theNTRUcomputational problem. This NTRU variant was subsequently shown vulnerable to subfield lattice attacks,[27][26]which is why these two schemes are no longer used in practice.
All the second-generation cryptosystems still follow the basic blueprint of Gentry's original construction, namely they first construct a somewhat homomorphic cryptosystem and then convert it to a fully homomorphic cryptosystem using bootstrapping.
A distinguishing characteristic of the second-generation cryptosystems is that they all feature a much slower growth of the noise during the homomorphic computations. Additional optimizations byCraig Gentry,Shai Halevi, andNigel Smartresulted in cryptosystems with nearly optimal asymptotic complexity: PerformingT{\displaystyle T}operations on data encrypted with security parameterk{\displaystyle k}has complexity of onlyT⋅polylog(k){\displaystyle T\cdot \mathrm {polylog} (k)}.[28][29][30]These optimizations build on the Smart-Vercauteren techniques that enable packing of many plaintext values in a single ciphertext and operating on all these plaintext values in aSIMDfashion.[31]Many of the advances in these second-generation cryptosystems were also ported to the cryptosystem over the integers.[18][19]
Another distinguishing feature of second-generation schemes is that they are efficient enough for many applications even without invoking bootstrapping, instead operating in the leveled FHE mode.
In 2013,Craig Gentry,Amit Sahai, andBrent Waters(GSW) proposed a new technique for building FHE schemes that avoids an expensive "relinearization" step in homomorphic multiplication.[32]Zvika Brakerski and Vinod Vaikuntanathan observed that for certain types of circuits, the GSW cryptosystem features an even slower growth rate of noise, and hence better efficiency and stronger security.[33]Jacob Alperin-Sheriff and Chris Peikert then described a very efficient bootstrapping technique based on this observation.[34]
These techniques were further improved to develop efficient ring variants of the GSW cryptosystem: FHEW (2014)[35]and TFHE (2016).[36]The FHEW scheme was the first to show that by refreshing the ciphertexts after every single operation, it is possible to reduce the bootstrapping time to a fraction of a second. FHEW introduced a new method to compute Boolean gates on encrypted data that greatly simplifies bootstrapping and implemented a variant of the bootstrapping procedure.[34]The efficiency of FHEW was further improved by the TFHE scheme, which implements a ring variant of the bootstrapping procedure[37]using a method similar to the one in FHEW.
In 2016,Jung Hee Cheon, Andrey Kim, Miran Kim, and Yongsoo Song (CKKS)[38]proposed an approximate homomorphic encryption scheme that supports a special kind of fixed-point arithmetic that is commonly referred to asblock floating pointarithmetic. The CKKS scheme includes an efficient rescaling operation that scales down an encrypted message after a multiplication. For comparison, such rescaling requires bootstrapping in the BGV and BFV schemes. The rescaling operation makes CKKS scheme the most efficient method for evaluating polynomial approximations, and is the preferred approach for implementingprivacy-preserving machine learning applications. The scheme introduces several approximation errors, both nondeterministic and deterministic, that require special handling in practice.[39]
A 2020 article by Baiyu Li and Daniele Micciancio discusses passive attacks against CKKS, suggesting that the standard IND-CPA definition may not be sufficient in scenarios where decryption results are shared.[40]The authors apply the attack to four modern homomorphic encryption libraries (HEAAN, SEAL, HElib and PALISADE) and report that it is possible to recover the secret key from decryption results in several parameter configurations. The authors also propose mitigation strategies for these attacks, and include a Responsible Disclosure in the paper suggesting that the homomorphic encryption libraries already implemented mitigations for the attacks before the article became publicly available. Further information on the mitigation strategies implemented in the homomorphic encryption libraries has also been published.[41][42]
In the following examples, the notationE(x){\displaystyle {\mathcal {E}}(x)}is used to denote the encryption of the messagex{\displaystyle x}.
Unpadded RSA
If the RSA public key has modulus n and encryption exponent e, then the encryption of a message m is given by E(m) = m^e mod n. The homomorphic property is then

E(m1) · E(m2) = m1^e · m2^e mod n = (m1 · m2)^e mod n = E(m1 · m2).
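A toy demonstration of this property in Python (a sketch with deliberately small, insecure parameters; the numbers are illustrative only):

    # Unpadded ("textbook") RSA is multiplicatively homomorphic.
    n, e = 3233, 17                    # toy modulus 53*61 and public exponent
    def enc(m):
        return pow(m, e, n)

    m1, m2 = 7, 12
    assert enc(m1) * enc(m2) % n == enc(m1 * m2 % n)   # E(m1)*E(m2) = E(m1*m2)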
ElGamal
In the ElGamal cryptosystem, in a cyclic group G of order q with generator g, if the public key is (G, q, g, h), where h = g^x and x is the secret key, then the encryption of a message m is E(m) = (g^r, m·h^r), for some random r ∈ {0, …, q − 1}. The homomorphic property is then

E(m1) · E(m2) = (g^{r1}, m1·h^{r1}) (g^{r2}, m2·h^{r2}) = (g^{r1+r2}, (m1·m2)·h^{r1+r2}) = E(m1 · m2).
Goldwasser–Micali
In the Goldwasser–Micali cryptosystem, if the public key is the modulus n and quadratic non-residue x, then the encryption of a bit b is E(b) = x^b r² mod n, for some random r ∈ {0, …, n − 1}. The homomorphic property is then

E(b1) · E(b2) = x^{b1} r1² · x^{b2} r2² mod n = x^{b1+b2} (r1·r2)² mod n = E(b1 ⊕ b2),
where⊕{\displaystyle \oplus }denotes addition modulo 2, (i.e.,exclusive-or).
Benaloh
In the Benaloh cryptosystem, if the public key is the modulus n and the base g with a blocksize of c, then the encryption of a message m is E(m) = g^m r^c mod n, for some random r ∈ {0, …, n − 1}. The homomorphic property is then

E(m1) · E(m2) = (g^{m1} r1^c)(g^{m2} r2^c) mod n = g^{m1+m2} (r1·r2)^c mod n = E(m1 + m2 mod c).
Paillier
In the Paillier cryptosystem, if the public key is the modulus n and the base g, then the encryption of a message m is E(m) = g^m r^n mod n², for some random r ∈ {0, …, n − 1}. The homomorphic property is then

E(m1) · E(m2) = (g^{m1} r1^n)(g^{m2} r2^n) mod n² = g^{m1+m2} (r1·r2)^n mod n² = E(m1 + m2).
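A Python sketch of Paillier's additive property with toy parameters (g = n + 1 is a standard choice of base; the sizes here are insecure and for illustration only):

    import math, random

    p, q = 53, 61                                  # toy primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                                      # standard choice of base
    lam = math.lcm(p - 1, q - 1)                   # private key
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # precomputed decryption constant

    def enc(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:                 # r must be invertible mod n
            r = random.randrange(1, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    m1, m2 = 20, 35
    # Multiplying ciphertexts adds the underlying plaintexts.
    assert dec(enc(m1) * enc(m2) % n2) == m1 + m2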
Other partially homomorphic cryptosystems
A cryptosystem that supportsarbitrary computationon ciphertexts is known as fully homomorphic encryption (FHE). Such a scheme enables the construction of programs for any desirable functionality, which can be run on encrypted inputs to produce an encryption of the result. Since such a program need never decrypt its inputs, it can be run by an untrusted party without revealing its inputs and internal state. Fully homomorphic cryptosystems have great practical implications in the outsourcing of private computations, for instance, in the context ofcloud computing.[45]
A list of open-source FHE libraries implementing second-generation (BGV/BFV), third-generation (FHEW/TFHE), and/or fourth-generation (CKKS) FHE schemes is provided below.
There are several open-source implementations of fully homomorphic encryption schemes. Second-generation and fourth-generation FHE scheme implementations typically operate in the leveled FHE mode (though bootstrapping is still available in some libraries) and support efficientSIMD-like packing of data; they are typically used to compute on encrypted integers or real/complex numbers. Third-generation FHE scheme implementations often bootstrap after each operation but have limited support for packing; they were initially used to compute Boolean circuits over encrypted bits, but have been extended to support integer arithmetics and univariate function evaluation. The choice of using a second-generation vs. third-generation vs fourth-generation scheme depends on the input data types and the desired computation.
Rust implementation of TFHE-extended. Supporting Boolean, integer operation and univariate function evaluation (via Programmable Bootstrapping[58]).
TFHE-extended compiler with a Python Frontend.[61]
In 2017, researchers fromIBM,Microsoft,Intel, theNIST, and others formed the openHomomorphic Encryption Standardization Consortium, which maintains a community securityHomomorphic Encryption Standard.[68][69][70]
|
https://en.wikipedia.org/wiki/Homomorphic_encryption
|
This article summarizes publicly known attacks against block ciphers and stream ciphers. Note that there may be attacks that are not publicly known, and not all entries are necessarily up to date.
This column lists the complexity of the attack:
Attacks that lead to disclosure of the key or plaintext.
Attacks that allow distinguishing ciphertext from random data.
Attacks that lead to disclosure of the key.
Attacks that allow distinguishing ciphertext from random data.
|
https://en.wikipedia.org/wiki/Cipher_security_summary
|
eSTREAMis a project to "identify newstream cipherssuitable for widespread adoption",[1][2]organised by theEUECRYPTnetwork. It was set up as a result of the failure of all six stream ciphers submitted to theNESSIEproject. The call for primitives was first issued in November 2004. The project was completed in April 2008. The project was divided into separate phases and the project goal was to find algorithms suitable for different application profiles.
The submissions to eSTREAM fall into either or both of two profiles: Profile 1 (stream ciphers for software applications with high throughput requirements) and Profile 2 (stream ciphers for hardware applications with restricted resources such as limited storage, gate count, or power consumption).
Both profiles contain an "A" subcategory (1A and 2A) with ciphers that also provide authentication in addition to encryption. In Phase 3, none of the ciphers providing authentication was considered (the NLS cipher had authentication removed to improve its performance).
As of September 2011, the following ciphers make up the eSTREAM portfolio:[3]
These are all free for any use. Rabbit was the only one that had a patent pending during the eStream competition, but it was released into the public domain in October 2008.[4]
The original portfolio, published at the end of Phase 3, consisted of the above ciphers plusF-FCSRwhich was in Profile 2.[5]However,cryptanalysisof F-FCSR[6]led to a revision of the portfolio in September 2008 which removed that cipher.
Phase 1 included a general analysis of all submissions with the purpose of selecting a subset of the submitted designs for further scrutiny. The designs were scrutinized based on criteria of security, performance (with respect to theblock cipherAES—a US Government approved standard, as well as the other candidates), simplicity and flexibility, justification and supporting analysis, and clarity and completeness of the documentation. Submissions in Profile 1 were only accepted if they demonstrated software performance superior to AES-128 incounter mode.
Activities in Phase 1 included a large amount of analysis and presentations of analysis results as well as discussion. The project also developed a framework for testing the performance of the candidates. The framework was then used to benchmark the candidates on a wide variety of systems.
On 27 March 2006, the eSTREAM project officially announced the end of Phase 1.
On 1 August 2006, Phase 2 officially started. For each of the profiles, a number of algorithms were selected as Focus Phase 2 algorithms: designs that eSTREAM found of particular interest and for which it encouraged further cryptanalysis and performance evaluation. Additionally, a number of algorithms for each profile were accepted as Phase 2 algorithms, meaning that they remained valid eSTREAM candidates. The Focus 2 candidates were to be re-classified every six months.
Phase 3 started in April 2007. Candidates for Profile 1 (software) were:
Candidates for Profile 2 (hardware) were:
Phase 3 ended on 15 April 2008, with the announcement of the candidates that had been selected for the final eSTREAM portfolio. The selected algorithms were:
The eSTREAM portfolio ciphers are, as of January 2012:[7]
Versions of the eSTREAM portfolio ciphers that support extended key lengths:
Note that the 128-bit version of Grain v1 is no longer supported by its designers and has been replaced by Grain-128a. Grain-128a is not considered to be part of the eSTREAM portfolio.
As of December 2008:
This cipher was in the original portfolio but was removed in revision 1, published in September 2008.
|
https://en.wikipedia.org/wiki/ESTREAM
|
Incomputing, alinear-feedback shift register(LFSR) is ashift registerwhose input bit is alinear functionof its previous state.
The most commonly used linear function of single bits isexclusive-or(XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value.
The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with awell-chosen feedback functioncan produce a sequence of bits that appears random and has avery long cycle.
Applications of LFSRs include generatingpseudo-random numbers,pseudo-noise sequences, fast digital counters, andwhitening sequences. Both hardware and software implementations of LFSRs are common.
The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR.[1] In general, the arithmetic behind LFSRs makes them very elegant objects to study and implement: one can produce relatively complex logic with simple building blocks. However, other methods that are less elegant but perform better should be considered as well.
The bit positions that affect the next state are called the taps. In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit, which is always also a tap. To obtain the next state, the tap bits are XOR-ed sequentially; then all bits are shifted one place to the right, the rightmost bit is discarded, and the result of XOR-ing the tap bits is fed back into the now-vacant leftmost bit. To obtain the pseudorandom output stream, read the rightmost bit after each state transition.
The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered abinary numeral systemjust as valid asGray codeor the natural binary code.
The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2. This means that the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is

x¹⁶ + x¹⁴ + x¹³ + x¹¹ + 1.
The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e.x0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively.
The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive over the Galois field GF(2).[3][4] This means that the following conditions are necessary (but not sufficient):

the number of taps is even;
the set of taps is setwise co-prime, i.e., there must be no divisor other than 1 common to all taps.
Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references.
There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in ann-bit LFSR is[n,A,B,C, 0], where the 0 corresponds to thex0= 1 term, then the corresponding "mirror" sequence is[n,n−C,n−B,n−A, 0]. So the tap sequence[32, 22, 2, 1, 0]has as its counterpart[32, 31, 30, 10, 0]. Both give a maximum-length sequence.
An example in C is below:
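One listing along these lines (a sketch; the seed 0xACE1 is an arbitrary nonzero choice, and the function returns the period, 65535 for this maximal-length register):

    #include <stdint.h>

    /* Fibonacci LFSR, taps [16,14,13,11]:
       feedback polynomial x^16 + x^14 + x^13 + x^11 + 1. */
    unsigned lfsr_fib(void)
    {
        uint16_t start_state = 0xACE1u;  /* any nonzero value works */
        uint16_t lfsr = start_state;
        uint16_t bit;
        unsigned period = 0;

        do {
            /* XOR the tap bits (shifts 0, 2, 3, 5 from the output end). */
            bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
            lfsr = (uint16_t)((lfsr >> 1) | (bit << 15));
            ++period;
        } while (lfsr != start_state);

        return period;
    }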
If a fast parity or popcount operation is available, the feedback bit can be computed more efficiently as the dot product of the register with the characteristic polynomial:
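For the taps above the mask is 0x002D (bits 0, 2, 3 and 5); a sketch, using a compiler builtin as a stand-in for a hardware parity instruction:

    bit = __builtin_parity(lfsr & 0x002Du);   /* XOR of bits 0, 2, 3 and 5 */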
If a rotation operation is available, the new state can be computed as
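A sketch of that form: place the feedback bit in the low bit and rotate the whole register right by one (rotr16 is assumed to compile to a single rotate instruction):

    static inline uint16_t rotr16(uint16_t v, int s)
    {
        return (uint16_t)((v >> s) | (v << (16 - s)));
    }

    lfsr = rotr16((uint16_t)((lfsr & ~1u) | bit), 1);  /* == (lfsr >> 1) | (bit << 15) */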
This LFSR configuration is also known asstandard,many-to-oneorexternal XOR gates. The alternative Galois configuration is described in the next section.
A sample Python implementation of a similar Fibonacci LFSR (16-bit, with taps at [16,15,13,4]) would be:
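One plausible such listing (a sketch; the seed is arbitrary but must be nonzero):

    # Fibonacci LFSR, taps [16,15,13,4]: x^16 + x^15 + x^13 + x^4 + 1.
    start_state = 1 << 15 | 1
    lfsr = start_state
    period = 0

    while True:
        # Tap positions 16, 15, 13, 4 correspond to shifts 0, 1, 3, 12.
        bit = (lfsr ^ (lfsr >> 1) ^ (lfsr >> 3) ^ (lfsr >> 12)) & 1
        lfsr = (lfsr >> 1) | (bit << 15)
        period += 1
        if lfsr == start_state:
            print(period)  # 65535 when the polynomial is primitive
            break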
Here a register of 16 bits is used, and the XOR taps at the 4th, 13th, 15th and 16th bits establish a maximum-length sequence.
Named after the French mathematicianÉvariste Galois, an LFSR in Galois configuration, which is also known asmodular,internal XORs, orone-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time).[5]In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit. The effect of this is that when the output bit is zero, all the bits in the register shift to the right unchanged, and the input bit becomes zero. When the output bit is one, the bits in the tap positions all flip (if they are 0, they become 1, and if they are 1, they become 0), and then the entire register is shifted to the right and the input bit becomes 1.
To generate the same output stream, the order of the taps is thecounterpart(see above) of the order for the conventional LFSR, otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different startpoint will be needed to get the same output each cycle.
Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:
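A listing along these lines (a sketch; the toggle mask 0xB400 sets bits 15, 13, 12 and 10, matching taps 16, 14, 13 and 11):

    #include <stdint.h>

    unsigned lfsr_galois(void)
    {
        uint16_t start_state = 0xACE1u;  /* any nonzero value works */
        uint16_t lfsr = start_state;
        unsigned period = 0;

        do {
            unsigned lsb = lfsr & 1u;    /* the output bit */
            lfsr >>= 1;                  /* shift the register */
            if (lsb)                     /* if the output bit is 1, */
                lfsr ^= 0xB400u;         /*   apply the toggle mask */
            ++period;
        } while (lfsr != start_state);

        return period;
    }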
The branch if (lsb) lfsr ^= 0xB400u; can also be written as lfsr ^= (-lsb) & 0xB400u;, which may produce more efficient code on some compilers. In addition, the left-shifting variant may produce even better code, as the msb is the carry from the addition of lfsr to itself.
State and resulting bits can also be combined and computed in parallel. The following function calculates the next 64 bits using the 63-bit polynomial x⁶³ + x⁶² + 1:
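One possible realization (a sketch, not a canonical listing; it follows the Fibonacci convention of the earlier examples and exploits the fact that a feedback bit re-enters the tap positions only after 62 further shifts, so one round of word-wide XORs suffices):

    #include <stdint.h>

    /* Advance a 63-bit Fibonacci LFSR (x^63 + x^62 + 1, state in the
       low 63 bits of *state, which must be nonzero) by 64 steps and
       return the 64 output bits, earliest output in bit 0. */
    uint64_t lfsr63_next64(uint64_t *state)
    {
        uint64_t x  = *state;                               /* s_0 .. s_62 */
        uint64_t fb = (x ^ (x >> 1)) & ((1ULL << 62) - 1);  /* fb_k = s_k ^ s_(k+1) */
        fb |= (((x >> 62) ^ fb) & 1) << 62;                 /* fb_62 = s_62 ^ fb_0 */
        fb |= ((fb ^ (fb >> 1)) & 1) << 63;                 /* fb_63 = fb_0 ^ fb_1 */
        *state = fb >> 1;                                   /* new state: fb_1 .. fb_63 */
        return x | (fb << 63);                              /* outputs s_0..s_62, fb_0 */
    }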
Binary Galois LFSRs like the ones shown above can be generalized to anyq-ary alphabet {0, 1, ...,q− 1} (e.g., for binary,q= 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to additionmodulo-q(note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo-q) by aq-ary value, which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generateGalois fieldsfor arbitrary prime values ofq.
As shown byGeorge Marsaglia[6]and further analysed byRichard P. Brent,[7]linear feedback shift registers can be implemented using XOR and Shift operations. This approach lends itself to fast execution in software because these operations typically map efficiently into modern processor instructions.
Below is a C code example for a 16-bit maximal-period Xorshift LFSR using the 7,9,13 triplet from John Metcalf:[8]
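A sketch of such a listing (the shift directions right, left, right are the ones commonly quoted for this triplet; the returned period is 65535 when the generator is maximal):

    #include <stdint.h>

    unsigned lfsr_xorshift(void)
    {
        uint16_t start_state = 0xACE1u;
        uint16_t lfsr = start_state;
        unsigned period = 0;

        do {
            lfsr ^= lfsr >> 7;
            lfsr ^= (uint16_t)(lfsr << 9);
            lfsr ^= lfsr >> 13;
            ++period;
        } while (lfsr != start_state);

        return period;
    }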
Binary LFSRs of both Fibonacci and Galois configurations can be expressed as linear functions using matrices in F₂ (see GF(2)).[9] Using the companion matrix A of the characteristic polynomial of the LFSR and denoting the seed as a column vector (a₀, a₁, …, a_{n−1})ᵀ, the state of the register in Fibonacci configuration after k steps is given by

Aᵏ (a₀, a₁, …, a_{n−1})ᵀ.
The matrix for the corresponding Galois form is the transpose of the Fibonacci companion matrix.
For a suitable initialisation, the top coefficient of the column vector

Aᵏ (a₀, a₁, …, a_{n−1})ᵀ

gives the term a_k of the original sequence.
These forms generalize naturally to arbitrary fields.
The following table lists examples of maximal-length feedback polynomials (primitive polynomials) for shift-register lengths up to 24. The formalism for maximum-length LFSRs was developed bySolomon W. Golombin his 1967 book.[10]The number of differentprimitive polynomialsgrows exponentially with shift-register length and can be calculated exactly usingEuler's totient function[11](sequenceA011260in theOEIS).
LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such asdirect-sequence spread spectrumradio. LFSRs have also been used for generating an approximation ofwhite noisein variousprogrammable sound generators.
The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable.[12] LFSR counters have simpler feedback logic than natural binary counters or Gray-code counters, and therefore can operate at higher clock rates. However, it is necessary to ensure that the LFSR never enters a lockup state (all zeros for a XOR-based LFSR, and all ones for a XNOR-based LFSR), for example by presetting it at start-up to any other state in the sequence. It is possible to count both up and down with an LFSR. LFSRs have also been used as program counters for CPUs; this requires that the program itself be "scrambled", and it is done both to save on gates when they are at a premium (an LFSR uses fewer gates than an adder) and for speed (an LFSR does not require a long carry chain).
The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. One can obtain any other period by adding to an LFSR that has a longer period some logic that shortens the sequence by skipping some states.
LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver, using the Berlekamp–Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext.
Three general methods are employed to reduce this problem in LFSR-based stream ciphers:
LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken, and both A5/1 and E0 have serious weaknesses.[14][15]
The linear feedback shift register has a strong relationship tolinear congruential generators.[16]
LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis.
Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit. Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications.
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature, in which case the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XOR/XNORed with parallel input bit zero and the "taps". Every other flip-flop input is XOR/XNORed with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states, as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time.
Recent applications[17] propose set-reset flip-flops as "taps" of the LFSR. This allows the BIST system to optimise storage, since set-reset flip-flops can save the initial seed needed to generate the whole stream of bits from the LFSR. Nevertheless, this requires changes in the architecture of BIST, so it is an option for specific applications.
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called a chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access.
Neither scheme should be confused withencryptionorencipherment; scrambling and spreading with LFSRs donotprotect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
Digital broadcasting systems that use linear-feedback registers:
Other digital communications systems using LFSRs:
LFSRs are also used inradio jammingsystems to generate pseudo-random noise to raise the noise floor of a target communication system.
The German time signalDCF77, in addition to amplitude keying, employsphase-shift keyingdriven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise.[19]
|
https://en.wikipedia.org/wiki/Linear-feedback_shift_register
|
Anonlinear-feedback shift register(NLFSR) is ashift registerwhose input bit is a non-linear function of its previous state.
For an n-bit shift registerrits next state is defined as:
ri+1(b0,b1,b2,…,bn−1)=ri(b1,b2,…,f(b0,b1,b2,…,bn−1)){\displaystyle r_{i+1}(b_{0},b_{1},b_{2},\ldots ,b_{n-1})=r_{i}(b_{1},b_{2},\ldots ,f(b_{0},b_{1},b_{2},\ldots ,b_{n-1}))},
wherefis the non-linear feedback function.[1]
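A minimal sketch of stepping such a register in Python (the feedback function f here is an arbitrary illustrative choice, not one known to give a maximal period):

    def nlfsr_step(state: int, n: int) -> int:
        """One step of an n-bit NLFSR with f(b) = b0 XOR b1 XOR (b2 AND b3)."""
        b = [(state >> i) & 1 for i in range(n)]
        fb = b[0] ^ b[1] ^ (b[2] & b[3])     # the AND term makes f non-linear
        return (state >> 1) | (fb << (n - 1))

    # Because f XORs in the dropped bit b0, the step is invertible, so every
    # seed lies on a cycle; measure the cycle length of one seed.
    n, seed = 8, 0b00000001
    state, period = nlfsr_step(seed, n), 1
    while state != seed:
        state = nlfsr_step(state, n)
        period += 1
    print(period)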
Nonlinear-feedback shift registers are components in modernstream ciphers, especially inRFIDandsmartcardapplications. NLFSRs are known to be more resistant to cryptanalytic attacks than Linear Feedback Shift Registers (LFSRs).
It is known how to generate an n-bit NLFSR of maximal length 2ⁿ, generating a De Bruijn sequence, by extending a maximal-length LFSR with n stages;[2] but the construction of other large NLFSRs with guaranteed long periods remains an open problem.[3] Using brute-force methods, lists of maximum-period n-bit NLFSRs have been compiled for n ≤ 25 and for n = 27.[4][1]
Newer methods suggest the use of evolutionary algorithms to introduce non-linearity.[5] In these works, an evolutionary algorithm learns how to apply different operations to strings from an LFSR so as to enhance their quality, as measured by a fitness function (here, the NIST statistical test protocol[6]).
|
https://en.wikipedia.org/wiki/Nonlinear-feedback_shift_register
|
TheA. W. Faber Model 366was an unusual model ofslide rule, manufactured in Germany by theA. W. Faber Companyaround 1909, with scales that followed a system invented by Johannes Schumacher (1858-1930) that useddiscrete logarithmsto calculate products of integers without approximation.[1][2][3]
The Model 366 is notable for its table of numbers, mapping the numbers 1 to 100 to a permutation of the numbers 0 to 99 in a pattern based on discrete logarithms. The markings on the table are:[2]
The slide rule has two scales on each side of the upper edge of the slider marked with the integers 1 to 100 in a different permuted order, evenly spaced apart. The ordering of the numbers on these scales is
which corresponds to theinverse permutationto the one given by the number table.
There are also two scales on each side of the lower edge of the slider, consisting of the integers 0 to 100 similarly spaced, but in ascending order, with the zero on the lower scales lining up with the 1 on the upper scales.
Schumacher's indices are an example ofJacobi indices, generated withp= 101 andg= 2.[5]Schumacher's system of indices correctly generates the desired products, but is not unique: several other similar systems have been created by others, including systems byLudgate,Remakand Korn.[6]
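The table can be reproduced in a few lines of Python (a sketch tabulating discrete logarithms base g = 2 modulo p = 101):

    # Regenerate Schumacher's indices: index[k] is the discrete log of k base 2 mod 101.
    p, g = 101, 2
    index = {}
    power = 1
    for e in range(p - 1):        # 2 is a primitive root mod 101, so this reaches 1..100
        index[power] = e
        power = power * g % p

    # Indices add under multiplication modulo p - 1 = 100.
    a, b = 7, 9
    assert (index[a] + index[b]) % (p - 1) == index[a * b % p]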
An elaborate system of rules had to be used to compute products of numbers larger than 101.[1]
Very few of the Model 366 slide rules remain, with only five known to have survived.[1]
|
https://en.wikipedia.org/wiki/A._W._Faber_Model_366
|
Percy Edwin Ludgate(2 August 1883 – 16 October 1922) was anIrishamateur scientist who designed the secondanalytical engine(general-purposeTuring-completecomputer) in history.[1][2]
Ludgate was born on 2 August 1883 inSkibbereen,County Cork, to Michael Ludgate and Mary McMahon.[3][2]In the 1901 census, he is listed asCivil ServantNational Education (Boy Copyist) inDublin.[4]In the 1911 census, he is also in Dublin, as a Commercial Clerk (Corn Merchant).[5]He studied accountancy atRathmines College of Commerce, earning a gold medal based on the results of his final examinations in 1917.[6]At some date before or after then, he joined Kevans & Son, accountants.[3]
It seems that Ludgate worked as a clerk for an unknown corn merchant, in Dublin, and pursued his interest in calculating machines at night.[6]Charles Babbagein 1843 and Ludgate in 1909 designed the only two mechanical analytical engines before the electromechanical analytical engine ofLeonardo Torres Quevedoof 1920 and its few successors, and the six first-generationelectronicanalytical engines of 1949.
Working alone, Ludgate designed an analytical engine while unaware of Babbage's designs, although he later went on to write about Babbage's machine. Ludgate's engine used multiplication as its base mechanism (unlike Babbage's which used addition). It incorporated the firstmultiplier-accumulator, and was the first to exploit a multiplier-accumulator to perform division, using multiplication seeded by reciprocal, via the convergent series(1 +x)−1.
Ludgate's engine also used a mechanism similar to slide rules, but employing unique, discrete "Logarithmic Indexes" (now known asIrish logarithms),[7]as well as a novel memory system utilizing concentric cylinders, storing numbers as displacements of rods in shuttles. His design featured several other novel features, including for program control (e.g.,preemptionandsubroutines– ormicrocode, depending on one's viewpoint). The design is so dissimilar from Babbage's that it can be considered a second, unique type ofanalytical engine, which thus preceded the third (electromechanical) and fourth (electronic) types. The engine's precise mechanism is unknown, as the only written accounts which survive do not detail its workings, although he stated in 1914 that "[c]omplete descriptive drawings of the machine exist, as well as a description in manuscript" – these have never been found.[8]
Ludgate was one of just a few independent workers in the field of science and mathematics.[citation needed]His inventions were worked on outside a lab. He worked on them only part-time, often until the early hours of the morning. Many publications refer to him as an accountant, but that came only after his 1909 analytical engine paper. Little is known about his personal life, as his only known records are his scientific writings. Prior to 2016, the best source of information about Ludgate and his significance was in the work of ProfessorBrian Randell.[9]Since then, further investigation is underway atTrinity College, Dublinunder the auspices of theJohn Gabriel Byrne Computer Science Collection.[10]
Ludgate died ofpneumoniaon 19 October 1922,[3]and is buried inMount Jerome Cemeteryin Dublin.[6]
In 1960, a German patent lawyer working on behalf ofIBMsuccessfully relied on Ludgate’s 1909 paper to defeat an important 1941 patent application by the pioneering computer scientistKonrad Zuse. Had the patent been approved, Zuse would have controlled the primary intellectual property for crucial techniques that all computers now use; this would have changed his career and could well have altered the commercial trajectory of the computer industry.[11][12]
In 1991, a prize for the best final-year project in the Moderatorship incomputer sciencecourse atTrinity College, Dublin– theLudgate Prize– was instituted in his honour,[13]and in 2016 the Ludgate Hub e-business incubation centre was opened inSkibbereen, where he was born.[6]
In October 2022, a plaque from theNational Committee for Commemorative Plaques in Science and Technologywas unveiled at Ludgate's home inDrumcondraby the Provost of Trinity College,Linda Doyle. (As can be seen in the photo, the year of birth is listed incorrectly on the plaque.)[14][15]
Also in 2022, a podcast with Dr Chris Horn discussed Percy Ludgate;[16] then, in October 2024, a further podcast on Percy Ludgate was created with Google's Gemini AI.[17]
|
https://en.wikipedia.org/wiki/Percy_Ludgate
|
TheIrish logarithmwas a system of number manipulation invented byPercy Ludgatefor machine multiplication. The system used a combination of mechanical cams aslookup tablesand mechanical addition to sum pseudo-logarithmic indices to produce partial products, which were then added to produce results.[1]
The technique is similar toZech logarithms(also known as Jacobi logarithms), but uses a system of indices original to Ludgate.[2]
Ludgate's algorithm compresses the multiplication of two single decimal numbers into twotable lookups(to convert the digits into indices), the addition of the two indices to create a new index which is input to a second lookup table that generates the output product.[3]Because both lookup tables are one-dimensional, and the addition of linear movements is simple to implement mechanically, this allows a less complex mechanism than would be needed to implement a two-dimensional 10×10 multiplication lookup table.
Ludgate stated that he deliberately chose the values in his tables to be as small as he could make them; given this, Ludgate's tables can be simply constructed from first principles, either via pen-and-paper methods, or a systematic search using only a few tens of lines of program code.[4]They do not correspond to either Zech logarithms,Remak indexesorKorn indexes.[4]
The following is an implementation of Ludgate's Irish logarithm algorithm in thePythonprogramming language:
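A sketch of such an implementation, with the Table 1 values as given in Ludgate's paper:

    # Table 1: maps each decimal digit to its Ludgate index.
    table1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

    # Table 2 is derived from Table 1: entry table1[a] + table1[b] holds a*b.
    table2 = [0] * 101
    for a in range(10):
        for b in range(10):
            table2[table1[a] + table1[b]] = a * b

    def multiply(a: int, b: int) -> int:
        """Multiply two decimal digits with two lookups and one addition."""
        return table2[table1[a] + table1[b]]

    assert all(multiply(a, b) == a * b for a in range(10) for b in range(10))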
Table 1 is taken from Ludgate's original paper; given the first table, the contents of Table 2 can be trivially derived from Table 1 and the definition of the algorithm. Note that since the last third of the second table is entirely zeros, this could be exploited to further simplify a mechanical implementation of the algorithm.
|
https://en.wikipedia.org/wiki/Irish_logarithm
|
In number theory, the Dedekind psi function is the multiplicative function on the positive integers defined by

ψ(n) = n ∏_{p∣n} (1 + 1/p),
where the product is taken over all primesp{\displaystyle p}dividingn.{\displaystyle n.}(By convention,ψ(1){\displaystyle \psi (1)}, which is theempty product, has value 1.) The function was introduced byRichard Dedekindin connection withmodular functions.
The value of ψ(n) for the first few integers n is: 1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, … (sequence A001615 in the OEIS).
The functionψ(n){\displaystyle \psi (n)}is greater thann{\displaystyle n}for alln{\displaystyle n}greater than 1, and is even for alln{\displaystyle n}greater than 2. Ifn{\displaystyle n}is asquare-free numberthenψ(n)=σ(n){\displaystyle \psi (n)=\sigma (n)}, whereσ(n){\displaystyle \sigma (n)}is thesum-of-divisors function.
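A short sketch of the definition in Python (assuming sympy for the prime factorization):

    from sympy import primefactors

    def dedekind_psi(n: int) -> int:
        # psi(n) = n * product over distinct primes p | n of (1 + 1/p)
        result = n
        for p in primefactors(n):
            result = result // p * (p + 1)
        return result

    print([dedekind_psi(n) for n in range(1, 11)])
    # [1, 3, 4, 6, 6, 12, 8, 12, 12, 18]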
The ψ function can also be defined by setting ψ(pⁿ) = (p + 1)pⁿ⁻¹ for powers of any prime p, and then extending the definition to all integers by multiplicativity. This also leads to a proof of the generating function in terms of the Riemann zeta function, which is

∑_{n≥1} ψ(n) / nˢ = ζ(s) ζ(s − 1) / ζ(2s).

This is also a consequence of the fact that we can write ψ as a Dirichlet convolution, ψ = Id ∗ |μ|.
There is an additive definition of the psi function as well. Quoting from Dickson,[1]
R. Dedekind[2] proved that, if n is decomposed in every way into a product ab and if e is the g.c.d. of a, b, then

∑_a a · φ(e)/e = n ∏_{p∣n} (1 + 1/p) = ψ(n),

where a ranges over all divisors of n and p over the prime divisors of n, and φ is the totient function.
The generalization to higher orders via ratios of Jordan's totient is

ψ_k(n) = J_{2k}(n) / J_k(n),

with Dirichlet series

∑_{n≥1} ψ_k(n) / nˢ = ζ(s) ζ(s − k) / ζ(2s).
It is also the Dirichlet convolution of a power and the square of the Möbius function,

ψ_k = Id_k ∗ μ², where Id_k(n) = n^k.
If ε(n) = 1 when n is a perfect square and ε(n) = 0 otherwise is the characteristic function of the squares, another Dirichlet convolution leads to the generalized σ-function:

σ_k = ψ_k ∗ ε.
|
https://en.wikipedia.org/wiki/Dedekind_psi_function
|
Inmathematics, and specifically innumber theory, adivisor functionis anarithmetic functionrelated to thedivisorsof aninteger. When referred to asthedivisor function, it counts thenumber of divisors of an integer(including 1 and the number itself). It appears in a number of remarkable identities, including relationships on theRiemann zeta functionand theEisenstein seriesofmodular forms. Divisor functions were studied byRamanujan, who gave a number of importantcongruencesandidentities; these are treated separately in the articleRamanujan's sum.
A related function is thedivisor summatory function, which, as the name implies, is a sum over the divisor function.
The sum of positive divisors function σ_z(n), for a real or complex number z, is defined as the sum of the z-th powers of the positive divisors of n. It can be expressed in sigma notation as

σ_z(n) = ∑_{d∣n} d^z,
whered∣n{\displaystyle {d\mid n}}is shorthand for "ddividesn".
The notationsd(n),ν(n) andτ(n) (for the GermanTeiler= divisors) are also used to denoteσ0(n), or thenumber-of-divisors function[1][2](OEIS:A000005). Whenzis 1, the function is called thesigma functionorsum-of-divisors function,[1][3]and the subscript is often omitted, soσ(n) is the same asσ1(n) (OEIS:A000203).
Thealiquot sums(n) ofnis the sum of theproper divisors(that is, the divisors excludingnitself,OEIS:A001065), and equalsσ1(n) −n; thealiquot sequenceofnis formed by repeatedly applying the aliquot sum function.
For example, σ0(12) is the number of the divisors of 12:

σ0(12) = 1⁰ + 2⁰ + 3⁰ + 4⁰ + 6⁰ + 12⁰ = 6,

while σ1(12) is the sum of all the divisors:

σ1(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28,

and the aliquot sum s(12) of proper divisors is:

s(12) = 1 + 2 + 3 + 4 + 6 = 16.
σ−1(n) is sometimes called the abundancy index of n, and we have:

σ−1(n) = σ1(n) / n.
The casesx= 2 to 5 are listed inOEIS:A001157throughOEIS:A001160,x= 6 to 24 are listed inOEIS:A013954throughOEIS:A013972.
For a prime number p,

σ0(p) = 2, σ0(pⁿ) = n + 1, and σ1(p) = p + 1,

because by definition, the factors of a prime number are 1 and itself. Also, where p_n# denotes the primorial,

σ0(p_n#) = 2ⁿ,

since n prime factors allow a sequence of binary selection (p_i or 1) from n terms for each proper divisor formed. However, these are not in general the smallest numbers whose number of divisors is a power of two; instead, the smallest such number may be obtained by multiplying together the first n Fermi–Dirac primes, prime powers whose exponent is a power of two.[4]
Clearly,1<σ0(n)<n{\displaystyle 1<\sigma _{0}(n)<n}for alln>2{\displaystyle n>2}, andσx(n)>n{\displaystyle \sigma _{x}(n)>n}for alln>1{\displaystyle n>1},x>0{\displaystyle x>0}.
The divisor function is multiplicative (since each divisor c of the product mn with gcd(m, n) = 1 distinctively corresponds to a divisor a of m and a divisor b of n), but not completely multiplicative: for instance, σ0(2)·σ0(2) = 4, whereas σ0(4) = 3.
The consequence of this is that, if we write

n = ∏_{i=1}^{r} p_i^{a_i},

where r = ω(n) is the number of distinct prime factors of n, p_i is the i-th prime factor, and a_i is the maximum power of p_i by which n is divisible, then we have:[5]

σ_x(n) = ∏_{i=1}^{r} ∑_{j=0}^{a_i} p_i^{jx} = ∏_{i=1}^{r} (1 + p_i^x + p_i^{2x} + ⋯ + p_i^{a_i x}),

which, when x ≠ 0, is equivalent to the useful formula:[5]

σ_x(n) = ∏_{i=1}^{r} (p_i^{(a_i + 1)x} − 1) / (p_i^x − 1).

When x = 0, σ0(n) is:[5]

σ0(n) = ∏_{i=1}^{r} (a_i + 1).
This result can be directly deduced from the fact that all divisors ofn{\displaystyle n}are uniquely determined by the distinct tuples(x1,x2,...,xi,...,xr){\displaystyle (x_{1},x_{2},...,x_{i},...,x_{r})}of integers with0≤xi≤ai{\displaystyle 0\leq x_{i}\leq a_{i}}(i.e.ai+1{\displaystyle a_{i}+1}independent choices for eachxi{\displaystyle x_{i}}).
For example, if n is 24, there are two prime factors (p1 is 2; p2 is 3); noting that 24 is the product of 2³×3¹, a1 is 3 and a2 is 1. Thus we can calculate σ0(24) as so:

σ0(24) = (a1 + 1)(a2 + 1) = (3 + 1)(1 + 1) = 8.
The eight divisors counted by this formula are 1, 2, 4, 8, 3, 6, 12, and 24.
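The general formula is equally direct in code (a sketch, assuming sympy's factorint for the prime factorization):

    from sympy import factorint

    def sigma(x: int, n: int) -> int:
        """sigma_x(n): multiply, over prime powers p^a exactly dividing n,
        the sums 1 + p^x + ... + p^(a*x)."""
        total = 1
        for p, a in factorint(n).items():
            total *= sum(p ** (j * x) for j in range(a + 1))
        return total

    print(sigma(0, 24), sigma(1, 12), sigma(1, 12) - 12)   # 8 28 16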
Euler proved the remarkable recurrence:[6][7][8]

σ1(n) = σ1(n − 1) + σ1(n − 2) − σ1(n − 5) − σ1(n − 7) + σ1(n − 12) + σ1(n − 15) − ⋯
      = ∑_{i∈ℕ} (−1)^{i+1} ( σ1(n − ½(3i² − i)) + σ1(n − ½(3i² + i)) ),

where we set σ1(0) = n if it occurs and σ1(x) = 0 for x < 0, and ½(3i² ∓ i) are consecutive pairs of generalized pentagonal numbers (OEIS: A001318, starting at offset 1). Indeed, Euler proved this by logarithmic differentiation of the identity in his pentagonal number theorem.
For a non-square integer,n, every divisor,d, ofnis paired with divisorn/dofnandσ0(n){\displaystyle \sigma _{0}(n)}is even; for a square integer, one divisor (namelyn{\displaystyle {\sqrt {n}}}) is not paired with a distinct divisor andσ0(n){\displaystyle \sigma _{0}(n)}is odd. Similarly, the numberσ1(n){\displaystyle \sigma _{1}(n)}is odd if and only ifnis a square or twice a square.[9]
We also note s(n) = σ(n) − n. Here s(n) denotes the sum of the proper divisors of n, that is, the divisors of n excluding n itself. This function is used to recognize perfect numbers, which are the n such that s(n) = n. If s(n) > n, then n is an abundant number, and if s(n) < n, then n is a deficient number.
Ifnis a power of 2,n=2k{\displaystyle n=2^{k}}, thenσ(n)=2⋅2k−1=2n−1{\displaystyle \sigma (n)=2\cdot 2^{k}-1=2n-1}ands(n)=n−1{\displaystyle s(n)=n-1}, which makesnalmost-perfect.
As an example, for two primes p, q with p < q, let

n = p·q.

Then

σ(n) = (p + 1)(q + 1) = n + 1 + (p + q),

and

φ(n) = (p − 1)(q − 1) = n + 1 − (p + q),

where φ(n) is Euler's totient function.

Then, the roots of

x² − ((σ(n) − φ(n))/2)·x + ((σ(n) + φ(n))/2 − 1) = x² − (p + q)·x + n = 0

express p and q in terms of σ(n) and φ(n) only, requiring no knowledge of n or p + q, as

p = (σ(n) − φ(n))/4 − √( ((σ(n) − φ(n))/4)² − ((σ(n) + φ(n))/2 − 1) ),
q = (σ(n) − φ(n))/4 + √( ((σ(n) − φ(n))/4)² − ((σ(n) + φ(n))/2 − 1) ).

Also, knowing n and either σ(n) or φ(n), or, alternatively, p + q and either σ(n) or φ(n), allows an easy recovery of p and q.
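A sketch of this recovery in Python, given n and φ(n) (math.isqrt provides the integer square root):

    from math import isqrt

    def recover_pq(n: int, phi: int) -> tuple:
        """Recover the primes p, q with n = p*q from n and phi = (p-1)*(q-1)."""
        s = n + 1 - phi                 # s = p + q
        d = isqrt(s * s - 4 * n)        # d = q - p
        p, q = (s - d) // 2, (s + d) // 2
        assert p * q == n
        return p, q

    print(recover_pq(3233, 3120))       # (53, 61)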
In 1984, Roger Heath-Brown proved that the equality

σ0(n) = σ0(n + 1)

is true for infinitely many values of n, see OEIS: A005237.
By definition: σ = Id ∗ 1. By Möbius inversion: Id = σ ∗ μ.
Two Dirichlet series involving the divisor function are:[10]

∑_{n=1}^{∞} σ_a(n) / nˢ = ζ(s) ζ(s − a),

where ζ is the Riemann zeta function. The series for d(n) = σ0(n) gives:[10]

∑_{n=1}^{∞} d(n) / nˢ = ζ(s)²,

and a Ramanujan identity[11]

∑_{n=1}^{∞} σ_a(n) σ_b(n) / nˢ = ζ(s) ζ(s − a) ζ(s − b) ζ(s − a − b) / ζ(2s − a − b),
which is a special case of theRankin–Selberg convolution.
A Lambert series involving the divisor function is:[12]

∑_{n=1}^{∞} qⁿ σ_a(n) = ∑_{n=1}^{∞} n^a qⁿ / (1 − qⁿ) = ∑_{n=1}^{∞} Li_{−a}(qⁿ),
for arbitrarycomplex|q| ≤ 1 anda(Li{\displaystyle \operatorname {Li} }is thepolylogarithm). This summation also appears as theFourier series of the Eisenstein seriesand theinvariants of the Weierstrass elliptic functions.
For k > 0, there is an explicit series representation with Ramanujan sums c_m(n) as:[13]

σ_k(n) = ζ(k + 1) n^k ∑_{m=1}^{∞} c_m(n) / m^{k+1}.

The computation of the first terms of c_m(n) shows its oscillations around the "average value" ζ(k + 1) n^k.
In little-o notation, the divisor function satisfies the inequality:[14][15]

for all ε > 0, d(n) = o(n^ε).

More precisely, Severin Wigert showed that:[15]

lim sup_{n→∞} log d(n) / (log n / log log n) = log 2.

On the other hand, since there are infinitely many prime numbers,[15]

lim inf_{n→∞} d(n) = 2.

In Big-O notation, Peter Gustav Lejeune Dirichlet showed that the average order of the divisor function satisfies:[16][17]

∑_{n≤x} d(n) = x log x + x(2γ − 1) + O(√x)  for all x ≥ 1,
whereγ{\displaystyle \gamma }isEuler's gamma constant. Improving the boundO(x){\displaystyle O({\sqrt {x}})}in this formula is known asDirichlet's divisor problem.
The behaviour of the sigma function is irregular. The asymptotic growth rate of the sigma function can be expressed by:[18]

lim sup_{n→∞} σ(n) / (n log log n) = e^γ,

where lim sup is the limit superior. This result is Grönwall's theorem, published in 1913 (Grönwall 1913). His proof uses Mertens' third theorem, which says that:

lim_{n→∞} (1 / log n) ∏_{p≤n} (1 − 1/p)⁻¹ = e^γ,

where p denotes a prime.
In 1915, Ramanujan proved that under the assumption of the Riemann hypothesis, Robin's inequality

σ(n) < e^γ n log log n
holds for all sufficiently largen(Ramanujan 1997). The largest known value that violates the inequality isn=5040. In 1984, Guy Robin proved that the inequality is true for alln> 5040if and only ifthe Riemann hypothesis is true (Robin 1984). This isRobin's theoremand the inequality became known after him. Robin furthermore showed that if the Riemann hypothesis is false then there are an infinite number of values ofnthat violate the inequality, and it is known that the smallest suchn> 5040 must besuperabundant(Akbary & Friggstad 2009). It has been shown that the inequality holds for large odd and square-free integers, and that the Riemann hypothesis is equivalent to the inequality just forndivisible by the fifth power of a prime (Choie et al. 2007).
Robin also proved, unconditionally, that the inequality

σ(n) < e^γ n log log n + 0.6483 n / log log n

holds for all n ≥ 3.
A related bound was given by Jeffrey Lagarias in 2002, who proved that the Riemann hypothesis is equivalent to the statement that

σ(n) < H_n + e^{H_n} ln H_n

for every natural number n > 1, where H_n is the n-th harmonic number (Lagarias 2002).
|
https://en.wikipedia.org/wiki/Divisor_function
|
Infutures studiesand thehistory of technology,accelerating changeis the observedexponentialnature of the rate oftechnological changein recent history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change.
Writing in 1904, Henry Brooks Adams outlined a "law of acceleration": progress, including military progress, is accelerating. As the coal output of the world doubles every ten years, so will the world output of bombs, both in force and number. The bomb passage follows the "revolutionary" discovery of radium, found in uranium ore, and states that power leaps from every atom. Resistance to the law of acceleration is futile, and progress might outpace the mind. "If science were to go on doubling or quadrupling its complexities every ten years, even mathematics would soon succumb. An average mind had succumbed already in 1850; it could no longer understand the problem in 1900." But Adams remains optimistic, because "bombs educate vigorously." Thus far in history, his bottom line states, the mind had successfully reacted and could keep doing so, but it "would need to jump."
In 1910, during the town planning conference of London,Daniel Burnhamnoted, "But it is not merely in the number of facts or sorts of knowledge that progress lies: it is still more in the geometric ratio of sophistication, in thegeometric wideningof the sphere of knowledge, which every year is taking in a larger percentage of people as time goes on."[1]And later on, "It is the argument with which I began, that a mighty change having come about in fifty years, and our pace of development having immensely accelerated, our sons and grandsons are going to demand and get results that would stagger us."[1]
In 1938,Buckminster Fullerintroduced the wordephemeralizationto describe the trends of "doing more with less" in chemistry, health and other areas ofindustrial development.[2]In 1946, Fuller published a chart of the discoveries of the chemical elements over time to highlight the development of accelerating acceleration in human knowledge acquisition.[3]
By mid-century, forArnold J. Toynbeeit was "not an article of faith" but "a datum of observation and experience" that history was accelerating, and "at an accelerating rate."[4]
In 1958,Stanislaw Ulamwrote in reference to a conversation withJohn von Neumann:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[5]
In a series of published articles from 1974 to 1979, and then in his 1988 bookMind Children, computer scientist and futuristHans MoravecgeneralizesMoore's lawto make predictions about the future ofartificial life. Moore's law describes anexponential growthpattern in the complexity of integrated semiconductor circuits. Moravec extends this to include technologies from long before the integrated circuit to future forms of technology. Moravec outlines a timeline and a scenario[6][7]in which robots will evolve into a new series of artificial species, starting around 2030–2040.[8]InRobot: Mere Machine to Transcendent Mind, published in 1998, Moravec further considers the implications of evolvingrobot intelligence, generalizing Moore's law to technologies predating theintegrated circuit, and also plotting the exponentially increasing computational power of the brains of animals in evolutionary history. Extrapolating these trends, he speculates about a coming "mind fire" of rapidly expandingsuperintelligencesimilar to theexplosion of intelligencepredicted by Vinge.
In his TV seriesConnections(1978)—and sequelsConnections²(1994) andConnections³(1997)—James Burkeexplores an "Alternative View of Change" (the subtitle of the series) that rejects the conventional linear and teleological view of historical progress. Burke contends that one cannot consider the development of any particular piece of the modern world in isolation. Rather, the entire gestalt of the modern world is the result of a web of interconnected events, each one consisting of a person or group acting for reasons of their own motivations (e.g., profit, curiosity, religious) with no concept of the final, modern result to which the actions of either them or their contemporaries would lead. The interplay of the results of these isolated events is what drives history and innovation, and is also the main focus of the series and its sequels.
Burke also explores three corollaries to his initial thesis. The first is that, if history is driven by individuals who act only on what they know at the time, and not according to any idea of where their actions will eventually lead, then predicting the future course of technological progress is merely conjecture. Therefore, if we are astonished by the connections Burke is able to weave among past events, then we will be equally surprised by where the events of today, including events we are not even aware of, will eventually lead.
The second and third corollaries are explored most in the introductory and concluding episodes, and they represent the downside of an interconnected history. If history progresses because of the synergistic interaction of past events and innovations, then as history does progress, the number of these events and innovations increases. This increase in possible connections causes the process of innovation to not only continue, but to accelerate. Burke poses the question of what happens when this rate of innovation, or more importantly change itself, becomes too much for the average person to handle, and what this means for individual power, liberty, and privacy.[9]
In his bookMindsteps to the Cosmos(HarperCollins, August 1983),Gerald S. Hawkinselucidated his notion ofmindsteps, dramatic and irreversible changes toparadigmsor world views. He identified five distinct mindsteps in human history, and the technology that accompanied these "new world views": the invention of imagery, writing, mathematics, printing, the telescope, rocket, radio, TV, computer... "Each one takes the collective mind closer to reality, one stage further along in its understanding of the relation of humans to the cosmos." He noted: "The waiting period between the mindsteps is getting shorter. One can't help noticing the acceleration." Hawkins' empirical 'mindstep equation' quantified this, and gave dates for (to him) future mindsteps. The date of the next mindstep (5; the series begins at 0) he cited as 2021, with two further, successively closer mindsteps in 2045 and 2051, until the limit of the series in 2053. His speculations ventured beyond the technological:
The mindsteps... appear to have certain things in common—a new and unfolding human perspective, related inventions in the area of memes and communications, and a long formulative waiting period before the next mindstep comes along. None of the mindsteps can be said to have been truly anticipated, and most were resisted at the early stages. In looking to the future we may equally be caught unawares. We may have to grapple with the presently inconceivable, with mind-stretching discoveries and concepts.
The mathematicianVernor Vingepopularized his ideas about exponentially accelerating technological change in the science fiction novelMarooned in Realtime(1986), set in a world of rapidly accelerating progress leading to the emergence of more and more sophisticated technologies separated by shorter and shorter time intervals, until a point beyond human comprehension is reached. His subsequentHugo Award-winning novelA Fire Upon the Deep(1992) starts with an imaginative description of the evolution of asuperintelligencepassing through exponentially accelerating developmental stages ending in atranscendent, almostomnipotentpower unfathomable by mere humans. His already mentioned influential 1993 paper on thetechnological singularitycompactly summarizes the basic ideas.
In his 1999 bookThe Age of Spiritual Machines,Ray Kurzweilproposed "The Law of Accelerating Returns", according to which the rate of change in a wide variety of evolutionary systems (including but not limited to the growth of technologies) tends to increase exponentially.[10]He gave further focus to this issue in a 2001 essay entitled "The Law of Accelerating Returns".[11]In it, Kurzweil, after Moravec, argued for extending Moore's Law to describeexponential growthof diverse forms oftechnologicalprogress. Whenever a technology approaches some kind of a barrier, according to Kurzweil, a new technology will be invented to allow us to cross that barrier. He cites numerous past examples of this to substantiate his assertions. He predicts that suchparadigm shiftshave and will continue to become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". He believes the Law of Accelerating Returns implies that atechnological singularitywill occur before the end of the 21st century, around 2045. The essay begins:
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense 'intuitive linear' view. So we won't experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today's rate). The 'returns,' such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to the Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
The Law of Accelerating Returns has in many ways altered public perception ofMoore's law.[citation needed]It is a common (but mistaken) belief that Moore's law makes predictions regarding all forms of technology,[citation needed]when really it only concerns semiconductor circuits. Manyfuturistsstill use the term "Moore's law" to describe ideas like those put forth by Moravec, Kurzweil and others.
According to Kurzweil, since the beginning ofevolution, more complex life forms have been evolving exponentially faster, with shorter and shorter intervals between the emergence of radically new life forms, such as human beings, who have the capacity to engineer (i.e. intentionally design with efficiency) a new trait which replaces relatively blind evolutionary mechanisms of selection for efficiency. By extension, the rate of technical progress amongst humans has also been exponentially increasing: as we discover more effective ways to do things, we also discover more effective ways to learn, e.g.language, numbers, written language,philosophy,scientific method, instruments of observation, tallying devices, mechanical calculators, computers; each of these major advances in our ability to account for information occurs increasingly close to the previous. Already within the past sixty years, life in the industrialized world has changed almost beyond recognition except for living memories from the first half of the 20th century. This pattern will culminate in unimaginable technological progress in the 21st century, leading to a singularity. Kurzweil elaborates on his views in his booksThe Age of Spiritual MachinesandThe Singularity Is Near.
In the natural sciences, processes that accelerate exponentially in their initial stages typically enter a saturation phase. The observation of accelerating growth over some period therefore does not imply that the process will continue to accelerate indefinitely; on the contrary, in many cases it signals an approaching plateau. By this analogy, the observed acceleration of scientific and technological progress may, after some time (which in physical processes is, as a rule, short), give way to a slowdown and a complete stop. Despite the possible termination or attenuation of this acceleration in the foreseeable future, progress itself, and as a result social transformations, would not stop or even slow down: it would continue at the achieved (possibly enormous) speed, which would then remain constant.[12]
Accelerating change may not be restricted to theAnthropoceneEpoch,[13]but a general and predictable developmental feature of the universe.[14]The physical processes that generate an acceleration such as Moore's law are positive feedback loops giving rise to exponential or superexponential technological change.[15]These dynamics lead to increasingly efficient and dense configurations of Space, Time, Energy, and Matter (STEM efficiency and density, or STEM "compression").[16]At the physical limit, this developmental process of accelerating change leads to black hole density organizations, a conclusion also reached by studies of the ultimate physical limits of computation in the universe.[17][18]
Applying this vision to thesearch for extraterrestrial intelligenceleads to the idea that advanced intelligent life reconfigures itself into a black hole. Such advanced life forms would be interested in inner space rather than outer space and interstellar expansion.[19]They would thus in some way transcend reality and not be observable, which would be a solution toFermi's paradoxcalled the "transcension hypothesis".[20][14][16]Another solution is that the black holes we observe could actually be interpreted as intelligent super-civilizations feeding on stars, or "stellivores".[21][22]These dynamics of evolution and development are an invitation to study the universe itself as evolving and developing.[23]If the universe is a kind of superorganism, it may possibly tend to reproduce, naturally[24]or artificially, with intelligent life playing a role.[25][26][27][28][29]
Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from thePaleolithicera until theNeolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, arguesRobin Hanson, then one would expect the economy to double at least quarterly and possibly on a weekly basis.[30]
In his 1981 bookCritical Path, futurist and inventorR. Buckminster Fullerestimated that if we took all the knowledge that mankind had accumulated and transmitted by the year One CE as equal to one unit of information, it probably took about 1500 years (or until the sixteenth century) for that amount of knowledge to double. The next doubling of knowledge from two to four 'knowledge units' took only 250 years, until about 1750 CE. By 1900, one hundred and fifty years later, knowledge had doubled again to 8 units. The observed speed at which information doubled was getting faster and faster.[31]In modern times, exponential knowledge progressions therefore change at an ever-increasing rate. Depending on the progression, this tends to lead toward explosive growth at some point. A simple exponential curve that represents this accelerating change phenomenon could be modeled by adoubling function. This fast rate of knowledge doubling leads up to the basic proposedhypothesisof thetechnological singularity: the rate at which technology progression surpasses human biological evolution.
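Fuller's figures translate into a simple back-of-the-envelope calculation (an illustration of the doubling-time claim above, not Fuller's own method):

```python
import math

# Fuller's illustrative milestones: (year, accumulated "knowledge units").
milestones = [(1, 1), (1500, 2), (1750, 4), (1900, 8)]

# Doubling time implied by each interval, assuming exponential growth.
for (y0, k0), (y1, k1) in zip(milestones, milestones[1:]):
    doublings = math.log2(k1 / k0)
    print(f"{y0}-{y1}: doubling time ≈ {(y1 - y0) / doublings:.0f} years")
# Output: ≈ 1499 years, then 250 years, then 150 years.
```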
BothTheodore Modisand Jonathan Huebner have argued—each from different perspectives—that the rate of technological innovation has not only ceased to rise, but is actually now declining.[32]
|
https://en.wikipedia.org/wiki/Accelerating_change
|
Beyond CMOSrefers to possible futuredigital logictechnologies beyond thescaling limitsofCMOStechnology,[1][2][3][4]which limit device density and speed due to heating effects.[5]
Beyond CMOSis the name of one of the 7 focus groups inITRS 2.0(2013) and in its successor, theInternational Roadmap for Devices and Systems.
CPUs using CMOS were released from 1986 (e.g. the 12 MHzIntel 80386). As CMOS transistor dimensions shrank, clock speeds also increased. Since about 2004, CMOS CPU clock speeds have leveled off at about 3.5 GHz.
CMOS device sizes continue to shrink – see Intel'sprocess–architecture–optimization model(and the oldertick–tock model) andITRS.
It is not yet clear if CMOS transistors will still work below 3 nm.[4]See3 nanometer.
Around 2010, theNanoelectronics Research Initiative(NRI) studied various circuits in various candidate technologies.[2]
Nikonov benchmarked (theoretically) many technologies in 2012,[2]and updated it in 2014.[8]The 2014 benchmarking included 11 electronic, 8spintronic, 3orbitronic, 2ferroelectric, and 1straintronicstechnology.[8]
The 2015ITRS 2.0report included a detailed chapter onBeyond CMOS,[9]covering RAM and logic gates.
Superconducting computingincludes several beyond-CMOS technologies that use superconducting devices, namelyJosephson junctions, for electronic signals processing and computing. One variant calledrapid single-flux quantum(RSFQ) logic was considered promising by the NSA in a 2005 technology survey despite the drawback that available superconductors require cryogenic temperatures. More energy-efficient superconducting logic variants have been developed since 2005 and are being considered for use in large scale computing.[12][13]
|
https://en.wikipedia.org/wiki/Beyond_CMOS
|
Ephemeralization, a term coined byR. Buckminster Fullerin 1938, is the ability of technological advancement to do "more and more with less and less until eventually you can do everything with nothing," that is, an accelerating increase in the efficiency of achieving the same or more output (products, services, information, etc.) while requiring less input (effort, time, materials, resources, etc.).[1]The application of materials and technology in moderncell phones, compared to older computers and phones, exemplifies the concept of ephemeralization, whereby technological advancement can drive efficiency in the form of fewermaterialsbeing used to provide greaterutility(more functionality with less resource use). Fuller's vision was that ephemeralization, through technological progress,[2]could result in ever-increasing standards of living for an ever-growing population. The concept has been embraced by those who argue againstMalthusianphilosophy.[1]
Fuller usesHenry Ford's assembly lineas an example of how ephemeralization can continuously lead to better products at lower cost, with no upper bound on productivity. Fuller saw ephemeralization as an inevitable trend in human development.[1]
Francis Heylighen[3]andAlvin Toffler[4]have written that ephemeralization, though it may increase our power to solve physical problems, can make non-physical problems worse. According to Heylighen and Toffler, increasing system complexity andinformation overloadmake it difficult and stressful for the people who must control the ephemeralized systems. This might negate the advantages of ephemeralization.[3][4]
The solution proposed by Heylighen[5]is the integration of human intelligence,computer intelligence, and coordination mechanisms that direct an issue to the cognitive resource (document, person, or computer program) most fit to address it. This requires a distributed,self-organizing system, formed by all individuals, computers and the communication links that connect them. The self-organization can be achieved by algorithms. According to Heylighen, the effect is to superpose the contributions of many different human and computer agents into a collective map that may link the cognitive and physical resources relatively efficiently. The resulting information system could react relatively rapidly andadaptivelyto requests for guidance or changes in the situation.[5]
In Heylighen's view, the system could frequently be fed with new information from its myriad human users and computer agents, which it would take into account to offer the human users a list of the best possible approaches to achieve tasks.[5]Heylighen believes near-optimization could be achieved both at the level of the individual who makes the request, and at the level of society which attempts to minimize the conflicts between the desires of its different members and to aim at long term, global progress while as much as possible protecting individual liberty and privacy.[5]
|
https://en.wikipedia.org/wiki/Ephemeralization
|
Eroom's lawis the observation that drug discovery is becoming slower and more expensive over time, despite improvements in technology (such ashigh-throughput screening,biotechnology,combinatorial chemistry, and computationaldrug design), a trend first observed in the 1980s. Theinflation-adjusted cost of developing a new drug roughly doubles every nine years.[1]In order to highlight the contrast with the exponential advancements of other forms of technology (such astransistors) over time, the name given to the observation isMoore's lawspelled backwards.[2]The term was coined byJack Scannelland colleagues in 2012 inNature Reviews Drug Discovery.
The article that proposed the law attributes it to four main causes: the "better than the Beatles" problem; the "cautious regulator" problem; the "throw money at it" tendency; and the "basic research–brute force" bias.[3]
While some suspect a lack of "low-hanging fruit" as a significant contribution to Eroom's law, this may be less important than the four main causes, as there are still many decades' worth of new potential drug targets relative to the number of targets which already have been exploited, even if the industry exploits 4 to 5 new targets per year.[3]There is also space to explore selectively non-selective drugs (or "dirty drugs") that interact with several molecular targets, and which may be particularly effective as central nervous system (CNS) therapeutics, even though few of them have been introduced in the last few decades.[5]
As of 2018, academic spinouts and small biotech startups have surpassed Big Pharma with respect to the number of best-selling drugs approved, with 24/30 (80%) originating outside of Big Pharma.[6]
An alternative hypothesis is that the pharmaceutical industry has become cartelized and formed a bureaucratic oligopoly, resulting in reduced innovation and efficiency. As of 2022, approximately 20 Big Pharma companies control the majority of global branded drug sales (on the scale of ±$1 trillion annually). Critics point out that Big Pharma has reduced investment in R&D while spending roughly twice as much on marketing, and has focused on raising drug prices instead of on risk-taking.[7]
|
https://en.wikipedia.org/wiki/Eroom%27s_law
|
Huang's lawis the observation in computer science and engineering that advancements ingraphics processing units(GPUs) are growing at a rate much faster than with traditionalcentral processing units(CPUs). The observation is in contrast toMoore's lawthat predicted the number oftransistorsin a denseintegrated circuit(IC) doubles about every two years.[1]Huang's law states that the performance of GPUs will more than double every two years.[2]The hypothesis is subject to questions about its validity.
The observation was made byJensen Huang, the chief executive officer ofNvidia, at its 2018 GPU Technology Conference (GTC) held inSan Jose, California.[3]He observed that Nvidia's GPUs were "25 times faster than five years ago," whereas Moore's law would have predicted only a ten-fold increase.[2]As microchip components became smaller, it became harder for chip advancement to keep pace with Moore's law.[4]
In 2006, Nvidia's GPUs had a 4x performance advantage over CPUs. In 2018, an Nvidia GPU was 20 times faster than a comparable CPU node, the GPUs having improved roughly 1.7x each year. Moore's law would predict a doubling every two years; Nvidia's GPU performance instead more than tripled every two years, fulfilling Huang's law.[5]
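The quoted figures reduce to simple growth-rate arithmetic (illustrative only, not an official formulation of either law):

```python
def annual_factor(total_gain, years):
    """Per-year improvement factor implied by a total gain over some years."""
    return total_gain ** (1 / years)

print(annual_factor(25, 5))  # ≈ 1.90x/year ("25 times faster than five years ago")
print(annual_factor(2, 2))   # ≈ 1.41x/year (Moore's law: doubling every two years)
print(1.7 ** 2)              # ≈ 2.89x per two years (the ~1.7x/year GPU trend)
```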
Huang's law claims that a synergy betweenhardware,software, andartificial intelligencemakes the new 'law' possible.[A]"The innovation isn't just about chips," Huang said. "It's about the entire stack." He said thatgraphics processorsespecially are important to a new paradigm.[3]Elimination ofbottleneckscan speed up the process and create advantages in getting to the goal. "Nvidia is a one trick pony," Huang has said.[7]According to Huang: "Accelerated computing is liberating, ... Let's say you have an airplane that has to deliver a package. It takes 12 hours to deliver it. Instead of making the plane go faster, concentrate on how to deliver the package faster, look at 3D printing at the destination." The object "… is to deliver the goal faster."[7]
For artificial intelligence tasks, Huang said that training the convolutional networkAlexNettook six days on two of Nvidia's GTX 580 processors to complete the training process but only 18 minutes on a modern DGX-2 AI server, resulting in a speed-up factor of 500. Compared to Moore's law, which focuses purely on CPU transistors, Huang's law describes a combination of advances in architecture, interconnects, memory technology, and algorithms.[2][6]
Bharath Ramsundar wrote thatdeep learningis being coupled with "[i]mprovements in custom architecture". For example, machine learning systems have been implemented in theblockchainworld, whereBitmainassaulted "manycryptocurrenciesby designing custom mining ASICs (application-specific integrated circuits)", something that had been thought infeasible. "Nvidia's grand achievement however is in making the case that these improvement in architectures are not merely isolated victories for specific applications but perhaps broadly applicable to all of computer science." Ramsundar suggested that broad harnessing of GPUs and the GPU stack (cf. theCPU stack) can deliver "dramatic growth in deep learning architecture". The "magic" of Huang's law's promise is that, as nascent deep-learning-powered software becomes more widely available, improvements from GPU scaling, and from architectural improvements more generally, will concretely improve the "performance and behavior of modern software stacks."[8]
There has been criticism. JournalistJoel Hruska, writing inExtremeTechin 2020, said "there is no such thing as Huang's Law", calling it an "illusion" that rests on the gains made possible by Moore's law, and arguing that it is too soon to determine that a law exists.[9]The research nonprofit Epoch has found that, between 2006 and 2021, GPU price performance (in terms of FLOPS/$) has tended to double approximately every 2.5 years, much slower than predicted by Huang's law.[10]
|
https://en.wikipedia.org/wiki/Huang%27s_law
|
Koomey's lawdescribes a trend in thehistory of computing hardware: for about a half-century, the number of computations perjouleof energy dissipated doubled about every 1.57 years. ProfessorJonathan Koomeydescribed the trend in a 2010 paper in which he wrote that "at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half."[1]
This trend had been remarkably stable since the 1950s (R2of over 98%). But in 2011, Koomey re-examined this data[2]and found that after 2000, the doubling slowed to about once every 2.6 years. This is related to the slowing[3]ofMoore's law, the ability to build smaller transistors; and the end around 2005 ofDennard scaling, the ability to build smaller transistors with constantpower density.
"The difference between these two growth rates is substantial. A doubling every year and a half results in a 100-fold increase in efficiency every decade. A doubling every two and a half years yields just a 16-fold increase", Koomey wrote.[4]
The implications of Koomey's law are that the amount of battery needed for a fixed computing load will fall by a factor of 100 every decade.[5]As computing devices become smaller and more mobile, this trend may be even more important than improvements in raw processing power for many applications. Furthermore, energy costs are becoming an increasing factor in the economics of data centers, further increasing the importance of Koomey's law.
The slowing of Koomey's law has implications for energy use in information and communications technology. However, because computers do not run at peak output continuously, the effect of this slowing may not be seen for a decade or more.[6]Koomey writes that "as with any exponential trend, this one will eventually end...in a decade or so, energy use will once again be dominated by the power consumed when a computer is active. And that active power will still be hostage to the physics behind the slowdown in Moore's Law."
Koomey was the lead author of the article inIEEE Annals of the History of Computingthat first documented the trend.[1]At about the same time, Koomey published a short piece about it inIEEE Spectrum.[7]
It was further discussed inMIT Technology Review,[8]and in a post byErik Brynjolfssonon the "Economics of Information" blog,[5]and atThe Economistonline.[9]
The trend was previously known fordigital signal processors, and it was then named "Gene's law". The name came from Gene Frantz, an electrical engineer atTexas Instruments. Frantz had documented that power dissipation in DSPs had been reduced by half every 18 months, over a 25-year period.[10][11]
Recent studies indicate that Koomey's law has slowed to a doubling every 2.6 years.[2]This rate is a statistical average over many technologies and many years, but there are exceptions. For example, in 2020 AMD reported that, since 2014, the company had managed to improve the efficiency of its mobile processors by a factor of 31.7, a doubling rate of 1.2 years.[12]In June 2020, Koomey responded to the report, writing, "I have reviewed the data and can report that AMD exceeded the 25×20 goal it set in 2014 through improved design, superior optimization, and a laser-like focus onenergy efficiency."[12]
By thesecond law of thermodynamicsandLandauer's principle, irreversible computing cannot continue to be made more energy efficient forever. Assuming that the energy efficiency of computing continues to double every 2.6 years, and taking the most efficient supercomputer as of 2022,[13]the Landauer bound will be reached around 2080. Thus, after this point, Koomey's law can no longer hold. Landauer's principle, however, does not constrain the efficiency ofreversible computing. This, in conjunction with otherBeyond CMOScomputing technologies, could permit continued advances in efficiency.
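The Landauer bound itself is a one-line calculation. In the sketch below, the starting efficiency of 10^12 irreversible bit operations per joule, and hence the resulting year count, are hypothetical figures chosen for illustration, not measured values:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer's principle: minimum energy to erase one bit at temperature T.
E_bit = k_B * T * math.log(2)
print(E_bit)  # ≈ 2.87e-21 J, i.e. ≈ 3.5e20 bit erasures per joule

ops_per_joule_today = 1e12  # hypothetical starting efficiency (assumption)
doublings_left = math.log2((1 / E_bit) / ops_per_joule_today)
print(2.6 * doublings_left)  # ≈ 74 years to the bound, under these assumptions
```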
|
https://en.wikipedia.org/wiki/Koomey%27s_law
|
Thislist ofeponymouslawsprovides links to articles onlaws,principles,adages, and other succinct observations or predictions named after a person. In some cases the person named has coined the law – such asParkinson's law. In others, the work or publications of the individual have led to the law being so named – as is the case withMoore's law. There are also laws ascribed to individuals by others, such asMurphy's law; or given eponymous names despite the absence of the named person. Named laws range from significant scientific laws such as Newton's laws of motion, to humorous examples such as Murphy's law.
|
https://en.wikipedia.org/wiki/List_of_eponymous_laws
|
This is alist of "laws"applied to various disciplines. These are oftenadagesor predictions with theappellation'Law', although they do not apply in thelegalsense, cannot be scientifically tested, or are intended only as rough descriptions (rather than applying in each case). These 'laws' are sometimes calledrules of thumb.
SeeList of legal topicsfor 'laws' in thelegalsense.
SeeList of scientific lawsforfalsifiablelaws that are said to apply universally and literally.
See alsoGlossary of sound laws in the Indo-European languages
|
https://en.wikipedia.org/wiki/List_of_laws#Technology
|
The first chips that could be consideredmicroprocessorswere designed and manufactured in the late 1960s and early 1970s, including the MP944 used in theGrumman F-14CADC.[1]Intel's 4004 of 1971 is widely regarded as the first commercial microprocessor.[2]
Designers predominantly usedMOSFETtransistors withpMOS logicin the early 1970s, switching tonMOS logicafter the mid-1970s. nMOS had the advantage that it could run on a single voltage, typically +5V, which simplified the power supply requirements and allowed it to be easily interfaced with the wide variety of +5Vtransistor-transistor logic(TTL) devices. nMOS had the disadvantage that it was more susceptible to electronic noise generated by slight impurities in the underlying silicon material, and it was not until the mid-1970s that these, sodium in particular, were successfully removed to the required levels. At that time, around 1975, nMOS quickly took over the market.[3]
This corresponded with the introduction of newsemiconductor maskingsystems, notably theMicralignsystem fromPerkin-Elmer. Micralign projected an image of the mask onto the silicon wafer, never touching it directly, which eliminated the previous problems when the mask would be lifted off the surface and take away some of thephotoresistalong with it, ruining the chips on that portion of the wafer.[4]Because the share of flawed chips fell from about 70% to about 10%, the cost of complex designs like early microprocessors fell by a similar factor. Processors produced on contact-aligner lines had cost on the order of $300 in single-unit quantities; theMOS 6502, designed specifically to take advantage of these improvements, cost only $25.[5]
This period also saw considerable experimentation with variousword lengths. Early on,4-bitprocessors were common, like the Intel 4004, simply because making a wider word length could not be accomplished cost-effectively in the room available on the small wafers of the era, especially when the majority would be defective. As yields improved, wafer sizes grew, and feature size continued to be reduced, more complex8-bitdesigns emerged like theIntel 8080and 6502.16-bitprocessors emerged early but were expensive; by the decade's end, low-cost 16-bit designs like theZilog Z8000were becoming common. Some unusual word lengths were also produced, including12-bitand 20-bit, often matching a design that had previously been implemented in a multi-chip format in aminicomputer. These had largely disappeared by the end of the decade as minicomputers moved to32-bitformats.
AsMoore's Lawcontinued to drive the industry towards more complex chip designs, the expected widespread move from 8-bit designs of the 1970s to 16-bit designs almost didn't occur; instead, new32-bitdesigns like theMotorola 68000andNational Semiconductor NS32000emerged that offered far more performance. The only widespread use of 16-bit systems was in theIBM PC, which had selected theIntel 8088in 1979 before the new designs had matured.
Another change was the move toCMOSgates as the primary method of building complex CPUs. CMOS had been available since the early 1970s;RCAintroduced theCOSMACprocessor using CMOS in 1975.[43]Whereas earlier systems used a singletransistoras the basis for each "gate", CMOS used a two-sided design, essentially making it twice as expensive to build. Its advantage was that its logic was not based on the voltage of a transistor compared to the silicon substrate, but thedifferencein voltages between the two sides, which was detectable at much lower power levels.[citation needed]As processor complexity continued to grow, power dissipation had become a significant concern and chips were prone to overheating; CMOS greatly reduced this problem and quickly took over the market.[44]This was aided by the uptake of CMOS by Japanese firms while US firms remained on nMOS, giving the Japanese industry a major advance during the 1980s.[45]
Semiconductor fabrication techniques continued to improve throughout. The Micralign, which had "created the modern IC industry", was obsolete by the early 1980s. They were replaced by the newsteppers, which used high magnifications and extremely powerful light sources to allow a large mask to be copied onto the wafer at ever-smaller sizes. This technology allowed the industry to break below the former 1 micron limit.
Keyhome computersin the early part of the decade predominantly used processors developed in the 1970s. Versions of the 6502, first released in 1975, powered theCommodore 64,Apple II,BBC Micro, andAtari 8-bit computers. The 8-bitZilog Z80(1976) is at the core of theZX Spectrum,MSXsystems and many others. The 8086-based IBM PC, launched in 1981, started the move to 16-bit, but was soon passed by the 68000-based 16/32-bitMacintosh, then theAtari STandAmiga. IBM PC compatibles moved to 32-bit with the introduction of theIntel 80386in late 1985, although 386-based systems remained expensive at the time.
In addition to ever-growing word lengths, microprocessors began to incorporate functional units that had previously been optional external parts. By the middle of the decade,memory management units(MMUs) were becoming commonplace, first appearing on designs like theIntel 80286andMotorola 68030. By the end of the decade,floating point units(FPUs) were being added, first appearing on 1989'sIntel 486and followed the next year by theMotorola 68040.
Another change that began during the 1980s involved overall design philosophy with the emergence of thereduced instruction set computer, or RISC. Although the concept was first developed by IBM in the 1970s, the company did not introduce powerful systems based on it, largely for fear of cannibalizing their sales of largermainframesystems. Market introduction was driven by smaller companies likeMIPS Technologies,SPARCandARM. These companies did not have access to high-end fabrication like Intel and Motorola, but were able to introduce chips that were highly competitive with those companies with a fraction of the complexity. By the end of the decade, every major vendor was introducing a RISC design of their own, like theIBM POWER,Intel i860andMotorola 88000.
The32-bitmicroprocessor dominated the consumer market in the 1990s. Processor clock speeds increased more than tenfold between 1990 and 1999, and64-bitprocessors began to emerge later in the decade. In the 1990s, microprocessors no longer used the same clock speed for the processor and theRAM. Processors began to have afront-side bus(FSB) clock speed used in communication with RAM and other components. Typically, the processor itself ran at a clock speed that was a multiple of the FSB clock speed. Intel's Pentium III, for example, had an internal clock speed of 450–600 MHz and an FSB speed of 100–133 MHz.
64-bitprocessors became mainstream in the 2000s. Microprocessor clock speeds reached a ceiling because of theheat dissipationbarrier[citation needed]. Instead of implementing expensive and impractical cooling systems, manufacturers turned toparallel computingin the form of themulti-core processor.Overclockinghad its roots in the 1990s, but came into its own in the 2000s. Off-the-shelf cooling systems designed for overclocked processors became common, and thegaming PChad its advent as well. Over the decade, transistor counts increased by about an order of magnitude, a trend continued from previous decades. Process sizes decreased about fourfold, from 180 nm to 45 nm.
A newer trend is themulti-chip modulemade of severalchiplets: multiple monolithic dies in a single package. This allows higher integration, using several smaller chips that are easier to manufacture.
|
https://en.wikipedia.org/wiki/Microprocessor_chronology
|
Inmachine learning, aneural scaling lawis an empiricalscaling lawthat describes howneural networkperformance changes as key factors are scaled up or down. These factors typically include the number of parameters,training datasetsize,[1][2]and training cost.
In general, adeep learningmodel can be characterized by four parameters: model size, training dataset size, training cost, and the post-training error rate (e.g., the test set error rate). Each of these variables can be defined as a real number, usually written asN,D,C,L{\displaystyle N,D,C,L}(respectively: parameter count, dataset size, computing cost, andloss).
A neural scaling law is a theoretical or empiricalstatistical lawbetween these parameters. There are also other parameters with other scaling laws.
In most cases, the model's size is simply the number of parameters. However, one complication arises with the use of sparse models, such asmixture-of-expert models.[3]With sparse models, during inference, only a fraction of their parameters are used. In comparison, most other kinds of neural networks, such astransformermodels, always use all their parameters during inference.
The size of the training dataset is usually quantified by the number of data points within it. Larger training datasets are typically preferred, as they provide a richer and more diverse source of information from which the model can learn. This can lead to improved generalization performance when the model is applied to new, unseen data.[4]However, increasing the size of the training dataset also increases the computational resources and time required for model training.
With the "pretrain, then finetune" method used for mostlarge language models, there are two kinds of training dataset: thepretrainingdataset and thefinetuningdataset. Their sizes have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset.[5]
In some cases, a small amount of high quality data suffices for finetuning, and more data does not necessarily improve performance.[5]
Training cost is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required). The cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, andparallel computingon specialized hardware such asGPUsorTPUs.
The cost of training a neural network model is a function of several factors, including model size, training dataset size, the training algorithmcomplexity, and the computational resources available.[4]In particular, doubling the training dataset size does not necessarily double the cost of training, because one may train the model for several times over the same dataset (each being an "epoch").
The performance of a neural network model is evaluated based on its ability to accurately predict the output given some input data. Common metrics for evaluating model performance include accuracy and precision, among others.[4]
Performance can be improved by using more data, larger models, different training algorithms,regularizingthe model to preventoverfitting, and early stopping using a validation set.
When the performance is a number bounded within the range of[0,1]{\displaystyle [0,1]}, such as accuracy or precision, it often scales as asigmoid functionof cost.
The 2017 paper[2]is a common reference point for neural scaling laws fitted by statistical analysis on experimental data. Previous works before the 2000s, as cited in the paper, were either theoretical or orders of magnitude smaller in scale. Whereas previous works generally found the loss to scale likeL∝D−α{\displaystyle L\propto D^{-\alpha }}, withα∈{0.5,1,2}{\displaystyle \alpha \in \{0.5,1,2\}}, the paper found thatα∈[0.07,0.35]{\displaystyle \alpha \in [0.07,0.35]}.
Of the factors they varied, only the task can change the exponentα{\displaystyle \alpha }. Changing the architecture, optimizers, regularizers, and loss functions would only change the proportionality factor, not the exponent. For example, for the same task, one architecture might haveL=1000D−0.3{\displaystyle L=1000D^{-0.3}}while another might haveL=500D−0.3{\displaystyle L=500D^{-0.3}}. They also found that for a given architecture, the number of parameters necessary to reach lowest levels of loss, given a fixed dataset size, grows likeN∝Dβ{\displaystyle N\propto D^{\beta }}for another exponentβ{\displaystyle \beta }.
They studied machine translation with LSTM (α∼0.13{\displaystyle \alpha \sim 0.13}), generative language modelling with LSTM (α∈[0.06,0.09],β≈0.7{\displaystyle \alpha \in [0.06,0.09],\beta \approx 0.7}), ImageNet classification with ResNet (α∈[0.3,0.5],β≈0.6{\displaystyle \alpha \in [0.3,0.5],\beta \approx 0.6}), and speech recognition with two hybrid (LSTMs complemented by either CNNs or an attention decoder) architectures (α≈0.3{\displaystyle \alpha \approx 0.3}).
A 2020 analysis[10]studied statistical relations betweenC,N,D,L{\displaystyle C,N,D,L}over a wide range of values and found similar scaling laws, over the range ofN∈[103,109]{\displaystyle N\in [10^{3},10^{9}]},C∈[1012,1021]{\displaystyle C\in [10^{12},10^{21}]}, and over multiple modalities (text, video, image, text to image, etc.).[10]
In particular, the scaling laws it found are (Table 1 of[10]):
The scaling law ofL=L0+(C0/C)0.048{\displaystyle L=L_{0}+(C_{0}/C)^{0.048}}was confirmed during the training ofGPT-3(Figure 3.1[11]).
One particular scaling law ("Chinchilla scaling") states that, for alarge language model(LLM) autoregressively trained for one epoch, with a cosinelearning rateschedule, we have:[13]{C=C0NDL=ANα+BDβ+L0{\displaystyle {\begin{cases}C=C_{0}ND\\L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}\end{cases}}}}where the variables are the training costC{\displaystyle C}(in FLOPs), the parameter countN{\displaystyle N}, the training dataset sizeD{\displaystyle D}(in tokens), and the average per-token lossL{\displaystyle L}on the test set,
and the statistical parameters are C0 = 6 (training costs about 6 FLOPs per parameter per token), L0 = 1.69, α = 0.34, β = 0.28, A = 406.4, and B = 410.7.
However, Besiroglu et al.[15]claim that the statistical estimation is slightly off, and that the values should beα=0.35,β=0.37,A=482.01,B=2085.43,L0=1.82{\displaystyle \alpha =0.35,\beta =0.37,A=482.01,B=2085.43,L_{0}=1.82}.
The statistical laws were fitted over experimental data withN∈[7×107,1.6×1010],D∈[5×109,5×1011],C∈[1018,1024]{\displaystyle N\in [7\times 10^{7},1.6\times 10^{10}],D\in [5\times 10^{9},5\times 10^{11}],C\in [10^{18},10^{24}]}.
Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additionaloptimization objectiveallows us to solve for all four variables. In particular, for any fixedC{\displaystyle C}, we can uniquely solve for all 4 variables that minimizesL{\displaystyle L}. This provides us with the optimalDopt(C),Nopt(C){\displaystyle D_{opt}(C),N_{opt}(C)}for any fixedC{\displaystyle C}:Nopt(C)=G(C6)a,Dopt(C)=G−1(C6)b,whereG=(αAβB)1α+β,a=βα+β, andb=αα+β.{\displaystyle N_{opt}(C)=G\left({\frac {C}{6}}\right)^{a},\quad D_{opt}(C)=G^{-1}\left({\frac {C}{6}}\right)^{b},\quad {\text{ where }}\quad G=\left({\frac {\alpha A}{\beta B}}\right)^{\frac {1}{\alpha +\beta }},\quad a={\frac {\beta }{\alpha +\beta }}{\text{, and }}b={\frac {\alpha }{\alpha +\beta }}{\text{. }}}Plugging in the numerical values, we obtain the "Chinchilla efficient" model size and training dataset size, as well as the test loss achievable:{Nopt(C)=0.6C0.45Dopt(C)=0.3C0.55Lopt(C)=1070C−0.154+1.7{\displaystyle {\begin{cases}N_{opt}(C)=0.6\;C^{0.45}\\D_{opt}(C)=0.3\;C^{0.55}\\L_{opt}(C)=1070\;C^{-0.154}+1.7\end{cases}}}Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on.
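A minimal Python sketch of this allocation rule, using the fitted constants quoted above (α = 0.34, β = 0.28, A = 406.4, B = 410.7, C ≈ 6ND); the printed values are approximate:

```python
def chinchilla_optimal(C):
    """Compute-optimal (N, D) for a training budget C in FLOPs."""
    alpha, beta, A, B = 0.34, 0.28, 406.4, 410.7
    G = (alpha * A / (beta * B)) ** (1 / (alpha + beta))
    a = beta / (alpha + beta)   # exponent for N_opt
    b = alpha / (alpha + beta)  # exponent for D_opt
    return G * (C / 6) ** a, (C / 6) ** b / G

def chinchilla_loss(N, D):
    """Predicted loss L(N, D) from the parametric form above."""
    return 406.4 / N ** 0.34 + 410.7 / D ** 0.28 + 1.69

N, D = chinchilla_optimal(1e23)
print(N, D, chinchilla_loss(N, D))  # ≈ 1.5e10 params, ≈ 1.1e12 tokens, L ≈ 2.0
```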
There are other estimates for "Chinchilla efficient" model size and training dataset size. The above is based on a statistical model ofL=ANα+BDβ+L0{\displaystyle L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}}. One can also directly fit a statistical law forDopt(C),Nopt(C){\displaystyle D_{opt}(C),N_{opt}(C)}without going through the detour, for which one obtains:{Nopt(C)=0.1C0.5Dopt(C)=1.7C0.5{\displaystyle {\begin{cases}N_{opt}(C)=0.1\;C^{0.5}\\D_{opt}(C)=1.7\;C^{0.5}\end{cases}}}.
The Chinchilla scaling law analysis for trainingtransformerlanguage models suggests that for a given training compute budget (C{\displaystyle C}), to achieve the minimal pretraining loss for that budget, the number of model parameters (N{\displaystyle N}) and the number of training tokens (D{\displaystyle D}) should be scaled in equal proportions,Nopt(C)∝C0.5,Dopt(C)∝C0.5{\displaystyle N_{opt}(C)\propto C^{0.5},D_{opt}(C)\propto C^{0.5}}.
This conclusion differs from analysis conducted by Kaplan et al.,[14]which found thatN{\displaystyle N}should be increased more quickly thanD{\displaystyle D},Nopt(C)∝C0.73,Dopt(C)∝C0.27{\displaystyle N_{opt}(C)\propto C^{0.73},D_{opt}(C)\propto C^{0.27}}.
This discrepancy can primarily be attributed to the two studies using different methods for measuring model size: Kaplan et al. counted only non-embedding parameters, which at the smaller model sizes they studied leads to different fitted exponents.[16]
Secondary effects also arise from differences in hyperparameter tuning and learning rate schedules: Kaplan et al. used a fixed learning rate schedule regardless of training duration, rather than matching the schedule to each run's token budget.[17]
As Chinchilla scaling has been the reference point for many large-scale training runs, there has been a concurrent effort to go "beyond Chinchilla scaling", meaning to modify parts of the training pipeline in order to obtain the same loss with less effort, or to deliberately train for longer than is "Chinchilla optimal".
Usually, the goal is to make the scaling law exponent larger, which means the same loss can be trained for much less compute. For instance, filtering data can make the scaling law exponent larger.[18]
Another strand of research studies how to deal with limited data, since according to Chinchilla scaling laws, the training dataset size for the largest language models already approaches what is available on the internet. One study[19]found that augmenting the dataset with a mix of "denoising objectives" constructed from the dataset improves performance. Another[20]studies optimal scaling when all available data is already exhausted (such as in rare languages), so one must train multiple epochs over the same dataset (whereas Chinchilla scaling assumes only one epoch). The Phi series of small language models were trained on textbook-like data generated by large language models, for which data is limited only by the amount of compute available.[21]
Chinchilla optimality was defined as "optimal for training compute", whereas in actual production-quality models, there will be a lot of inference after training is complete. "Overtraining" during training yields better performance during inference for a model of a given size.LLaMAmodels were overtrained for this reason. Subsequent studies discovered scaling laws in the overtraining regime, for dataset sizes up to 32x more than Chinchilla-optimal.[23]
A 2022 analysis[24]found that many scaling behaviors of artificial neural networks follow asmoothly broken power lawfunctional form:
y=a+(bx−c0)∏i=1n(1+(xdi)1/fi)−ci∗fi{\displaystyle y=a+{\bigg (}bx^{-c_{0}}{\bigg )}\prod _{i=1}^{n}\left(1+\left({\frac {x}{d_{i}}}\right)^{1/f_{i}}\right)^{-c_{i}*f_{i}}}
in whichx{\displaystyle x}refers to the quantity being scaled (i.e.C{\displaystyle C},N{\displaystyle N},D{\displaystyle D}, number of training steps, number of inference steps, or model input size) andy{\displaystyle y}refers to thedownstream(or upstream) performance evaluation metric of interest (e.g. prediction error,cross entropy, calibration error,AUROC,BLEU score percentage,F1 score, reward,Elo rating, solve rate, orFIDscore) inzero-shot,prompted, orfine-tunedsettings. The parametersa,b,c0,c1...cn,d1...dn,f1...fn{\displaystyle a,b,c_{0},c_{1}...c_{n},d_{1}...d_{n},f_{1}...f_{n}}are found by statistical fitting.
On alog–log plot, whenfi{\displaystyle f_{i}}is not too large anda{\displaystyle a}is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; then{\displaystyle n}transitions between the segments are called "breaks", hence the namebroken neural scaling laws (BNSL).
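A direct transcription of this functional form (a sketch; the parameter values in the example call are arbitrary illustrations, not fitted values):

```python
import numpy as np

def bnsl(x, a, b, c0, c, d, f):
    """Broken neural scaling law: c, d, f are length-n sequences giving the
    slope change, location, and sharpness of each break."""
    y = b * x ** (-c0)
    for c_i, d_i, f_i in zip(c, d, f):
        y = y * (1 + (x / d_i) ** (1 / f_i)) ** (-c_i * f_i)
    return a + y

# One break at x ≈ 1e4: slope -c0 before it, slope -(c0 + c[0]) after it.
x = np.logspace(0, 8, num=9)
print(bnsl(x, a=0.0, b=1.0, c0=0.2, c=[0.3], d=[1e4], f=[0.5]))
```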
The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scalevision,language, audio, video,diffusion,generative modeling,multimodal learning,contrastive learning,AI alignment, AI capabilities,robotics, out-of-distribution (OOD) generalization, continual learning,transfer learning,uncertainty estimation/calibration,out-of-distribution detection,adversarial robustness,distillation, sparsity, retrieval, quantization,pruning,fairness, molecules, computer programming/coding, math word problems, arithmetic,emergent abilities,double descent,supervised learning,unsupervised/self-supervisedlearning, andreinforcement learning(single agent andmulti-agent).
The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form includeresidual neural networks,transformers,MLPs,MLP-mixers,recurrent neural networks,convolutional neural networks,graph neural networks,U-nets,encoder-decoder(andencoder-only) (and decoder-only) models,ensembles(and non-ensembles),MoE(mixture of experts) (and non-MoE) models, andsparse pruned(and non-sparse unpruned) models.
Other than scaling up training compute, one can also scale up inference compute (or "test-time compute"[25]). As an example, theElo ratingofAlphaGoimproves steadily as it is allowed to spend more time on itsMonte Carlo Tree Searchper play.[26]: Fig 4. ForAlphaGo Zero, increasing Elo by 120 requires either 2x model size and training, or 2x test-time search.[27]Similarly, a language model for solving competition-level coding challenges, AlphaCode, consistently improved (log-linearly) in performance with more search time.[28]
ForHex, 10x training-time compute trades for 15x test-time compute.[8]ForLibratus(heads-upno-limitTexas hold 'em) andCicero(Diplomacy), and many other abstract games of partial information, inference-time searching improves performance at a similar tradeoff ratio, for up to a 100,000x effective increase in training-time compute.[27]
In 2024, theOpenAI o1report documented that o1's performance consistently improved with both increased train-time compute and test-time compute, and gave numerous examples of test-time compute scaling in mathematics, scientific reasoning, and coding tasks.[29][30]
One method for scaling up test-time compute isprocess-based supervision, where a model generates a step-by-step reasoning chain to answer a question, and another model (either human or AI) provides a reward score on some of the intermediate steps, not just the final answer. Process-based supervision can be scaled arbitrarily by using synthetic reward score without another model, for example, by running Monte Carlo rollouts and scoring each step in the reasoning according to how likely it leads to the right answer. Another method is byrevision models, which are models trained to solve a problem multiple times, each time revising the previous attempt.[31]
Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers, with parameter countsN∈[5×106,2×109]{\displaystyle N\in [5\times 10^{6},2\times 10^{9}]}, on image sets of sizesD∈[3×107,3×109]{\displaystyle D\in [3\times 10^{7},3\times 10^{9}]}, for compute budgetsC∈[0.2,104]{\displaystyle C\in [0.2,10^{4}]}(in units of TPUv3-core-days).[32]
After training, the model is finetuned on theImageNettraining set. LetL{\displaystyle L}be the error probability of the finetuned model on the ImageNet test set. They foundminN,DL=0.09+0.26(C+0.01)0.35{\displaystyle \min _{N,D}L=0.09+{\frac {0.26}{(C+0.01)^{0.35}}}}.
Ghorbani, Behrooz et al.[33]studied scaling laws forneural machine translation(specifically, English as source, and German as target) in encoder-decoderTransformermodels, trained until convergence on the same datasets (thus they did not fit scaling laws for computing costC{\displaystyle C}or dataset sizeD{\displaystyle D}). They variedN∈[108,3.5×109]{\displaystyle N\in [10^{8},3.5\times 10^{9}]}and found three results:
The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit.
A related study[35]trained Transformers for machine translation with sizesN∈[4×105,5.6×107]{\displaystyle N\in [4\times 10^{5},5.6\times 10^{7}]}on dataset sizesD∈[6×105,6×109]{\displaystyle D\in [6\times 10^{5},6\times 10^{9}]}. They found that the Kaplan et al. (2020)[14]scaling law applied to machine translation:L(N,D)=[(NCN)αNαD+DCD]αD{\displaystyle L(N,D)=\left[\left({\frac {N_{C}}{N}}\right)^{\frac {\alpha _{N}}{\alpha _{D}}}+{\frac {D_{C}}{D}}\right]^{\alpha _{D}}}. They also found the BLEU score scaling asBLEU≈Ce−kL{\displaystyle BLEU\approx Ce^{-kL}}.
Hernandez, Danny et al.[36]studied scaling laws fortransfer learningin language models. They trained a family of Transformers in three ways: on Python code from scratch; pretrained on English text and then finetuned on Python code; and pretrained on a mix of English text and non-Python code and then finetuned on Python code.
The idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter countN{\displaystyle N}, and after being finetuned onDF{\displaystyle D_{F}}Python tokens, it achieves some lossL{\displaystyle L}. We say that its "transferred token count" isDT{\displaystyle D_{T}}, if another model with the sameN{\displaystyle N}achieves the sameL{\displaystyle L}after training onDF+DT{\displaystyle D_{F}+D_{T}}Python tokens.
They foundDT=1.9e4(DF).18(N).38{\displaystyle D_{T}=1.9e4\left(D_{F}\right)^{.18}(N)^{.38}}for pretraining on English text, andDT=2.1e5(DF).096(N).38{\displaystyle D_{T}=2.1e5\left(D_{F}\right)^{.096}(N)^{.38}}for pretraining on English and non-Python code.
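Plugging illustrative numbers into the first fitted law gives a sense of scale (the inputs below are made-up examples, not values from the study):

```python
def transferred_tokens(D_F, N):
    """Effective Python tokens contributed by English pretraining, per the
    fitted law above (D_F = finetuning tokens, N = parameter count)."""
    return 1.9e4 * D_F ** 0.18 * N ** 0.38

# A 1e9-parameter model finetuned on 1e8 Python tokens: pretraining is
# "worth" roughly 1.4e9 additional Python tokens.
print(transferred_tokens(D_F=1e8, N=1e9))
```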
Kumar et al.[37]study scaling laws for numerical precision in the training of language models. They train a family of language models with weights, activations, and KV cache in varying numerical precision in both integer and floating-point type to measure the effects on loss as a function of precision. For training, their scaling law accounts for lower precision by wrapping the effects of precision into an overall "effective parameter count" that governs loss scaling, using the parameterizationN↦Neff(P)=N(1−e−P/γ){\displaystyle N\mapsto N_{\text{eff}}(P)=N(1-e^{-P/\gamma })}. This illustrates how training in lower precision degrades performance by reducing the true capacity of the model in a manner that varies exponentially with bits.
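A sketch of the effective-parameter-count parameterization (the value of γ below is assumed purely for illustration; in the paper it is fitted from data):

```python
import math

def n_eff(N, P, gamma=2.0):
    """Effective parameter count for training at P bits of precision;
    gamma here is an assumed, illustrative constant."""
    return N * (1 - math.exp(-P / gamma))

for bits in (4, 8, 16):
    print(bits, n_eff(1e9, bits))  # saturates toward N as precision grows
```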
For inference, they find that extreme overtraining of language models past Chinchilla-optimality can lead to models being more sensitive to quantization, a standard technique for efficient deep learning. This is demonstrated by observing that the degradation in loss due to weight quantization increases as an approximate power law in the token/parameter ratioD/N{\displaystyle D/N}seen during pretraining, so that models pretrained on extreme token budgets can perform worse in terms of validation loss than those trained on more modest token budgets if post-training quantization is applied. Other work examining the effects of overtraining include Sardana et al.[38]and Gadre et al.[39]
Xiao et al.[7]considered the parameter efficiency ("density") of models over time. The idea is that over time, researchers would discover models that use their parameters more efficiently, in that models with the same performance can have fewer parameters.
A model can have an actual parameter countN{\displaystyle N}, defined as the actual number of parameters in the model, and an "effective" parameter countN^{\displaystyle {\hat {N}}}, defined as the number of parameters a previous well-known model would have needed to reach the same performance on some benchmark, such asMMLU.N^{\displaystyle {\hat {N}}}is not measured directly, but by measuring the actual model performanceS{\displaystyle S}and plugging it back into a previously fitted scaling law, such as the Chinchilla scaling law, to obtain theN^{\displaystyle {\hat {N}}}that law says would be required to reach performanceS{\displaystyle S}.
A densing law states thatln(N^N)max=At+B{\displaystyle \ln \left({\frac {\hat {N}}{N}}\right)_{max}=At+B}, wheret{\displaystyle t}is real-world time, measured in days. Equivalently, the maximal achievable density doubles everyln⁡2/A{\displaystyle \ln 2/A}days.
|
https://en.wikipedia.org/wiki/Neural_scaling_law
|
Wirth's lawis anadageoncomputer performancewhich states thatsoftwareis getting slower more rapidly thanhardwareis becoming faster.
The adage is named afterNiklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software".[1][2]
Wirth attributed the saying toMartin Reiser, who in the preface to his book on theOberon Systemwrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness."[3]Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987.[4]
He states two contributing factors to the acceptance of ever-growing software as: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have".[1]Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people are increasingly misinterpreting complexity as sophistication, that "these details are cute but not essential, and they have a hidden cost".[1]As a result, he calls for the creation of "leaner" software and pioneered the development ofOberon, a software system developed between 1986 and 1989 based on nothing but hardware. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience.[1]
The law was restated in 2009 and attributed toGoogleco-founderLarry Page. It has been referred to asPage's law.[5]The first use of that name is attributed to fellow Google co-founderSergey Brinat the 2009Google I/OConference.[6]
Other common forms use the names of the leadinghardwareand software companies of the 1990s,IntelandMicrosoft, or their CEOs,Andy GroveandBill Gates, for example "What Intel giveth, Microsoft taketh away"[7]andAndy and Bill's law: "What Andy giveth, Bill taketh away".[8]
Gates's law("The speed of software halves every 18 months"[9]) is an anonymously coined variant on Wirth's law, its name referencing Bill Gates,[9]co-founder of Microsoft. It is an observation that the speed of commercial software generally slows by 50% every 18 months, thereby negating all the benefits ofMoore's law. This could occur for a variety of reasons:feature creep,code cruft, developer laziness, lack of funding, forced updates, forced porting (to a newerOSor to support a new technology), or a management turnover whose design philosophy does not coincide with that of the previous manager.[10]
May's law, named afterDavid May, is a variant stating: "Software efficiency halves every 18 months, compensating Moore's law".[11]
|
https://en.wikipedia.org/wiki/Wirth%27s_law
|
Rent's rulepertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of "pins") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates).
In the 1960s, E. F. Rent, anIBMemployee, found a remarkable trend between the number of pins (terminals,T) at the boundaries ofintegrated circuitdesigns atIBMand the number of internal components (g), such as logic gates or standard cells. On alog–log plot, these datapoints were on a straight line, implying a power-law relationT=tgp{\displaystyle T=tg^{p}}, wheretandpare constants (p< 1.0, and generally 0.5 <p< 0.8).
Rent's findings inIBM-internal memoranda were published in the IBM Journal of Research and Development in 2005,[1]but the relation was described in 1971 by Landman and Russo.[2]They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. They found the power-law rule applied to the resultingTversusgplot and named it "Rent's rule".
Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures.
Christie and Stroobandt[3]later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved inplacementis reflected by the parameterp{\displaystyle p}, the "Rent exponent", which also depends on the circuit topology. In particular, valuesp<1{\displaystyle p<1}correspond to a greater fraction of short interconnects. The constantt{\displaystyle t}in Rent's rule can be viewed as the average number of terminals required by a single logic block, sinceT=t{\displaystyle T=t}wheng=1{\displaystyle g=1}.
Random arrangements of logic blocks typically havep=1{\displaystyle p=1}. Larger values are impossible, since the maximal number of terminals for any region containingglogic components in a homogeneous system is given byT=tg{\displaystyle T=tg}. Lower bounds onpdepend on the interconnection topology, since it is generally impossible to make all wires short. This lower boundp∗{\displaystyle p*}is often called the "intrinsic Rent exponent", a notion first introduced by Hagen et al.[4]It can be used to characterize optimal placements and also to measure the interconnection complexity of a circuit. Higher (intrinsic) Rent exponent values correspond to a higher topological complexity. One extreme example (p=0{\displaystyle p=0}) is a long chain of logic blocks, while acliquehasp=1{\displaystyle p=1}. In realistic 2D circuits,p∗{\displaystyle p*}ranges from 0.5 for highly-regular circuits (such asSRAM) to 0.75 for random logic.[5]
System performance analysis tools such asBACPACtypically use Rent's rule to calculate expected wiring lengths and wiring demands.
Rent's rule has been shown to apply among the regions of the brain ofDrosophilafruit fly, using synapses instead of gates, and neurons which extend both inside and outside the region as pins.[6]
To estimate Rent's exponent, one can use top-down partitioning, as used in min-cut placement. For every partition, count the number of terminals connected to the partition and compare it to the number of logic blocks in the partition. Rent's exponent can then be found by fitting these datapoints on a log–log plot, resulting in an exponentp'. For optimally partitioned circuits,p′=p∗{\displaystyle p'=p*}but this is no longer the case for practical (heuristic) partitioning approaches. For partitioning-based placement algorithmsp∗≤p′≤p{\displaystyle p^{*}\leq p'\leq p}.[7]
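A sketch of the fit itself: given (blocks, terminals) pairs from recursive partitioning, the Rent exponent is the slope on a log–log plot. The data below are synthetic, chosen only to illustrate the procedure:

```python
import numpy as np

# (g, T) pairs measured at successive partitioning levels: block count
# versus terminal count. Synthetic data roughly following T = t * g^p.
g = np.array([1, 4, 16, 64, 256, 1024])
T = np.array([4, 10, 25, 60, 150, 370])

# Fit log T = log t + p * log g by least squares on the log-log plot.
p, log_t = np.polyfit(np.log(g), np.log(T), 1)
print(f"Rent exponent p' ~ {p:.2f}, t ~ {np.exp(log_t):.2f}")
```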
Landman and Russo found a deviation of Rent's rule near the "far end", i.e., for partitions with a large number of blocks, which is known as "Region II" of Rent's Rule.[2]A similar deviation also exists for small partitions and has been found by Stroobandt,[8]who called it "Region III".
AnotherIBMemployee, Donath, discovered that Rent's rule can be used to estimate the average wirelength and the wirelength distribution inVLSIchips.[9][10]This motivated the System Level Interconnect Prediction workshop, founded in 1999, and an entire community working on wirelength prediction (see a survey by Stroobandt[11]). The resulting wirelength estimates have been improved significantly since then and are now used for "technology exploration".[12]The use of Rent's rule makes it possible to perform such estimatesa priori(i.e., before actual placement) and thus predict the properties of future technologies (clock frequencies, number of routing layers needed, area, power) based on limited information about future circuits and technologies.
A comprehensive overview of work based on Rent's rule has been published by Stroobandt.[11][13]
|
https://en.wikipedia.org/wiki/Rent%27s_rule
|
TheLenstra elliptic-curve factorizationor theelliptic-curve factorization method(ECM) is a fast, sub-exponential running time, algorithm forinteger factorization, which employselliptic curves. Forgeneral-purposefactoring, ECM is the third-fastest known factoring method. The second-fastest is themultiple polynomial quadratic sieve, and the fastest is thegeneral number field sieve. The Lenstra elliptic-curve factorization is named afterHendrik Lenstra.
Practically speaking, ECM is considered a special-purpose factoring algorithm, as it is most suitable for finding small factors. Currently, it is still the best algorithm fordivisorsnot exceeding 50 to 60digits, as its running time is dominated by the size of the smallest factorprather than by the size of the numbernto be factored. Frequently, ECM is used to remove small factors from a very large integer with many factors; if the remaining integer is still composite, then it has only large factors and is factored using general-purpose techniques. The largest factor found using ECM so far has 83 decimal digits and was discovered on 7 September 2013 by R. Propper.[1]Increasing the number of curves tested improves the chances of finding a factor, but the chances do not grow linearly with the number of digits of the factor.
The Lenstra elliptic-curve factorization method to find a factor of a given natural numbern{\displaystyle n}works, in outline, as follows: pick a random elliptic curve overZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }together with a random pointP{\displaystyle P}on it; compute the multiplekP{\displaystyle kP}for a highly compositek{\displaystyle k}(for example a factorialB!{\displaystyle B!}, or the least common multiple of the integers up to a boundB{\displaystyle B}); if every modular inversion required by the addition formulas succeeds, retry with another curve; if instead an inversion of somev{\displaystyle v}fails, computegcd(v,n){\displaystyle \gcd(v,n)}, which is a non-trivial factor ofn{\displaystyle n}unless it equalsn{\displaystyle n}.
The time complexity depends on the size of the number's smallest prime factor and can be represented byexp⁡[(2+o(1))ln⁡pln⁡ln⁡p]{\displaystyle \exp \left[({\sqrt {2}}+o(1)){\sqrt {\ln p\,\ln \ln p}}\right]}, wherepis the smallest factor ofn, orLp[12,2]{\displaystyle L_{p}\left[{\frac {1}{2}},{\sqrt {2}}\right]}, inL-notation.
Ifpandqare two prime divisors ofn, theny2=x3+ax+b(modn)implies the same equation alsomodulopandmoduloq.These two smaller elliptic curves with the⊞{\displaystyle \boxplus }-addition are now genuinegroups. If these groups haveNpandNqelements, respectively, then for any pointPon the original curve, byLagrange's theorem,k> 0is minimal such thatkP=∞{\displaystyle kP=\infty }on the curve modulopimplies thatkdividesNp; moreover,NpP=∞{\displaystyle N_{p}P=\infty }. The analogous statement holds for the curve moduloq. When the elliptic curve is chosen randomly, thenNpandNqare random numbers close top+ 1andq+ 1,respectively (see below). Hence it is unlikely that most of the prime factors ofNpandNqare the same, and it is quite likely that while computingeP, we will encounter somekPthat is ∞modulopbut notmoduloq,or vice versa. When this is the case,kPdoes not exist on the original curve, and in the computations we found somevwith eithergcd(v,p) =porgcd(v,q) =q,but not both. That is,gcd(v,n)gave a non-trivial factorofn.
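A minimal stage-1 sketch of this idea in Python, using the B! formulation that the worked example below also uses; the bounds and retry counts are illustrative, and a production implementation would use projective coordinates and a second stage:

```python
import math
import random

class FactorFound(Exception):
    def __init__(self, d):
        self.d = d

def inv_mod(v, n):
    """Modular inverse of v mod n; a non-trivial gcd aborts with the factor."""
    g = math.gcd(v % n, n)
    if g != 1:
        raise FactorFound(g)
    return pow(v, -1, n)

def ec_add(P, Q, a, n):
    """Chord-and-tangent addition on y^2 = x^3 + ax + b over Z/nZ.
    None stands for the point at infinity; b never appears in the formulas."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n   # tangent slope
    else:
        s = (y2 - y1) * inv_mod(x2 - x1, n) % n          # chord slope
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm(n, B=1000, tries=200):
    """Stage 1: compute B! * P on random curves; a failed inversion
    during the additions exposes gcd(v, n), hopefully a proper factor."""
    for _ in range(tries):
        x0, y0, a = (random.randrange(n) for _ in range(3))
        P = (x0, y0)    # b = y0^2 - x0^3 - a*x0 (mod n), implicitly
        try:
            for k in range(2, B + 1):
                P = ec_mul(k, P, a, n)
        except FactorFound as f:
            if 1 < f.d < n:
                return f.d
    return None

print(ecm(455839))      # typically prints 599 or 761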
ECM is at its core an improvement of the olderp− 1algorithm. Thep− 1algorithm finds prime factorspsuch thatp− 1isb-powersmoothfor small values ofb. For anye, a multiple ofp− 1,and anyarelatively primetop, byFermat's little theoremwe haveae≡ 1 (modp). Thengcd(ae− 1,n)is likely to produce a factor ofn. However, the algorithm fails whenp− 1has large prime factors, as is the case for numbers containingstrong primes, for example.
ECM gets around this obstacle by considering thegroupof a randomelliptic curveover thefinite fieldZp, rather than considering themultiplicative groupofZpwhich always has orderp− 1.
The order of the group of an elliptic curve overZpvaries (quite randomly) betweenp+ 1 − 2√pandp+ 1 + 2√pbyHasse's theorem, and is likely to be smooth for some elliptic curves. Although there is no proof that a smooth group order will be found in the Hasse-interval, by usingheuristicprobabilistic methods, theCanfield–Erdős–Pomerance theoremwith suitably optimized parameter choices, and theL-notation, we can expect to tryL[√2/2,√2]curves before getting a smooth group order. This heuristic estimate is very reliable in practice.
The following example is fromTrappe & Washington (2006), with some details added.
We want to factorn= 455839.Let's choose the elliptic curvey2=x3+ 5x− 5,with the pointP= (1, 1)on it, and let's try to compute(10!)P.
The slope of the tangent line at some pointA=(x,y) iss= (3x² + 5)/(2y) (mod n). Usingswe can compute 2A. If the value ofsis of the forma/bwhereb> 1 and gcd(a,b) = 1, we have to find themodular inverseofb. If it does not exist, gcd(n,b) is a non-trivial factor ofn.
First we compute 2P. We haves(P) =s(1,1) = 4,so the coordinates of2P= (x′,y′)arex′=s² − 2x= 14andy′=s(x−x′) −y= 4(1 − 14) − 1 = −53,all numbers understood(modn).Just to check that this 2Pis indeed on the curve:(−53)² = 2809 = 14³ + 5·14 − 5.
Then we compute 3(2P). We haves(2P) =s(14, −53) = −593/106 (modn).Using theEuclidean algorithm:455839 = 4300·106 + 39,then106 = 2·39 + 28,then39 = 28 + 11,then28 = 2·11 + 6,then11 = 6 + 5,then6 = 5 + 1.Hencegcd(455839, 106) = 1,and working backwards (a version of theextended Euclidean algorithm):1 = 6 − 5 = 2·6 − 11 = 2·28 − 5·11 = 7·28 − 5·39 = 7·106 − 19·39 = 81707·106 − 19·455839.Hence106⁻¹ = 81707 (mod 455839),and−593/106 = −133317 (mod 455839).Given thiss, we can compute the coordinates of 2(2P), just as we did above:4P= (259851, 116255).Just to check that this is indeed a point on the curve:y² = 54514 =x³ + 5x− 5 (mod 455839).After this, we can compute3(2P)=4P⊞2P{\displaystyle 3(2P)=4P\boxplus 2P}.
We can similarly compute 4!P, and so on, but 8!Prequires inverting599 (mod 455839).The Euclidean algorithm gives that 455839 is divisible by 599, and we have found afactorization 455839 = 599·761.
The reason that this worked is that the curve(mod 599)has640 = 2⁷·5points, while(mod 761)it has777 = 3·7·37points. Moreover, 640 and 777 are the smallest positive integersksuch thatkP= ∞on the curve(mod 599)and(mod 761),respectively. Since8!is a multiple of 640 but not a multiple of 777, we have8!P= ∞on the curve(mod 599),but not on the curve(mod 761),hence the repeated addition broke down here, yielding the factorization.
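These group orders can be checked directly with a naive point count over each prime field; this is a sketch that is fine for primes this small, whereas real implementations use Schoof-type algorithms:

```python
import math

def curve_order(p, a, b):
    """#E(F_p) for y^2 = x^3 + ax + b: the point at infinity plus, for
    each x, the number of y with y^2 = x^3 + ax + b (mod p)."""
    roots = {}
    for y in range(p):
        q = y * y % p
        roots[q] = roots.get(q, 0) + 1
    return 1 + sum(roots.get((x**3 + a * x + b) % p, 0) for x in range(p))

print(curve_order(599, 5, -5))                            # 640 = 2^7 * 5
print(curve_order(761, 5, -5))                            # 777 = 3 * 7 * 37
print(math.factorial(8) % 640, math.factorial(8) % 777)   # 0 and nonzero
```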
Before considering the projective plane over(Z/nZ)/∼,{\displaystyle (\mathbb {Z} /n\mathbb {Z} )/\sim ,}first consider a 'normal'projective spaceoverR{\displaystyle \mathbb {R} }: instead of points, lines through the origin are studied. A line may be represented as a non-zero point(x,y,z){\displaystyle (x,y,z)}, under an equivalence relation ~ given by:(x,y,z)∼(x′,y′,z′){\displaystyle (x,y,z)\sim (x',y',z')}⇔ ∃c≠ 0 such thatx' =cx,y' =cyandz' =cz. Under this equivalence relation, the space is calledthe projective planeP2{\displaystyle \mathbb {P} ^{2}}; points, denoted by(x:y:z){\displaystyle (x:y:z)}, correspond to lines in a three-dimensional space that pass through the origin. Note that the point(0:0:0){\displaystyle (0:0:0)}does not exist in this space, since a line requires at least one ofx,yorzto be non-zero. Now observe that almost all lines pass through a given reference plane, such as the (X,Y,1)-plane, whilst the lines precisely parallel to this plane, with coordinates (X,Y,0), uniquely specify directions and serve as the 'points at infinity' of the affine (X,Y)-plane lying above.
In the algorithm, only the group structure of an elliptic curve over the fieldR{\displaystyle \mathbb {R} }is used. Since we do not necessarily need the fieldR{\displaystyle \mathbb {R} }, a finite field will also provide a group structure on an elliptic curve. However, considering the same curve and operation over(Z/nZ)/∼{\displaystyle (\mathbb {Z} /n\mathbb {Z} )/\sim }withnnot a prime does not give a group. The Elliptic Curve Method makes use of the failure cases of the addition law.
We now state the algorithm in projective coordinates. The neutral element is then given by the point at infinity(0:1:0){\displaystyle (0:1:0)}. Letnbe a (positive) integer and consider the elliptic curve (a set of points with some structure on it)E(Z/nZ)={(x:y:z)∈P2|y2z=x3+axz2+bz3}{\displaystyle E(\mathbb {Z} /n\mathbb {Z} )=\{(x:y:z)\in \mathbb {P} ^{2}\ |\ y^{2}z=x^{3}+axz^{2}+bz^{3}\}}.
In point 5 it is said that under the right circumstances a non-trivial divisor can be found. As pointed out in Lenstra's article (Factoring Integers with Elliptic Curves) the addition needs the assumptiongcd(x1−x2,n)=1{\displaystyle \gcd(x_{1}-x_{2},n)=1}. IfP,Q{\displaystyle P,Q}are not(0:1:0){\displaystyle (0:1:0)}and distinct (otherwise addition works similarly, but is a little different), then addition works as follows:
If addition fails, this will be due to a failure calculatingλ.{\displaystyle \lambda .}In particular, because(x1−x2)−1{\displaystyle (x_{1}-x_{2})^{-1}}can not always be calculated ifnis not prime (and thereforeZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }is not a field). Without making use ofZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }being a field, one could calculate:
This calculation is always legal, and if the gcd of theZ-coordinate withnis neither 1 norn, then the failure to simplify yields a non-trivial divisor ofn.
The use ofEdwards curvesneeds fewer modular multiplications and less time than the use ofMontgomery curvesorWeierstrass curves(other used methods). Edwards curves also make it possible to find more primes.
Definition.Letk{\displaystyle k}be a field in which2≠0{\displaystyle 2\neq 0}, and leta,d∈k∖{0}{\displaystyle a,d\in k\setminus \{0\}}witha≠d{\displaystyle a\neq d}. Then the twisted Edwards curveEE,a,d{\displaystyle E_{E,a,d}}is given byax2+y2=1+dx2y2.{\displaystyle ax^{2}+y^{2}=1+dx^{2}y^{2}.}An Edwards curve is a twisted Edwards curve in whicha=1{\displaystyle a=1}.
There are five known ways to build a set of points on an Edwards curve: the set of affine points, the set of projective points, the set of inverted points, the set of extended points and the set of completed points.
The set of affine points is given by:{(x,y)∈A2:ax2+y2=1+dx2y2}{\displaystyle \{(x,y)\in \mathbb {A} ^{2}:ax^{2}+y^{2}=1+dx^{2}y^{2}\}}.
The addition law is given by(e,f)+(g,h)=(eh+fg1+degfh,fh−aeg1−degfh){\displaystyle (e,f)+(g,h)=\left({\frac {eh+fg}{1+degfh}},{\frac {fh-aeg}{1-degfh}}\right)}.
The point (0,1) is its neutral element and the inverse of(e,f){\displaystyle (e,f)}is(−e,f){\displaystyle (-e,f)}.
The other representations are defined similarly to how the projective Weierstrass curve follows from the affine one.
Anyelliptic curvein Edwards form has a point of order 4. So thetorsion groupof an Edwards curve overQ{\displaystyle \mathbb {Q} }is isomorphic to eitherZ/4Z,Z/8Z,Z/12Z,Z/2Z×Z/4Z{\displaystyle \mathbb {Z} /4\mathbb {Z} ,\mathbb {Z} /8\mathbb {Z} ,\mathbb {Z} /12\mathbb {Z} ,\mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /4\mathbb {Z} }orZ/2Z×Z/8Z{\displaystyle \mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /8\mathbb {Z} }.
The most interesting cases for ECM areZ/12Z{\displaystyle \mathbb {Z} /12\mathbb {Z} }andZ/2Z×Z/8Z{\displaystyle \mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /8\mathbb {Z} }, since they force the group orders of the curve modulo primes to be divisible by 12 and 16 respectively. The following curves have a torsion group isomorphic toZ/12Z{\displaystyle \mathbb {Z} /12\mathbb {Z} }:
Every Edwards curve with a point of order 3 can be written in the ways shown above. Curves with torsion group isomorphic toZ/2Z×Z/8Z{\displaystyle \mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /8\mathbb {Z} }andZ/2Z×Z/4Z{\displaystyle \mathbb {Z} /2\mathbb {Z} \times \mathbb {Z} /4\mathbb {Z} }may be more efficient at finding primes.[2]
The above text is about the first stage of elliptic curve factorisation. There one hopes to find a prime divisorpsuch thatsP{\displaystyle sP}is the neutral element ofE(Z/pZ){\displaystyle E(\mathbb {Z} /p\mathbb {Z} )}.
In the second stage one hopes to have found a prime divisorqsuch thatsP{\displaystyle sP}has small prime order inE(Z/qZ){\displaystyle E(\mathbb {Z} /q\mathbb {Z} )}.
We hope the order lies betweenB1{\displaystyle B_{1}}andB2{\displaystyle B_{2}}, whereB1{\displaystyle B_{1}}is determined in stage 1 andB2{\displaystyle B_{2}}is a new stage 2 parameter.
Checking for a small order ofsP{\displaystyle sP}can be done by computing(ls)P{\displaystyle (ls)P}modulonfor each primel.
The use of twisted Edwards elliptic curves, as well as other techniques, was used by Bernstein et al.[2]to provide an optimized implementation of ECM. Its only drawback is that it works on smaller composite numbers than the more general-purpose implementation, GMP-ECM of Zimmermann.
There are recent developments in usinghyperelliptic curvesto factor integers. Cosset shows in his article (of 2010) that one can build a hyperelliptic curve with genus two (so a curvey2=f(x){\displaystyle y^{2}=f(x)}withfof degree 5), which gives the same result as using two "normal" elliptic curves at the same time. By making use of the Kummer surface, calculation is more efficient. The disadvantages of the hyperelliptic curve (versus an elliptic curve) are compensated by this alternative way of calculating. Therefore, Cosset roughly claims that using hyperelliptic curves for factorization is no worse than using elliptic curves.
Bernstein,Heninger, Lou, and Valenta suggest GEECM, a quantum version of ECM with Edwards curves.[3]It usesGrover's algorithmto roughly double the length of the primes found compared to standard EECM, assuming a quantum computer with sufficiently many qubits and of comparable speed to the classical computer running EECM.
|
https://en.wikipedia.org/wiki/Lenstra_elliptic_curve_factorization
|
Inmathematics, acovering system(also called acomplete residue system) is a collection{a1(modn1),…,ak(modnk)}{\displaystyle \{a_{1}{\pmod {n_{1}}},\ldots ,a_{k}{\pmod {n_{k}}}\}}of finitely manyresidue classeswhose union contains every integer.
The notion of covering system was introduced byPaul Erdősin the early 1930s.
The following are examples of covering systems: {0 (mod 3), 1 (mod 3), 2 (mod 3)}; {1 (mod 2), 2 (mod 4), 4 (mod 8), 0 (mod 8)}; and {0 (mod 2), 0 (mod 3), 1 (mod 4), 5 (mod 6), 7 (mod 12)}.
A covering system is calleddisjoint(orexact) if no two members overlap.
A covering system is calleddistinct(orincongruent) if all the modulini{\displaystyle n_{i}}are different (and bigger than 1). Hough and Nielsen (2019)[1]proved that any distinct covering system has a modulus that is divisible by either 2 or 3.
A covering system is calledirredundant(orminimal) if all the residue classes are required to cover the integers.
The first two examples are disjoint.
The third example is distinct.
A system (i.e., an unordered multi-set){a1(modn1),…,ak(modnk)}{\displaystyle \{a_{1}{\pmod {n_{1}}},\ldots ,a_{k}{\pmod {n_{k}}}\}}of finitely many residue classes is called anm{\displaystyle m}-cover if it covers every integer at leastm{\displaystyle m}times, and anexactm{\displaystyle m}-cover if it covers each integer exactlym{\displaystyle m}times. It is known that for eachm=2,3,…{\displaystyle m=2,3,\ldots }there are exactm{\displaystyle m}-covers which cannot be written as a union of two covers; explicit examples of such exact 2-covers are known.
The first example above is an exact 1-cover (also called anexact cover). Another exact cover in common use is that of odd andeven numbers, {0 (mod 2), 1 (mod 2)}. This is just one case of the following fact: for every positive integer modulusm{\displaystyle m}, there is the exact cover {0 (modm), 1 (modm), …,m− 1 (modm)}.
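Whether a finite system of residue classes covers the integers can be decided by checking a single period of length lcm of the moduli; a small sketch, with the examples above as test cases:

```python
from math import lcm

def is_covering(classes):
    """classes: iterable of (a, n) pairs for the residue classes a (mod n).
    The system covers Z iff it covers 0 .. lcm(n_i) - 1."""
    L = lcm(*(n for _, n in classes))
    return all(any((x - a) % n == 0 for a, n in classes) for x in range(L))

def is_exact(classes):
    """Exact (disjoint) cover: every integer is hit exactly once."""
    L = lcm(*(n for _, n in classes))
    return all(sum((x - a) % n == 0 for a, n in classes) == 1 for x in range(L))

print(is_covering([(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]))  # True (distinct cover)
print(is_exact([(1, 2), (2, 4), (4, 8), (0, 8)]))              # True (disjoint cover)
print(is_covering([(0, 2), (1, 4)]))                           # False
```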
The Mirsky–Newman theorem, a special case of theHerzog–Schönheim conjecture, states that there is no disjoint distinct covering system. This result was conjectured in 1950 byPaul Erdősand proved soon thereafter byLeon MirskyandDonald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently byHarold DavenportandRichard Rado.[2]
Covering systems can be used to findprimefree sequences, sequences of integers satisfying the samerecurrence relationas theFibonacci numbers, such that consecutive numbers in the sequence arerelatively primebut all numbers in the sequence arecomposite numbers. For instance, a sequence of this type found byHerbert Wilfhas initial terms
In this sequence, the positions at which the numbers in the sequence are divisible by a primepform an arithmetic progression; for instance, the even numbers in the sequence are the numbersaiwhereiis congruent to 1 mod 3. The progressions divisible by different primes form a covering system, showing that every number in the sequence is divisible by at least one prime.
Paul Erdősasked whether for any arbitrarily largeNthere exists an incongruent covering system the minimum of whose moduli is at leastN. It is easy to construct examples where the minimum of the moduli is 2 or 3; Erdős gave an example with minimum modulus 3, where the moduli lie in the set of the divisors of 120 (a suitable cover is 0(3), 0(4), 0(5), 1(6), 1(8), 2(10), 11(12), 1(15), 14(20), 5(24), 8(30), 6(40), 58(60), 26(120)). D. Swift gave an example where the minimum of the moduli is 4 (with moduli in the set of the divisors of 2880). S. L. G. Choi proved[3]that it is possible to give an example forN= 20, and Pace P. Nielsen demonstrated[4]the existence of an example withN= 40, consisting of more than1050{\displaystyle 10^{50}}congruences. Tyler Owens[5]demonstrated the existence of an example withN= 42.
Erdős's question was resolved in the negative by Bob Hough.[6]Hough used theLovász local lemmato show that there is some maximumN<1016which can be the minimum modulus on a covering system.
There is a famous unsolved conjecture from Erdős andSelfridge: an incongruent covering system (with the minimum modulus greater than 1) whose moduli are odd does not exist. It is known that if such a system exists with square-free moduli, the overall modulus must have at least 22 prime factors.[7]
|
https://en.wikipedia.org/wiki/Covering_system
|
Inmathematics,Helmut Hasse'slocal–global principle, also known as theHasse principle, is the idea that one can find aninteger solution to an equationby using theChinese remainder theoremto piece together solutionsmodulopowers of each differentprime number. This is handled by examining the equation in thecompletionsof therational numbers: thereal numbersand thep-adic numbers. A more formal version of the Hasse principle states that certain types of equations have a rational solutionif and only ifthey have a solution in thereal numbersandin thep-adic numbers for each primep.
Given a polynomial equation with rational coefficients, if it has a rational solution, then this also yields a real solution and ap-adic solution, as the rationals embed in the reals andp-adics: a global solution yields local solutions at each prime. The Hasse principle asks when the reverse can be done, or rather, asks what the obstruction is: when can you patch together solutions over the reals andp-adics to yield a solution over the rationals: when can local solutions be joined to form a global solution?
One can ask this for otherringsorfields: integers, for instance, ornumber fields. For number fields, rather than reals andp-adics, one uses complex embeddings andp{\displaystyle {\mathfrak {p}}}-adics, forprime idealsp{\displaystyle {\mathfrak {p}}}.
TheHasse–Minkowski theoremstates that the local–global principle holds for the problem ofrepresenting 0byquadratic formsover therational numbers(which isMinkowski's result); and more generally over anynumber field(as proved by Hasse), when one uses all the appropriatelocal fieldnecessary conditions.Hasse's theorem on cyclic extensionsstates that the local–global principle applies to the condition of being a relative norm for acyclic extensionof number fields.
A counterexample byErnst S. Selmershows that the Hasse–Minkowski theorem cannot be extended to forms of degree 3: The cubic equation 3x3+ 4y3+ 5z3= 0 has a solution in real numbers, and in all p-adic fields, but it has no nontrivial solution in whichx,y, andzare all rational numbers.[1]
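A finite search cannot prove Selmer's result (that needs a descent argument), but a quick check illustrates the asymmetry between local and global solvability; the search bound below is arbitrary:

```python
# No nontrivial integer solution of 3x^3 + 4y^3 + 5z^3 = 0 in a small box
# (a rational solution could be scaled to a coprime integer one):
B = 30
found = [(x, y, z)
         for x in range(-B, B + 1)
         for y in range(-B, B + 1)
         for z in range(-B, B + 1)
         if 3 * x**3 + 4 * y**3 + 5 * z**3 == 0 and (x, y, z) != (0, 0, 0)]
print(found)   # []

# Yet nontrivial solutions exist modulo small primes, e.g. modulo 7:
print([(x, y, z) for x in range(7) for y in range(7) for z in range(7)
       if (3 * x**3 + 4 * y**3 + 5 * z**3) % 7 == 0
       and (x, y, z) != (0, 0, 0)][:3])
```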
Roger Heath-Brownshowed[2]that every cubic form over the integers in at least 14 variables represents 0, improving on earlier results ofDavenport.[3]Since every cubic form over the p-adic numbers with at least ten variables represents 0,[2]the local–global principle holds trivially for cubic forms over the rationals in at least 14 variables.
Restricting to non-singular forms, one can do better than this: Heath-Brown proved that every non-singular cubic form over the rational numbers in at least 10 variables represents 0,[4]thus trivially establishing the Hasse principle for this class of forms. It is known that Heath-Brown's result is best possible in the sense that there exist non-singular cubic forms over the rationals in 9 variables that do not represent zero.[5]However,Hooleyshowed that the Hasse principle holds for the representation of 0 by non-singular cubic forms over the rational numbers in at least nine variables.[6]Davenport, Heath-Brown and Hooley all used theHardy–Littlewood circle methodin their proofs. According to an idea ofManin, the obstructions to the Hasse principle holding for cubic forms can be tied into the theory of theBrauer group; this is theBrauer–Manin obstruction, which accounts completely for the failure of the Hasse principle for some classes of variety. However,Skorobogatovhas shown that the Brauer–Manin obstruction cannot explain all the failures of the Hasse principle.[7]
Counterexamples byFujiwaraandSudoshow that the Hasse–Minkowski theorem is not extensible to forms of degree 10n+ 5, wherenis a non-negative integer.[8]
On the other hand,Birch's theoremshows that ifdis any odd natural number, then there is a numberN(d) such that any form of degreedin more thanN(d) variables represents 0: the Hasse principle holds trivially.
TheAlbert–Brauer–Hasse–Noether theoremestablishes a local–global principle for the splitting of acentral simple algebraAover an algebraic number fieldK. It states that ifAsplits over everycompletionKvthen it is isomorphic to amatrix algebraoverK.
The Hasse principle foralgebraic groupsstates that ifGis a simply-connected algebraic group defined over theglobal fieldkthen the mapH1(k,G)→∏sH1(ks,G){\displaystyle H^{1}(k,G)\to \prod _{s}H^{1}(k_{s},G)}is injective, where the product is over all placessofk.
The Hasse principle for orthogonal groups is closely related to the Hasse principle for the corresponding quadratic forms.
Kneser (1966)and several others verified the Hasse principle by case-by-case proofs for each group. The last case was the groupE8which was only completed byChernousov (1989)many years after the other cases.
The Hasse principle for algebraic groups was used in the proofs of theWeil conjecture for Tamagawa numbersand thestrong approximation theorem.
|
https://en.wikipedia.org/wiki/Hasse_principle
|
Aresidue number systemorresidue numeral system(RNS) is anumeral systemrepresentingintegersby their valuesmoduloseveralpairwise coprimeintegers called the moduli. This representation is allowed by theChinese remainder theorem, which asserts that, ifMis the product of the moduli, there is, in an interval of lengthM, exactly one integer having any given set of modular values.
Using a residue numeral system forarithmetic operationsis also calledmulti-modular arithmetic.
Multi-modular arithmetic is widely used for computation with large integers, typically inlinear algebra, because it provides faster computation than with the usual numeral systems, even when the time for converting between numeral systems is taken into account. Other applications of multi-modular arithmetic includepolynomial greatest common divisor,Gröbner basiscomputation andcryptography.
A residue numeral system is defined by a set ofkintegersm1,m2,…,mk,{\displaystyle m_{1},m_{2},\ldots ,m_{k},}called themoduli, which are generally supposed to bepairwise coprime(that is, any two of them have agreatest common divisorequal to one). Residue number systems have been defined for non-coprime moduli, but are not commonly used because of worse properties.[1]
An integerxis represented in the residue numeral system by thefamilyof its remainders (indexed by the indexes of the moduli)x1,x2,…,xk{\displaystyle x_{1},x_{2},\ldots ,x_{k}}underEuclidean divisionby the moduli. That is,xi=xmodmi{\displaystyle x_{i}=x{\bmod {m}}_{i}}and0≤xi<mi{\displaystyle 0\leq x_{i}<m_{i}}for everyi.
LetMbe the product of all themi{\displaystyle m_{i}}. Two integers whose difference is a multiple ofMhave the same representation in the residue numeral system defined by themis. More precisely, theChinese remainder theoremasserts that each of theMdifferent sets of possible residues represents exactly oneresidue classmoduloM. That is, each set of residues represents exactly one integerX{\displaystyle X}in the interval0,…,M−1{\displaystyle 0,\dots ,M-1}. For signed numbers, the dynamic range is−⌊M/2⌋≤X≤⌊(M−1)/2⌋{\textstyle {-\lfloor M/2\rfloor }\leq X\leq \lfloor (M-1)/2\rfloor }(whenM{\displaystyle M}is even, generally an extra negative value is represented).[2]
For adding, subtracting and multiplying numbers represented in a residue number system, it suffices to perform the samemodular operationon each pair of residues. More precisely, ifm1,…,mk{\displaystyle m_{1},\ldots ,m_{k}}is the list of moduli, the sum of the integersxandy, respectively represented by the residues[x1,…,xk]{\displaystyle [x_{1},\ldots ,x_{k}]}and[y1,…,yk],{\displaystyle [y_{1},\ldots ,y_{k}],}is the integerzrepresented by[z1,…,zk],{\displaystyle [z_{1},\ldots ,z_{k}],}such thatzi=(xi+yi)modmi{\displaystyle z_{i}=(x_{i}+y_{i}){\bmod {m}}_{i}}fori= 1, ...,k(as usual, mod denotes themodulo operationconsisting of taking the remainder of theEuclidean divisionby the right operand). Subtraction and multiplication are defined similarly.
For a succession of operations, it is not necessary to apply the modulo operation at each step. It may be applied at the end of the computation, or, during the computation, for avoidingoverflowof hardware operations.
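A sketch of componentwise RNS arithmetic, with reconstruction by the Chinese remainder theorem; the moduli are an illustrative choice:

```python
from math import prod

moduli = (3, 5, 7)                  # pairwise coprime; M = 105
M = prod(moduli)

def to_rns(x):
    return tuple(x % m for m in moduli)

def from_rns(r):
    """CRT reconstruction: x = sum r_i * M_i * (M_i^-1 mod m_i) mod M."""
    x = 0
    for r_i, m in zip(r, moduli):
        M_i = M // m
        x += r_i * M_i * pow(M_i, -1, m)
    return x % M

a, b = 17, 30
ra, rb = to_rns(a), to_rns(b)
r_sum = tuple((x + y) % m for x, y, m in zip(ra, rb, moduli))
r_mul = tuple((x * y) % m for x, y, m in zip(ra, rb, moduli))
print(from_rns(r_sum), (a + b) % M)   # 47 47
print(from_rns(r_mul), (a * b) % M)   # 90 90
```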
However, operations such as magnitude comparison, sign computation, overflow detection, scaling, and division are difficult to perform in a residue number system.[3]
If two integers are equal, then all their residues are equal. Conversely, if all residues are equal, then the two integers are equal or differ by a multiple ofM. It follows that testing equality is easy.
By contrast, testing inequalities (x<y) is difficult and usually requires converting integers to the standard representation. As a consequence, this representation of numbers is not suitable for algorithms using inequality tests, such asEuclidean divisionand theEuclidean algorithm.
Division in residue numeral systems is problematic. On the other hand, ifB{\displaystyle B}is coprime withM{\displaystyle M}(that isbi≠0{\displaystyle b_{i}\not =0}) then the productX⋅B−1modM{\displaystyle X\cdot B^{-1}{\bmod {M}}}, which equals the exact quotientX/B{\displaystyle X/B}wheneverB{\displaystyle B}dividesX{\displaystyle X}, can be easily calculated componentwise asxibi−1modmi,{\displaystyle x_{i}b_{i}^{-1}{\bmod {m}}_{i},}whereB−1{\displaystyle B^{-1}}is themultiplicative inverseofB{\displaystyle B}moduloM{\displaystyle M}, andbi−1{\displaystyle b_{i}^{-1}}is the multiplicative inverse ofbi{\displaystyle b_{i}}modulomi{\displaystyle m_{i}}.
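Continuing the sketch above, exact division by a B coprime to M reduces to componentwise multiplication by the per-modulus inverses:

```python
# Reusing moduli, to_rns and from_rns from the previous sketch.
X, B = 84, 4                  # B divides X exactly, and gcd(B, 105) = 1
rX, rB = to_rns(X), to_rns(B)
r_div = tuple(x * pow(b, -1, m) % m for x, b, m in zip(rX, rB, moduli))
print(from_rns(r_div))        # 21
```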
RNS have applications in the field ofdigitalcomputer arithmetic. By decomposing a large integer in this way into a set of smaller integers, a large calculation can be performed as a series of smaller calculations that can be performed independently and in parallel.
|
https://en.wikipedia.org/wiki/Residue_number_system
|
In cryptanalysis andcomputer security,password crackingis the process of guessing passwords[1]protecting acomputer system. A common approach (brute-force attack) is to repeatedly try guesses for the password and to check them against an availablecryptographic hashof the password.[2]Another type of approach ispassword spraying, which is often automated and occurs slowly over time in order to remain undetected, using a list of common passwords.[3]
The purpose of password cracking might be to help a user recover a forgotten password (since installing an entirely new password would involve system administration privileges), to gain unauthorized access to a system, or to act as a preventive measure wherebysystem administratorscheck for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence to which a judge has allowed access, when a particular file's permissions are restricted.
The time to crack a password is related to bit strength, which is a measure of the password'sentropy, and the details of how the password is stored. Most methods of password cracking require the computer to produce many candidate passwords, each of which is checked. One example isbrute-forcecracking, in which a computer trieseverypossible key or password until it succeeds. With multiple processors, this time can be optimized through searching from the last possible group of symbols and the beginning at the same time, with other processors being placed to search through a designated selection of possible passwords.[4]More common methods of password cracking, such asdictionary attacks, pattern checking, and variations of common words, aim to optimize the number of guesses and are usually attempted before brute-force attacks. Higher password bit strength exponentially increases the number of candidate passwords that must be checked, on average, to recover the password and reduces the likelihood that the password will be found in any cracking dictionary.[5]
The ability to crack passwords using computer programs is also a function of the number of possible passwords per second which can be checked. If a hash of the target password is available to the attacker, this number can be in the billions or trillions per second, since anoffline attackis possible. If not, the rate depends on whether the authentication software limits how often a password can be tried, either by time delays,CAPTCHAs, or forced lockouts after some number of failed attempts. Another situation where quick guessing is possible is when the password is used to form acryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data.
For some kinds of password hash, ordinary desktop computers can test over a hundred million passwords per second using password cracking tools running on a general purpose CPU and billions of passwords per second using GPU-based password cracking tools[1][6][7](seeJohn the Ripperbenchmarks).[8]The rate of password guessing depends heavily on the cryptographic function used by the system to generate password hashes. A suitable password hashing function, such asbcrypt, is many orders of magnitude better than a naive function like simpleMD5orSHA. A user-selected eight-character password with numbers, mixed case, and symbols, with commonly selected passwords and other dictionary matches filtered out, reaches an estimated 30-bit strength, according to NIST. 2³⁰ is only one billion permutations[9]and would be cracked in seconds if the hashing function were naive. When ordinary desktop computers are combined in a cracking effort, as can be done withbotnets, the capabilities of password cracking are considerably extended. In 2002,distributed.netsuccessfully found a 64-bitRC5key in four years, in an effort which included over 300,000 different computers at various times, and which generated an average of over 12 billion keys per second.[10]
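Back-of-the-envelope arithmetic for the figures quoted here (the 350 billion guesses per second rate appears in the GPU-cluster example below):

```python
# ~30-bit user-chosen password (the NIST estimate above):
print(2**30 / 1e8, "s at 10^8 guesses/s")         # ~10.7 seconds
# Full 8-character printable-ASCII keyspace at 350e9 NTLM guesses/s:
print(95**8 / 350e9 / 3600, "hours")              # ~5.3 hours
```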
Graphics processing unitscan speed up password cracking by a factor of 50 to 100 over general purpose computers for specific hashing algorithms. As an example, in 2011, available commercial products claimed the ability to test up to 2,800,000,000NTLMpasswords a second on a standard desktop computer using a high-end graphics processor.[11]Such a device can crack a 10-letter single-case password in one day. The work can be distributed over many computers for an additional speedup proportional to the number of available computers with comparable GPUs. However some algorithms run slowly, or even are specifically designed to run slowly, on GPUs. Examples areDES,Triple DES,bcrypt,scrypt, andArgon2.
Hardware acceleration in aGPUhas enabled resources to be used to increase the efficiency and speed of a brute force attack for most hashing algorithms. In 2012, Stricture Consulting Group unveiled a 25-GPU cluster that achieved a brute force attack speed of 350 billion guesses of NTLM passwords per second, allowing them to check958{\textstyle 95^{8}}password combinations in 5.5 hours, enough to crack all 8-character alpha-numeric-special-character passwords commonly used in enterprise settings. Using ocl-HashcatPlus on a VirtualOpenCLcluster platform,[12]the Linux-basedGPU clusterwas used to "crack 90 percent of the 6.5 million password hashes belonging to users of LinkedIn".[13]
For some specific hashing algorithms, CPUs and GPUs are not a good match. Purpose-made hardware is required to run at high speeds. Custom hardware can be made usingFPGAorASICtechnology. Development for both technologies is complex and (very) expensive. In general, FPGAs are favorable in small quantities, ASICs are favorable in (very) large quantities, more energy efficient, and faster. In 1998, theElectronic Frontier Foundation(EFF) built a dedicated password cracker using ASICs. Their machine,Deep Crack, broke a DES 56-bit key in 56 hours, testing over 90 billion keys per second.[14]In 2017, leaked documents showed that ASICs were used for a military project that had a potential to code-break many parts of the Internet communications with weaker encryption.[15]Since 2019, John the Ripper supports password cracking for a limited number of hashing algorithms using FPGAs.[16]Commercial companies are now using FPGA-based setups for password cracking.[17]
Passwords that are difficult to remember will reduce the security of a system, because users may need to write down or electronically store the password, may need frequent password resets, and are more likely to re-use the same password across different accounts.
Similarly, the more stringent the requirements for password strength, e.g. "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[18]
In "The Memorability and Security of Passwords",[19]Jeff Yanet al.examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two unrelated words is another good method. Having a personally designed "algorithm" for generating obscure passwords is another good method.
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalizes one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1': substitutions which are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.
Research detailed in an April 2015 paper by several professors atCarnegie Mellon Universityshows that people's choices of password structure often follow several known patterns. For example, when password requirements require a long minimum length such as 16 characters, people tend to repeat characters or even entire words within their passwords.[20]As a result, passwords may be much more easily cracked than their mathematical probabilities would otherwise indicate. Passwords containing one digit, for example, disproportionately include it at the end of the password.[20]
On July 16, 1998,CERTreported an incident where an attacker had found 186,126 encrypted passwords. By the time the breach was discovered, 47,642 passwords had already been cracked.[21]
In December 2009, a major password breach ofRockyou.comoccurred that led to the release of 32 million passwords. The attacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the internet. Passwords were stored incleartextin the database and were extracted through anSQL injectionvulnerability. TheImpervaApplication Defense Center (ADC) did an analysis on the strength of the passwords.[22]Among the key findings: about 30% of users chose passwords of six or fewer characters, almost 60% chose passwords from a limited set of alphanumeric characters, and nearly 50% used names, slang words, dictionary words, or trivial choices such as consecutive digits, with "123456" the most common password.
In June 2011,NATO(North Atlantic Treaty Organization) suffered a security breach that led to the public release of first and last names, usernames, and passwords of more than 11,000 registered users of their e-bookshop. The data were leaked as part ofOperation AntiSec, a movement that includesAnonymous,LulzSec, and other hacking groups and individuals.[23]
On July 11, 2011,Booz Allen Hamilton, a large American consulting firm that does a substantial amount of work forthe Pentagon, had its servers hacked byAnonymousand leaked the same day. "The leak, dubbed 'Military Meltdown Monday', includes 90,000 logins of military personnel—including personnel fromUSCENTCOM,SOCOM, theMarine Corps, variousAir Forcefacilities,Homeland Security,State Departmentstaff, and what looks like private-sector contractors."[24]These leaked passwords were found to be hashed withunsaltedSHA-1, and were later analyzed by the ADC team atImperva, revealing that even some military personnel used passwords as weak as "1234".[25]
On July 18, 2011, Microsoft Hotmail banned the password: "123456".[26]
In July 2015, a group calling itself "The Impact Team"stole the user data of Ashley Madison.[27]Many passwords were hashed using both the relatively strongbcryptalgorithm and the weakerMD5hash. Attacking the latter algorithm allowed some 11 million plaintext passwords to be recovered by password cracking group CynoSure Prime.[28]
One method of preventing a password from being cracked is to ensure that attackers cannot get access even to the hashed password. For example, on theUnixoperating system, hashed passwords were originally stored in a publicly accessible file/etc/passwd. On modern Unix (and similar) systems, on the other hand, they are stored in theshadow passwordfile/etc/shadow, which is accessible only to programs running with enhanced privileges (i.e., "system" privileges). This makes it harder for a malicious user to obtain the hashed passwords in the first instance; however, many collections of password hashes have been stolen despite such protection. Some common network protocols also transmit passwords in cleartext or use weak challenge/response schemes.[29][30]
The use ofsalt, a random value unique to each password that is incorporated in the hashing, prevents multiple hashes from being attacked simultaneously and also prevents the creation of pre-computed dictionaries such asrainbow tables.
Another approach is to combine a site-specific secret key with the password hash, which prevents plaintext password recovery even if the hashed values are purloined. Howeverprivilege escalationattacks that can steal protected hash files may also expose the site secret. A third approach is to usekey derivation functionsthat reduce the rate at which passwords can be guessed.[31]: 5.1.1.2
Modern Unix Systems have replaced the traditionalDES-based password hashing functioncrypt()with stronger methods such ascrypt-SHA,bcrypt, andscrypt.[32]Other systems have also begun to adopt these methods. For instance, the Cisco IOS originally used a reversibleVigenère cipherto encrypt passwords, but now uses md5-crypt with a 24-bit salt when the "enable secret" command is used.[33]These newer methods use large salt values which prevent attackers from efficiently mounting offline attacks against multiple user accounts simultaneously. The algorithms are also much slower to execute which drastically increases the time required to mount a successful offline attack.[34]
Many hashes used for storing passwords, such asMD5and theSHAfamily, are designed for fast computation with low memory requirements and efficient implementation in hardware. Multiple instances of these algorithms can be run in parallel ongraphics processing units(GPUs), speeding cracking. As a result, fast hashes are ineffective in preventing password cracking, even with salt. Somekey stretchingalgorithms, such asPBKDF2andcrypt-SHAiteratively calculate password hashes and can significantly reduce the rate at which passwords can be tested, if the iteration count is high enough. Other algorithms, such asscryptarememory-hard, meaning they require relatively large amounts of memory in addition to time-consuming computation and are thus more difficult to crack using GPUs and custom integrated circuits.
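A minimal sketch of salted, deliberately slow password hashing using the standard library's PBKDF2; the iteration count shown is an illustrative choice, not a normative one:

```python
import hashlib
import hmac
import os

# Salted, iterated hashing with PBKDF2-HMAC-SHA256 (Python standard library).
salt = os.urandom(16)                    # unique random salt per password
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

# Verification recomputes with the stored salt; compare in constant time
# to avoid timing side channels.
candidate = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
print(hmac.compare_digest(stored, candidate))   # True
```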
In 2013 a long-termPassword Hashing Competitionwas announced to choose a new, standard algorithm for password hashing,[35]withArgon2chosen as the winner in 2015. Another algorithm,Balloon, is recommended byNIST.[36]Both algorithms are memory-hard.
Solutions like asecurity tokenaddress the problem by constantly shifting the password. Those solutions abruptly reduce the timeframe available forbrute forcing(the attacker needs to break and use the password within a single shift) and they reduce the value of stolen passwords because of their short validity.
There are many password cracking software tools, but the most popular[37]areAircrack-ng,Cain & Abel,John the Ripper,Hashcat,Hydra,DaveGrohl, andElcomSoft. Manylitigation support softwarepackages also include password cracking functionality. Most of these packages employ a mixture of cracking strategies; algorithms with brute-force and dictionary attacks proving to be the most productive.[38]
The increased availability of computing power and beginner friendly automated password cracking software for a number of protection schemes has allowed the activity to be taken up byscript kiddies.[39]
|
https://en.wikipedia.org/wiki/Password_cracking
|
Incryptography, anonceis an arbitrary number that can be used just once in a cryptographic communication.[1]It is often arandomorpseudo-randomnumber issued in anauthentication protocolto ensure that each communication session is unique, and therefore that old communications cannot be reused inreplay attacks. Nonces can also be useful asinitialization vectorsand incryptographic hash functions.
A nonce is an arbitrary number used only once in a cryptographic communication, in the spirit of anonce word. They are oftenrandomorpseudo-randomnumbers. Many nonces also include atimestampto ensure exact timeliness, though this requiresclock synchronisationbetween organisations. The addition of a client nonce ("cnonce") helps to improve the security in some ways as implemented indigest access authentication. To ensure that a nonce is used only once, it should be time-variant (including a suitably fine-grained timestamp in its value), or generated with enough random bits to ensure an insignificantly low chance of repeating a previously generated value. Some authors define pseudo-randomness (or unpredictability) as a requirement for a nonce.[2]
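A minimal sketch of one such construction, combining a fine-grained timestamp with cryptographically strong random bytes so that accidental reuse is negligibly likely; the helper name is illustrative:

```python
import secrets
import time

def make_nonce() -> str:
    # Time-variant component (nanosecond timestamp) plus 128 random bits.
    return f"{time.time_ns():x}-{secrets.token_hex(16)}"

print(make_nonce())
```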
Nonce is a word dating back toMiddle Englishfor something only used once or temporarily (often with the construction "for the nonce"). It descends from the construction "then anes" ("the one [purpose]").[3]Afalse etymologyclaiming it to stand for "number used once"[4]or similar is incorrect.
Authentication protocolsmay use nonces to ensure that old communications cannot be reused inreplay attacks. For instance, nonces are used inHTTPdigest access authenticationto calculate anMD5digest of thepassword. The nonces are different each time the 401 authentication challengeresponse codeis presented, thus makingreplay attacksvirtually impossible. The scenario of ordering products over the Internet can provide an example of the usefulness of nonces in replay attacks. An attacker could take the encrypted information and—without needing to decrypt—could continue to send a particular order to the supplier, thereby ordering products over and over again under the same name and purchase information. The nonce is used to give 'originality' to a given message so that if the company receives any other orders from the same person with the same nonce, it will discard those as invalid orders.
A nonce may be used to ensure security for astream cipher. Where the same key is used for more than one message, a different nonce is used to ensure that thekeystreamis different for different messages encrypted with that key; often the message number is used.
Secret nonce values are used by theLamport signaturescheme as a signer-side secret which can be selectively revealed for comparison to public hashes for signature creation and verification.
Initialization vectorsmay be referred to as nonces, as they are typically random or pseudo-random.
Nonces are used inproof-of-work systemsto vary the input to acryptographic hash functionso as to obtain a hash for a certain input that fulfils certain arbitrary conditions. In doing so, it becomes far more difficult to create a "desirable" hash than to verify it, shifting the burden of work onto one side of a transaction or system. For example, proof of work, using hash functions, was considered as a means to combatemail spamby forcing email senders to find a hash value for the email (which included a timestamp to prevent pre-computation of useful hashes for later use) that had an arbitrary number of leading zeroes, by hashing the same input with a large number of values until a "desirable" hash was obtained.
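A sketch of the hash-based proof-of-work idea described above, using SHA-256 and a hex-digit difficulty target; the message and difficulty are illustrative:

```python
import hashlib
from itertools import count

def proof_of_work(message: bytes, difficulty: int) -> int:
    """Find a nonce so that SHA-256(message || nonce) starts with
    `difficulty` zero hex digits. Finding one takes ~16^difficulty tries
    on average; verifying it takes a single hash."""
    for nonce in count():
        digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

n = proof_of_work(b"example email header", 4)
print(n, hashlib.sha256(b"example email header" + str(n).encode()).hexdigest())
```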
Similarly, theBitcoinblockchainhashing algorithm can be tuned to an arbitrary difficulty by changing the required minimum/maximum value of the hash so that the number of bitcoins awarded for new blocks does not increase linearly with increased network computation power as new users join. This is likewise achieved by forcing Bitcoin miners to add nonce values to the value being hashed to change the hash algorithm output. As cryptographic hash algorithms cannot easily be predicted based on their inputs, this makes the act of blockchain hashing and the possibility of being awarded bitcoins something of a lottery, where the first "miner" to find a nonce that delivers a desirable hash is awarded bitcoins.
|
https://en.wikipedia.org/wiki/Cryptographic_nonce
|
Incryptography, aninitialization vector(IV) orstarting variable[1]is an input to acryptographic primitivebeing used to provide the initial state. The IV is typically required to berandomorpseudorandom, but sometimes an IV only needs to be unpredictable or unique.Randomizationis crucial for someencryptionschemes to achievesemantic security, a property whereby repeated usage of the scheme under the samekeydoes not allow an attacker to infer relationships between (potentially similar) segments of the encrypted message. Forblock ciphers, the use of an IV is described by themodes of operation.
Some cryptographic primitives require the IV only to be non-repeating, and the required randomness is derived internally. In this case, the IV is commonly called anonce(a number used only once), and the primitives (e.g.CBC) are consideredstatefulrather thanrandomized. This is because an IV need not be explicitly forwarded to a recipient but may be derived from a common state updated at both sender and receiver side. (In practice, a short nonce is still transmitted along with the message to consider message loss.) An example of stateful encryption schemes is thecounter modeof operation, which has asequence numberfor a nonce.
The IV size depends on the cryptographic primitive used; for block ciphers it is generally the cipher's block-size. In encryption schemes, the unpredictable part of the IV has at best the same size as the key to compensate for time/memory/data tradeoff attacks.[2][3][4][5]When the IV is chosen at random, the probability of collisions due to thebirthday problemmust be taken into account. Traditional stream ciphers such asRC4do not support an explicit IV as input, and a custom solution for incorporating an IV into the cipher's key or internal state is needed. Some designs realized in practice are known to be insecure; theWEPprotocol is a notable example, and is prone to related-IV attacks.
Ablock cipheris one of the most basicprimitivesin cryptography, and frequently used for dataencryption. However, by itself, it can only be used to encode a data block of a predefined size, called theblock size. For example, a single invocation of theAESalgorithm transforms a 128-bitplaintextblock into aciphertextblock of 128 bits in size. Thekey, which is given as one input to the cipher, defines the mapping between plaintext and ciphertext. If data of arbitrary length is to be encrypted, a simple strategy is to split the data into blocks each matching the cipher's block size, and encrypt each block separately using the same key. This method is not secure as equal plaintext blocks get transformed into equal ciphertexts, and a third party observing the encrypted data may easily determine its content even when not knowing the encryption key.
To hide patterns in encrypted data while avoiding the re-issuing of a new key after each block cipher invocation, a method is needed to randomize the input data. In 1980, the U.S. National Bureau of Standards (now NIST) published a national standard document designated Federal Information Processing Standard (FIPS) PUB 81, which specified four so-called block cipher modes of operation, each describing a different solution for encrypting a set of input blocks. The first mode implements the simple strategy described above, and was specified as the electronic codebook (ECB) mode. In contrast, each of the other modes describes a process where ciphertext from one block encryption step gets intermixed with the data from the next encryption step. To initiate this process, an additional input value must be mixed with the first block; this value is referred to as an initialization vector. For example, the cipher-block chaining (CBC) mode requires an unpredictable value, of size equal to the cipher's block size, as additional input. This unpredictable value is XORed with the first plaintext block before subsequent encryption. In turn, the ciphertext produced in the first encryption step is XORed with the second plaintext block, and so on. The ultimate goal of encryption schemes is to provide semantic security: by this property, it is practically impossible for an attacker to draw any knowledge from observed ciphertext. It can be shown that each of the three additional modes specified by the NIST is semantically secure under so-called chosen-plaintext attacks.
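A minimal sketch of the CBC chaining just described, built on top of a raw block cipher; it assumes the third-party cryptography package is available for single-block AES, and is illustrative rather than production code:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(key: bytes, block: bytes) -> bytes:
    # One raw AES block encryption (ECB on a single block).
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0, "pad the message first"
    prev, out = iv, b""
    for i in range(0, len(plaintext), 16):
        # Each plaintext block is XORed with the previous ciphertext
        # block (or the IV for the first block) before encryption.
        prev = encrypt_block(key, xor(plaintext[i:i + 16], prev))
        out += prev
    return out

key, iv = os.urandom(16), os.urandom(16)  # CBC needs an unpredictable IV
ct = cbc_encrypt(key, iv, b"A" * 32)
assert ct[:16] != ct[16:32]  # equal plaintext blocks, different ciphertext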
Properties of an IV depend on the cryptographic scheme used. A basic requirement is uniqueness, which means that no IV may be reused under the same key. For block ciphers, repeated IV values devolve the encryption scheme into electronic codebook mode: equal IV and equal plaintext result in equal ciphertext. In stream cipher encryption, uniqueness is crucially important, as plaintext may be trivially recovered otherwise.
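A small demonstration of why uniqueness matters: under a reused keystream (a toy SHA-256-based stream here, not a real cipher), the XOR of two ciphertexts equals the XOR of the two plaintexts, leaking their relationship:

import hashlib

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Toy keystream generator for illustration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = b"k" * 16, b"same-iv!"          # the IV is (wrongly) reused
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = xor(p1, keystream(key, iv, len(p1)))
c2 = xor(p2, keystream(key, iv, len(p2)))
assert xor(c1, c2) == xor(p1, p2)          # plaintext relationship leaks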
Many schemes require the IV to be unpredictable by an adversary. This is effected by selecting the IV at random or pseudo-randomly. In such schemes, the chance of a duplicate IV is negligible, but the effect of the birthday problem must be considered. As with the uniqueness requirement, a predictable IV may allow recovery of (partial) plaintext.
Depending on whether the IV for a cryptographic scheme must be random or only unique, the scheme is called either randomized or stateful. While randomized schemes always require the IV chosen by a sender to be forwarded to receivers, stateful schemes allow sender and receiver to share a common IV state, which is updated in a predefined way at both sides.
Block cipher processing of data is usually described as a mode of operation. Modes are defined both for encryption and for authentication, though newer designs exist that combine both security goals in so-called authenticated encryption modes. While encryption and authenticated encryption modes usually take an IV matching the cipher's block size, authentication modes are commonly realized as deterministic algorithms, and the IV is set to zero or some other fixed value.
In stream ciphers, IVs are loaded into the keyed internal secret state of the cipher, after which a number of cipher rounds are executed prior to releasing the first bit of output. For performance reasons, designers of stream ciphers try to keep that number of rounds as small as possible, but determining the minimal secure number of rounds for a stream cipher is not a trivial task. Considering further issues such as entropy loss, unique to each cipher construction, related-IV and other IV-related attacks are a known security issue for stream ciphers, which makes IV loading in stream ciphers a serious concern and a subject of ongoing research.
The 802.11 encryption algorithm called WEP (short for Wired Equivalent Privacy) used a short, 24-bit IV, leading to IV reuse under the same key, which made it easy to crack.[7] Packet injection allowed for WEP to be cracked in times as short as several seconds. This ultimately led to the deprecation of WEP.
In cipher-block chaining mode (CBC mode), the IV need not be secret, but it must be unpredictable at encryption time: in particular, for any given plaintext, it must not be possible to predict the IV that will be associated with the plaintext before the IV is generated. Additionally, for the output feedback mode (OFB mode), the IV must be unique.[8] In particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0). If an attacker knows the IV (or the previous block of ciphertext) before specifying the next plaintext, they can check a guess about the plaintext of some block that was encrypted with the same key before, as sketched below. This is known as the TLS CBC IV attack, also called the BEAST attack.[9]
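A sketch of that predictable-IV check (the key, IVs, and the 16-byte blocks are illustrative; in the real attack the next IV is predicted by the attacker, not freely chosen, and it assumes the third-party cryptography package):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def E(key: bytes, block: bytes) -> bytes:  # one raw AES block encryption
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

key, iv_old = os.urandom(16), os.urandom(16)
c_old = E(key, xor(b"top secret block", iv_old))  # ciphertext the attacker saw

iv_next = os.urandom(16)           # stands in for a *predicted* next IV
guess = b"top secret block"
# Craft a plaintext so the cipher input equals the old input iff the
# guess is right: crafted XOR iv_next == guess XOR iv_old.
crafted = xor(xor(guess, iv_old), iv_next)
print("guess confirmed" if E(key, xor(crafted, iv_next)) == c_old else "guess wrong")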
|
https://en.wikipedia.org/wiki/Initialization_vector
|
Incryptography,paddingis any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g.sincerely yours.
Official messages often start and end in predictable ways:My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding withclassical ciphersis to prevent the cryptanalyst from using that predictability to findknown plaintext[1]that aids in breaking the encryption. Random length padding also prevents an attacker from knowing the exact length of the plaintext message.
A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the World War II Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in WWII, sent the following message, asking after Task Force Thirty Four (the main Allied strike fleet), to Admiral Bull Halsey, commander of the U.S. Third Fleet at the Battle of Leyte Gulf, on October 25, 1944:[2]
Where is, repeat, where is Task Force Thirty Four?[3]
With padding (bolded) andmetadataadded, the message became:
TURKEY TROTS TO WATERGG FROM CINCPAC ACTION COM THIRD FLEET INFO COMINCH CTF SEVENTY-SEVEN X WHERE IS RPT WHERE IS TASK FORCE THIRTY FOUR RRTHE WORLD WONDERS[3]
Halsey's radio operator mistook some of the padding for the message and so Admiral Halsey ended up reading the following message:
Where is, repeat, where is Task Force Thirty Four? The world wonders[3]
Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar.[2]Halsey's radio operator should have been tipped off by the lettersRRthat "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases.[2]
Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.) and if the plaintext does not exactly fit, it is often necessary to supply additional letters to fill out the pattern. Using nonsense letters for this purpose has a side benefit of making some kinds of cryptanalysis more difficult.
Most moderncryptographic hash functionsprocess messages in fixed-length blocks; all but the earliest hash functions include some sort of padding scheme. It is critical for cryptographic hash functions to employ termination schemes that prevent a hash from being vulnerable tolength extension attacks.
Many padding schemes are based on appending predictable data to the final block. For example, the pad could be derived from the total length of the message. This kind of padding scheme is commonly applied to hash algorithms that use the Merkle–Damgård construction, such as MD5, SHA-1, and the SHA-2 family (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256).[4]
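A sketch of this length-based padding in the SHA-2 style: append a single 1 bit (0x80), then zero bytes, then the original message length in bits as a 64-bit big-endian integer (MD5 uses the same structure but encodes the length little-endian):

def md_pad(message: bytes, block_size: int = 64) -> bytes:
    bit_len = len(message) * 8
    padded = message + b"\x80"                    # the single 1 bit
    padded += b"\x00" * ((block_size - 8 - len(padded)) % block_size)
    return padded + bit_len.to_bytes(8, "big")    # 64-bit length field

assert len(md_pad(b"abc")) % 64 == 0

Embedding the length is what terminates the message unambiguously and underpins resistance to the extension-style ambiguities mentioned above.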
Cipher-block chaining (CBC) mode is an example of a block cipher mode of operation. Some block cipher modes (CBC and PCBC essentially) for symmetric-key encryption algorithms require plain text input that is a multiple of the block size, so messages may have to be padded to bring them to this length.
There is currently[when?] a shift to using streaming modes of operation instead of block modes of operation.[citation needed] An example of streaming mode encryption is the counter mode of operation.[5] Streaming modes of operation can encrypt and decrypt messages of any size and therefore do not require padding. More intricate ways of ending a message such as ciphertext stealing or residual block termination avoid the need for padding.
A disadvantage of padding is that it makes the plain text of the message susceptible topadding oracle attacks. Padding oracle attacks allow the attacker to gain knowledge of the plain text without attacking the block cipher primitive itself. Padding oracle attacks can be avoided by making sure that an attacker cannot gain knowledge about the removal of the padding bytes. This can be accomplished by verifying amessage authentication code (MAC)ordigital signaturebeforeremoval of the padding bytes, or by switching to a streaming mode of operation.
Bit padding can be applied to messages of any size.
A single '1' bit is added to the message and then as many '0' bits as required (possibly none) are added. The number of '0' bits added will depend on the block boundary to which the message needs to be extended. In bit terms this is "1000 ... 0000".
This method can be used to pad messages which are any number of bits long, not necessarily a whole number of bytes long. For example, a message of 23 bits that is padded with 9 bits (one '1' followed by eight '0's, shown here as the final nine bits of an illustrative message) in order to fill a 32-bit block:
... | 1011 1001 1101 0100 0010 0111 0000 0000 |
This padding is the first step of a two-step padding scheme used in manyhash functionsincludingMD5andSHA. In this context, it is specified byRFC1321step 3.1.
This padding scheme is defined byISO/IEC 9797-1as Padding Method 2.
Byte padding can be applied to messages that can be encoded as an integral number ofbytes.
In ANSI X9.23, between 1 and 8 bytes are always added as padding. The block is padded with random bytes (although many implementations use 00) and the last byte of the block is set to the number of bytes added.[6]
Example:
In the following example the block size is 8 bytes, and padding is required for 4 bytes (in hexadecimal format):
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 04 |
ISO 10126 (withdrawn, 2007[7][8]) specifies that the padding should be done at the end of that last block with random bytes, and the padding boundary should be specified by the last byte.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes (the first three pad bytes are random; the last byte gives the pad length):
... | DD DD DD DD DD DD DD DD | DD DD DD DD 81 A6 23 04 |
PKCS#7is described inRFC 5652.
Padding is in whole bytes. The value of each added byte is the number of bytes that are added, i.e.Nbytes, each of valueNare added. The number of bytes added will depend on the block boundary to which the message needs to be extended.
The padding will be one of: 01; 02 02; 03 03 03; 04 04 04 04; and so on, up to an entire block of bytes all equal to the block length.
This padding method (as well as the previous two) is well-defined if and only ifNis less than 256.
Example:
In the following example, the block size is 8 bytes and padding is required for 4 bytes:
... | DD DD DD DD DD DD DD DD | DD DD DD DD 04 04 04 04 |
If the length of the original data is an integer multiple of the block size B, then an extra block of bytes with value B is added. This is necessary so the deciphering algorithm can determine with certainty whether the last byte of the last block is a pad byte indicating the number of padding bytes added or part of the plaintext message. Consider a plaintext message that is an integer multiple of B bytes with the last byte of plaintext being 01. With no additional information, the deciphering algorithm will not be able to determine whether the last byte is a plaintext byte or a pad byte. However, by adding B bytes each of value B after the 01 plaintext byte, the deciphering algorithm can always treat the last byte as a pad byte and strip the appropriate number of pad bytes off the end of the ciphertext, as given by the value of that last byte.
PKCS#5 padding is identical to PKCS#7 padding, except that it has only been defined for block ciphers that use a 64-bit (8-byte) block size. In practice, the two can be used interchangeably.
The maximum block size for this scheme is 255 bytes, as that is the largest value a single pad byte can encode.
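A minimal PKCS#7 pad/unpad sketch (how unpadding errors are surfaced matters in practice; see the padding-oracle caveat above):

def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(data) % block_size)   # always 1..block_size bytes
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if n == 0 or n > len(data) or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

assert pkcs7_unpad(pkcs7_pad(b"DDDD", 8)) == b"DDDD"
assert pkcs7_pad(b"A" * 8, 8).endswith(bytes([8]) * 8)  # full extra block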
ISO/IEC 7816-4:2005[9]is identical to the bit padding scheme, applied to a plain text ofNbytes. This means in practice that the first byte is a mandatory byte valued '80' (Hexadecimal) followed, if needed, by 0 toN− 1 bytes set to '00', until the end of the block is reached. ISO/IEC 7816-4 itself is a communication standard for smart cards containing a file system, and in itself does not contain any cryptographic specifications.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes:
... | DD DD DD DD DD DD DD DD | DD DD DD DD 80 00 00 00 |
The next example shows a padding of just one byte:
... | DD DD DD DD DD DD DD 80 |
All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption,[citation needed]although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1[10]andISO/IEC 9797-1.[11]
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes:
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 00 |
Zero padding may not be reversible if the original file ends with one or more zero bytes, making it impossible to distinguish plaintext data bytes from padding bytes. It may be used when the length of the message can be derived out-of-band. It is often applied to binary-encoded strings (null-terminated strings), as the null character can usually be stripped off as whitespace.
Zero padding is sometimes also referred to as "null padding" or "zero byte padding". Some implementations may add an additional block of zero bytes if the plaintext is already divisible by the block size.[citation needed]
Inpublic key cryptography, padding is the process of preparing a message for encryption or signing using a specification or scheme such asPKCS#1v2.2,OAEP,PSS, PSSR, IEEE P1363 EMSA2 and EMSA5. A modern form of padding for asymmetric primitives isOAEPapplied to theRSA algorithm, when it is used to encrypt a limited number of bytes.
The operation is referred to as "padding" because originally, random material was simply appended to the message to make it long enough for the primitive. This form of padding is not secure and is therefore no longer applied. A modern padding scheme aims to ensure that the attacker cannot manipulate the plaintext to exploit the mathematical structure of the primitive and will usually be accompanied by a proof, often in therandom oracle model, that breaking the padding scheme is as hard as solving the hard problem underlying the primitive.
Even if perfect cryptographic routines are used, the attacker can gain knowledge of the amount of traffic that was generated. The attacker might not know what Alice and Bob were talking about, but can know that they were talking and how much they talked. In some circumstances this leakage can be highly compromising. Consider, for example, a military organising a secret attack against another nation: merely learning that there is a lot of secret activity going on may be enough to alert the other nation.
As another example, when encryptingVoice Over IPstreams that use variable bit rate encoding, the number of bits per unit of time is not obscured, and this can be exploited to guess spoken phrases.[12]Similarly, the burst patterns that common video encoders produce are often sufficient to identify the streaming video a user is watching uniquely.[13]Even thetotal sizeof an object alone, such as a website, file, software package download, or online video, can uniquely identify an object, if the attacker knows or can guess a known set the object comes from.[14][15][16]Theside-channelof encrypted content length was used to extract passwords fromHTTPScommunications in the well-knownCRIMEandBREACHattacks.[17]
Padding an encrypted message can maketraffic analysisharder by obscuring the true length of its payload. The choice of length to pad a message to may be made either deterministically or randomly; each approach has strengths and weaknesses that apply in different contexts.
A random number of additional padding bits or bytes may be appended to the end of a message, together with an indication at the end how much padding was added. If the amount of padding is chosen as a uniform random number between 0 and some maximum M, for example, then an eavesdropper will be unable to determine the message's length precisely within that range. If the maximum padding M is small compared to the message's total size, then this padding will not add muchoverhead, but the padding will obscure only the least-significant bits of the object's total length, leaving the approximate length of large objects readily observable and hence still potentially uniquely identifiable by their length. If the maximum padding M is comparable to the size of the payload, in contrast, an eavesdropper's uncertainty about the message's true payload size is much larger, at the cost that padding may add up to 100% overhead (2×blow-up) to the message.
In addition, in common scenarios in which an eavesdropper has the opportunity to see many successive messages from the same sender, and those messages are similar in ways the attacker knows or can guess, then the eavesdropper can use statistical techniques to decrease and eventually even eliminate the benefit of randomized padding. For example, suppose a user's application regularly sends messages of the same length, and the eavesdropper knows or can guess this fact, for example by fingerprinting the user's application. Alternatively, an active attacker might be able to induce an endpoint to send messages regularly, such as if the victim is a public server. In such cases, the eavesdropper can simply compute the average over many observations to determine the length of the regular message's payload.
A deterministic padding scheme always pads a message payload of a given length to form an encrypted message of a particular corresponding output length. When many payload lengths map to the same padded output length, an eavesdropper cannot distinguish or learn any information about the payload's true length within one of these lengthbuckets, even after many observations of the identical-length messages being transmitted. In this respect, deterministic padding schemes have the advantage of not leaking any additional information with each successive message of the same payload size.
On the other hand, suppose an eavesdropper can benefit from learning aboutsmallvariations in payload size, such as plus or minus just one byte in a password-guessing attack for example. If the message sender is unlucky enough to send many messages whose payload lengths vary by only one byte, and that length is exactly on the border between two of the deterministic padding classes, then these plus-or-minus one payload lengths will consistently yield different padded lengths as well (plus-or-minus one block for example), leaking exactly the fine-grained information the attacker desires. Against such risks, randomized padding can offer more protection by independently obscuring the least-significant bits of message lengths.
Common deterministic padding methods include padding to a constant block size and padding to the next-larger power of two. Like randomized padding with a small maximum amount M, however, padding deterministically to a block size much smaller than the message payload obscures only the least-significant bits of the message's true length, leaving the message's approximate length largely unprotected. Padding messages to a power of two (or any other fixed base) reduces the maximum amount of information that the message can leak via its length from O(log M) to O(log log M). Padding to a power of two increases message size overhead by up to 100%, however, and padding to powers of larger integer bases increases maximum overhead further.
The PADMÉ scheme, proposed forpadded uniform random blobs or PURBs, deterministically pads messages to lengths representable as afloating point numberwhose mantissa is no longer (i.e., contains no more significant bits) than its exponent.[16]This length constraint ensures that a message leaks at mostO(log logM)bits of information via its length, like padding to a power of two, but incurs much less overhead of at most 12% for tiny messages and decreasing gradually with message size.
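A sketch of both deterministic rules, with the PADMÉ computation reconstructed from its published description (treat the exact rounding as an assumption; the PURB paper is the authoritative definition):

def pad_pow2(length: int) -> int:
    # Pad to the next power of two: <= O(log log M) bits leaked, up to 100% overhead.
    return 1 << (length - 1).bit_length()

def padme(length: int) -> int:
    # PADMÉ: round up so the padded length, seen as a float, has a
    # mantissa no longer than its exponent.
    e = length.bit_length() - 1        # floor(log2 length)
    s = e.bit_length()                 # bits allowed in the mantissa
    mask = (1 << (e - s)) - 1          # low-order bits to round away
    return (length + mask) & ~mask

for L in (9, 100, 1000, 1_000_000):
    print(L, pad_pow2(L), padme(L))    # PADMÉ overhead shrinks as L grows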
|
https://en.wikipedia.org/wiki/Padding_(cryptography)
|
TheHasty Pudding cipher(HPC) is a variable-block-sizeblock cipherdesigned byRichard Schroeppel, which was an unsuccessful candidate in the competition for selecting theU.S.Advanced Encryption Standard(AES). It has a number of unusual properties for a block cipher: its input block size and key length are variable, and it includes an additional input parameter called the "spice" for use as a secondary, non-secret key. The Hasty Pudding cipher was the only AES candidate designed exclusively by U.S. cryptographers.[1][2]
The Hasty Pudding cipher is in thepublic domain,[3]andopen sourceimplementations are available.[4]
The Hasty Pudding cipher consists of 5 different sub-ciphers, selected by block size:[5] HPC-Tiny (0–35 bits), HPC-Short (36–64 bits), HPC-Medium (65–128 bits), HPC-Long (129–512 bits), and HPC-Extended (513 bits and up).
The Hasty Pudding cipher algorithms all use 64-bit words internally. The cipher is designed to run on 64-bitmachines, which can easily perform simple operations on 64-bit words.
The Hasty Pudding cipher can take a key of any number of bits for any one of the five subciphers. The cipher itself uses a key table of 16,384 bits (256 64-bit words). To derive the key table from the key, the key expansion function initializes the key table from the key and then repeatedly stirs it, following the algorithm given in the cipher's specification.[5]
Each of the subciphers uses a different algorithm, but there are certain similarities. Three inputs are used to determine the ciphertext: the plaintext (in several 64-bit words plus one "fragment"), the spice (eight 64-bit words, with default value 0), and the key table. The operations within the cipher consist ofstirring, which combines internal variables in various ways with values from the key table and spice at regular intervals. HPC-Short uses two fixed permutations in addition, and HPC-Tiny consists of many special sub-cases.
Decryption involves undoing the steps of encryption one by one. Many operations are easily undone (e.g. s0 = s0 + s1 is undone by computing s0 = s0 − s1). Other operations are more complex to undo and require subcipher-specific techniques.
The Hasty Pudding cipher can also be used to encrypt values in a range that does not translate to strings with an integral number of bits; for instance, it can encrypt a number from 0 to N by producing another number from 0 to N. It does this by using the smallest subcipher that can handle the input as a bit string and applying it to the input as a bit string, repeatedly, until the output is in the proper range, a technique known as cycle walking.[5]
Schroeppel claimed that the Hasty Pudding cipher was the fastest AES candidate on a 64-bit architecture; he claimed it was twice as fast as its nearest competitor, DFC, and three times as fast as the other candidates, and that its performance on a 32-bit machine was adequate.[6] Comments from others did not support this view; for instance, Schneier et al.'s analysis ranked the Hasty Pudding cipher 4th best (376 cycles) on a 64-bit machine, although for Rijndael and Twofish the performance was only estimated.[7] On a 32-bit Pentium, Hasty Pudding encryption was rated by Schneier et al. at 1600 clock cycles, 10th best out of the 15 candidates.[7] Schneier et al. and Schroeppel noted that the cipher's speed would be significantly impacted on a 32-bit machine because of its heavy use of 64-bit operations, particularly bit shifts.[3][7]
The Hasty Pudding cipher's key setup was rated as relatively slow: 120,000 cycles on a Pentium.[7]
The cipher was criticized for its performance onsmartcards. Specifically, some comments pointed out the difficulty of keeping over 2KB of RAM for the key table.[8]
There have been relatively few results on attacking the Hasty Pudding cipher. Early in the AES process, David Wagner noted that relatively large classes of Hasty Pudding keys were equivalent in that they led to the same key table.[9] This was expanded upon by D'Halluin et al., who noted that for 128-bit keys, approximately 2^120 keys are weak keys, each of which has 2^30 equivalent keys.[10] In response to this attack, Schroeppel modified the key expansion algorithm to include one additional step.[5]
Despite the relative lack of cryptanalysis, the Hasty Pudding cipher was criticized for its hard-to-understand design and its lack of grounding in research results.[9][11]Schroeppel has offered a bottle ofDom Pérignon champagneto the best paper presenting progress on the Hasty Pudding cipher.[3]It did not make the second round of consideration for AES.[12]
The Hasty Pudding cipher is considered the firsttweakable block cipher.[13]
|
https://en.wikipedia.org/wiki/Hasty_Pudding_cipher#Encryption_and_decryption
|
Arainbow tableis aprecomputedtablefor caching the outputs of acryptographic hash function, usually forcrackingpassword hashes. Passwords are typically stored not in plain text form, but as hash values. If such a database of hashed passwords falls into the hands of attackers, they can use a precomputed rainbow table to recover the plaintext passwords. A common defense against this attack is to compute the hashes using akey derivation functionthat adds a "salt" to each password before hashing it, with different passwords receiving different salts, which are stored in plain text along with the hash.
Rainbow tables are a practical example of aspace–time tradeoff: they use less computer processing time and more storage than abrute-force attackwhich calculates a hash on every attempt, but more processing time and less storage than a simple table that stores the hash of every possible password.
Rainbow tables were invented by Philippe Oechslin[1]as an application of an earlier, simpler algorithm byMartin Hellman.[2]
For userauthentication, passwords are stored either asplaintextorhashes. Since passwords stored as plaintext are easily stolen if database access is compromised, databases typically store hashes instead. Thus, no one – including the authentication system – can learn a password merely by looking at the value stored in the database.
When a user enters a password for authentication, a hash is computed for it and then compared to the stored hash for that user. Authentication fails if the two hashes do not match; moreover, authentication would equally fail if a hashed value were entered as a password, since the authentication system would hash it a second time.
To learn a password from a hash is to find a string which, when input into the hash function, creates that same hash. This is the same asinvertingthe hash function.
Thoughbrute-force attacks(e.g.dictionary attacks) may be used to try to invert a hash function, they can become infeasible when the set of possible passwords is large enough. An alternative to brute-force is to useprecomputed hash chaintables. Rainbow tables are a special kind of such table that overcome certaintechnical difficulties.
The termrainbow tableswas first used in Oechslin's initial paper. The term refers to the way different reduction functions are used to increase the success rate of the attack. The original method by Hellman uses many small tables with a different reduction function each. Rainbow tables are much bigger and use a different reduction function in each column. When colors are used to represent the reduction functions, a rainbow appears in the rainbow table.
Figure 2 of Oechslin's paper contains a black-and-white graphic that illustrates how these sections are related. For his presentation at the Crypto 2003 conference, Oechslin added color to the graphic in order to make the rainbow association more clear.
Given a password hash function H and a finite set of passwords P, the goal is to precompute a data structure that, given any output h of the hash function, can either locate an element p in P such that H(p) = h, or determine that there is no such p in P. The simplest way to do this is to compute H(p) for all p in P, but then storing the table requires Θ(|P|n) bits of space, where |P| is the size of the set P and n is the size of an output of H, which is prohibitive for large |P|. Hash chains are a technique for decreasing this space requirement. The idea is to define a reduction function R that maps hash values back into values in P. Note, however, that the reduction function is not actually an inverse of the hash function, but rather a different function with a swapped domain and codomain of the hash function. By alternating the hash function with the reduction function, chains of alternating passwords and hash values are formed. For example, if P were the set of lowercase alphabetic 6-character passwords, and hash values were 32 bits long, a chain might look like this:
aaaaaa →H 281DAF40 →R sgfnyd →H 920ECF10 →R kiebgt
The only requirement for the reduction function is that it returns a "plain text" value of a specific size.
To generate the table, we choose a random set ofinitial passwordsfrom P, compute chains of some fixed lengthkfor each one, and storeonlythe first and last password in each chain. The first password is called thestarting pointand the last one is called theendpoint. In the example chain above, "aaaaaa" would be the starting point and "kiebgt" would be the endpoint, and none of the other passwords (or the hash values) would be stored.
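A toy sketch of this generation step (the hash truncation, reduction rule, and starting passwords are invented for illustration and will not reproduce the article's example values):

import hashlib
import string

ALPHABET = string.ascii_lowercase

def H(pw: str) -> int:
    # Toy 32-bit "hash": truncated MD5, standing in for the real H.
    return int.from_bytes(hashlib.md5(pw.encode()).digest()[:4], "big")

def R(h: int) -> str:
    # Reduction: any convenient map from hash values back into P
    # (6 lowercase letters here).
    out = []
    for _ in range(6):
        h, r = divmod(h, 26)
        out.append(ALPHABET[r])
    return "".join(out)

def build_table(starts, k):
    table = {}
    for start in starts:            # store only (endpoint -> starting point)
        p = start
        for _ in range(k):
            p = R(H(p))
        table[p] = start
    return table

k = 1000
table = build_table(["aaaaaa", "passwd", "qwerty"], k)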
Now, given a hash value h to invert (find the corresponding password for), compute a chain starting with h by applying R, then H, then R, and so on. If at any point a value matches one of the endpoints in the table, the corresponding starting point allows the complete chain to be recreated. There's a high chance that this chain will contain the value h, and if so, the immediately preceding value in the chain is the password p that we seek.
For example, given the hash 920ECF10, its chain can be computed by first applying R:
920ECF10 →R kiebgt
Since "kiebgt" is one of the endpoints in our table, the corresponding starting password "aaaaaa" allows to follow its chain until920ECF10is reached:
Thus, the password is "sgfnyd" (or a different password that has the same hash value).
Note however that this chain does not always contain the hash value h; it may so happen that the chain starting at h merges with a chain having a different starting point. For example, the chain of the hash value FB107E70 also leads to kiebgt:
FB107E70 →R … →H … →R kiebgt
The chain generated from the corresponding starting password "aaaaaa" is then followed, looking for FB107E70. The search ends without reaching FB107E70, because that value is not contained in the chain. This is called a false alarm. In this case, the match is ignored and the chain of h is extended looking for another match. If the chain of h gets extended to length k with no good matches, then the password was never produced in any of the chains.
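A matching lookup sketch, continuing the toy generation example above (it reuses that sketch's H, R, table, and chain length k), including the false-alarm handling just described:

def lookup(h: int, table: dict, k: int):
    p = R(h)
    for _ in range(k):
        if p in table:                    # candidate endpoint found
            q = table[p]                  # replay the stored chain
            for _ in range(k):
                if H(q) == h:
                    return q              # password recovered
                q = R(H(q))
            # False alarm: a merged chain; fall through and keep
            # extending the chain of h, looking for another match.
        p = R(H(p))
    return None                           # h was never produced in any chain

pw = lookup(H("passwd"), table, k)        # finds a password hashing to H("passwd")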
The table content does not depend on the hash value to be inverted. It is created once and then repeatedly used for the lookups unmodified. Increasing the length of the chain decreases the size of the table. However, it also increases the time required to perform lookups, and this is the time-memory trade-off of the rainbow table. In a simple case of one-item chains, the lookup is very fast, but the table is very big. Once chains get longer, the lookup slows, but the table size goes down.
Simple hash chains have several flaws. Most seriously, if at any point two chains collide (produce the same value), they will merge, and consequently the table will not cover as many passwords despite the same computational cost to generate. Because previous chains are not stored in their entirety, merges are impossible to detect efficiently. For example, if the third value in chain 3 matches the second value in chain 7, the two chains will cover almost the same sequence of values, but their final values will not be the same. The hash function H is unlikely to produce collisions, as this is usually considered an important security property, but the reduction function R, because of its need to correctly cover the likely plaintexts, cannot be collision resistant.
Other difficulties result from the importance of choosing the correct function for R. Picking R to be the identity is little better than a brute force approach. Only when the attacker has a good idea of likely plaintexts will they be able to choose a function R that makes sure time and space are used only for likely plaintexts, not the entire space of possible passwords. In effect, R shepherds the results of prior hash calculations back to likely plaintexts, but this benefit comes with the drawback that R likely won't produce every possible plaintext in the class the attacker wishes to check, denying the attacker certainty that no passwords came from their chosen class. It can also be difficult to design the function R to match the expected distribution of plaintexts.[2]
Rainbow tables effectively solve the problem ofcollisionswith ordinary hash chains by replacing the single reduction function R with a sequence of related reduction functions R1through Rk. In this way, for two chains to collide and merge they must hit the same valueon the same iteration: consequently, the final values in these chains will be identical. A final postprocessing pass can sort the chains in the table and remove any "duplicate" chains that have the same final values as other chains. New chains are then generated to fill out the table. These chains are notcollision-free(they may overlap briefly) but they will not merge, drastically reducing the overall number of collisions.[citation needed]
Using sequences of reduction functions changes how lookup is done: because the hash value of interest may be found at any location in the chain, it's necessary to generatekdifferent chains. The first chain assumes the hash value is in the last hash position and just applies Rk; the next chain assumes the hash value is in the second-to-last hash position and applies Rk−1, then H, then Rk; and so on until the last chain, which applies all the reduction functions, alternating with H. This creates a new way of producing a false alarm: an incorrect "guess" of the position of the hash value may needlessly evaluate a chain.
Although rainbow tables have to follow more chains, they make up for this by having fewer tables: simple hash chain tables cannot grow beyond a certain size without rapidly becoming inefficient due to merging chains; to deal with this, they maintain multiple tables, and each lookup must search through each table. Rainbow tables can achieve similar performance with tables that arektimes larger, allowing them to perform a factor ofkfewer lookups.
Rainbow tables use a refined algorithm with a different reduction function for each "link" in a chain, so that when there is a hash collision in two or more chains, the chains will not merge as long as the collision doesn't occur at the same position in each chain. This increases the probability of a correct crack for a given table size, at the cost of squaring the number of steps required per lookup, as the lookup routine now also needs to iterate through the index of the first reduction function used in the chain.[1]
Rainbow tables are specific to the hash function they were created for e.g.,MD5tables can crack only MD5 hashes. The theory of this technique was invented by Philippe Oechslin[3]as a fast form oftime/memory tradeoff,[1]which he implemented in theWindowspassword crackerOphcrack. The more powerfulRainbowCrackprogram was later developed that can generate and use rainbow tables for a variety of character sets and hashing algorithms, includingLM hash,MD5, andSHA-1.
In the simple case where the reduction function and the hash function have no collision, given a complete rainbow table (one that makes sure to find the corresponding password given any hash), the size of the password set |P|, the time T that had been needed to compute the table, the length of the table L, and the average time t needed to find a password matching a given hash are directly related:[citation needed] roughly, building the table costs T ≈ |P| hash operations, the chains have length about k ≈ |P|/L, and a single lookup costs on the order of k^2 hash operations.
Thus the 8-character lowercase alphanumeric passwords case (|P| ≃ 3×10^12) would be easily tractable with a personal computer, while the 16-character lowercase alphanumeric passwords case (|P| ≃ 10^25) would be completely intractable.
A rainbow table is ineffective against one-way hashes that include largesalts. For example, consider a password hash that is generated using the following function (where "+" is theconcatenationoperator):
saltedhash(password) = hash(password + salt)
Or
saltedhash(password) = hash(hash(password) + salt)
The salt value is not secret and may be generated at random and stored with the password hash. A large salt value prevents precomputation attacks, including rainbow tables, by ensuring that each user's password is hashed uniquely. This means that two users with the same password will have different password hashes (assuming different salts are used). In order to succeed, an attacker needs to precompute tables for each possible salt value. The salt must be large enough, otherwise an attacker can make a table for each salt value. For olderUnix passwordswhich used a 12-bit salt this would require 4096 tables, a significant increase in cost for the attacker, but not impractical with terabyte hard drives. TheSHA2-cryptandbcryptmethods—used inLinux,BSDUnixes, andSolaris—have salts of 128 bits.[4]These larger salt values make precomputation attacks against these systems infeasible for almost any length of a password. Even if the attacker could generate a million tables per second, they would still need billions of years to generate tables for all possible salts.
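A minimal salted-hash sketch (plain SHA-256 is used here only to show the structure; real systems should also stretch the hash, as described next):

import hashlib
import os

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)          # stored in plain text alongside the hash
    return salt, hashlib.sha256(salt + password).digest()

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    return hashlib.sha256(salt + password).digest() == stored

salt, stored = hash_password(b"hunter2")
assert verify(b"hunter2", salt, stored)
# Two users with the same password get different salts, hence different
# hashes, so one precomputed table cannot cover them both.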
Another technique that helps prevent precomputation attacks is key stretching. When stretching is used, the salt, password, and some intermediate hash values are run through the underlying hash function multiple times to increase the computation time required to hash each password.[5] For instance, MD5-Crypt uses a 1000-iteration loop that repeatedly feeds the salt, password, and current intermediate hash value back into the underlying MD5 hash function.[4] The user's password hash is the concatenation of the salt value (which is not secret) and the final hash. The extra time is not noticeable to users because they have to wait only a fraction of a second each time they log in. On the other hand, stretching reduces the effectiveness of brute-force attacks in proportion to the number of iterations because it reduces the number of attempts an attacker can perform in a given time frame. This principle is applied in MD5-Crypt and in bcrypt.[6] It also greatly increases the time needed to build a precomputed table, but in the absence of salt, this need only be done once.
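A one-line illustration of stretching using the standard library's PBKDF2 (the iteration count here is illustrative, not a recommendation):

import hashlib
import os

salt = os.urandom(16)
# Each of the 600,000 iterations multiplies the attacker's per-guess cost.
dk = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
print(dk.hex())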
An alternative approach, calledkey strengthening, deploys two salts, one public and one secret, but then (unlike in key stretching) securely deletes the secret salt. This forces both the attacker and legitimate users to perform a brute-force search for the secret salt value. The secret salt size is chosen so that the brute force search is imperceptible to the legitimate user. However, it makes the rainbow dictionary needed by the attacker much larger.[7]Although the paper that introduced key stretching[8]referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.
Rainbow tables and other precomputation attacks do not work against passwords that contain symbols outside the range presupposed, or that are longer than those precomputed by the attacker. However, tables can be generated that take into account common ways in which users attempt to choose more secure passwords, such as adding a number or special character. Because of the sizable investment in computing processing, rainbow tables beyond fourteen characters in length are not yet common. So, choosing a password that is longer than fourteen characters may force an attacker to resort to brute-force methods.[citation needed]
Specific intensive efforts focused onLM hash, an older hash algorithm used by Microsoft, are publicly available. LM hash is particularly vulnerable because passwords longer than 7 characters are broken into two sections, each of which is hashed separately. Choosing a password that is fifteen characters or longer guarantees that an LM hash will not be generated.[9]
Nearly all distributions and variations ofUnix,Linux, andBSDuse hashes with salts, though many applications use just a hash (typicallyMD5) with no salt. The Microsoft Windows NT/2000 family uses theLAN ManagerandNT LAN Managerhashing method (based onMD4) and is also unsalted, which makes it one of the most popularly generated tables. Rainbow tables have seen reduced usage as of 2020 as salting is more common and GPU-basedbrute force attackshave become more practical. However, rainbow tables are available for eight and nine characterNTLMpasswords.[10]
|
https://en.wikipedia.org/wiki/Rainbow_table
|
Incryptography, apepperis a secret added to an input such as apasswordduringhashingwith acryptographic hash function. This value differs from asaltin that it is not stored alongside a password hash, but rather the pepper is kept separate in some other medium, such as a Hardware Security Module.[1]Note that theNational Institute of Standards and Technologyrefers to this value as asecret keyrather than apepper. A pepper is similar in concept to asaltor anencryption key. It is like a salt in that it is a randomized value that is added to a password hash, and it is similar to an encryption key in that it should be kept secret.
A pepper performs a comparable role to asaltor anencryption key, but while a salt is not secret (merely unique) and can be stored alongside the hashed output, a pepper is secret and must not be stored with the output. The hash and salt are usually stored in a database, but a pepper must be stored separately to prevent it from being obtained by the attacker in case of a database breach.[2]A pepper should be long enough to remain secret from brute force attempts to discover it (NIST recommends at least 112 bits).
The idea of a site- or service-specific salt (in addition to a per-user salt) has a long history, withSteven M. Bellovinproposing alocal parameterin aBugtraqpost in 1995.[3]In 1996Udi Manberalso described the advantages of such a scheme, terming it asecret salt.[4]The termpepperhas been used, by analogy to salt, but with a variety of meanings. For example, when discussing achallenge-response scheme, pepper has been used for a salt-like quantity, though not used for password storage;[5]it has been used for a data transmission technique where a pepper must be guessed;[6]and even as a part of jokes.[7]
The termpepperwas proposed for a secret or local parameter stored separately from the password in a discussion of protecting passwords fromrainbow tableattacks.[8]This usage did not immediately catch on: for example, Fred Wenzel added support toDjangopassword hashing for storage based on a combination ofbcryptandHMACwith separately storednonces, without using the term.[9]Usage has since become more common.[10][11][12]
There are multiple different types of pepper: a secret shared among all users, and a secret generated uniquely for each user.
An incomplete example of using a pepper constant to save passwords: for each of two combinations of username and password, the database stores only the salted, peppered hash. The password itself is not saved, and the 8-byte (64-bit) pepper 44534C70C6883DE2 is saved in a safe place separate from the output values of the hash.
Unlike thesalt, the pepper does not provide protection to users who use the same password, but protects againstdictionary attacks, unless the attacker has the pepper value available. Since the same pepper is not shared between different applications, an attacker is unable to reuse the hashes of one compromised database to another. A complete scheme for saving passwords usually includes both salt and pepper use.
In the case of a shared-secret pepper, a single compromised password (via password reuse or other attack) along with a user's salt can lead to an attack to discover the pepper, rendering it ineffective. If an attacker knows a plaintext password and a user's salt, as well as the algorithm used to hash the password, then discovering the pepper can be a matter of brute forcing the values of the pepper. This is why NIST recommends the secret value be at least 112 bits, so that discovering it by exhaustive search is intractable. The pepper must be generated anew for every application it is deployed in, otherwise a breach of one application would result in lowered security of another application. Without knowledge of the pepper, other passwords in the database will be far more difficult to extract from their hashed values, as the attacker would need to guess the password as well as the pepper.
A pepper adds security to a database of salts and hashes because unless the attacker is able to obtain the pepper, cracking even a single hash is intractable, no matter how weak the original password. Even with a list of (salt, hash) pairs, an attacker must also guess the secret pepper in order to find the password which produces the hash. The NIST specification for a secret salt suggests using aPassword-Based Key Derivation Function (PBKDF)with an approvedPseudorandom Functionsuch asHMACwithSHA-3as the hash function of the HMAC. The NIST recommendation is also to perform at least 1000 iterations of the PBKDF, and a further minimum 1000 iterations using the secret salt in place of the non-secret salt.
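A sketch loosely following this NIST-style construction, applying a shared-secret pepper as an HMAC key over a PBKDF2-salted hash; the pepper reuses the illustrative constant above and would in practice live in an HSM or other separate store, and the parameters are not recommendations:

import hashlib
import hmac
import os

PEPPER = bytes.fromhex("44534C70C6883DE2")   # kept outside the database

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                     # stored with the hash
    inner = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # The pepper keys an outer HMAC; without it, the stored value
    # cannot be recomputed or brute-forced from the database alone.
    return salt, hmac.new(PEPPER, inner, hashlib.sha256).digest()

salt, stored = hash_password(b"password123456")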
In the case of a pepper that is unique to each user, the tradeoff is gaining extra security at the cost of storing more information securely. Compromising one password hash and revealing its secret pepper will have no effect on other password hashes and their secret pepper, so each pepper must be individually discovered, which greatly increases the time taken to attack the password hashes.
|
https://en.wikipedia.org/wiki/Pepper_(cryptography)
|
This is a list ofhash functions, includingcyclic redundancy checks,checksumfunctions, andcryptographic hash functions.
Adler-32is often mistaken for a CRC, but it is not: it is achecksum.
|
https://en.wikipedia.org/wiki/List_of_hash_functions
|
Adistributed hash table(DHT) is adistributed systemthat provides a lookup service similar to ahash table.Key–value pairsare stored in a DHT, and any participatingnodecan efficiently retrieve the value associated with a given key. The main advantage of a DHT is that nodes can be added or removed with minimum work around re-distributing keys.[1]Keysare unique identifiers which map to particularvalues, which in turn can be anything from addresses, todocuments, to arbitrarydata.[2]Responsibility for maintaining the mapping from keys to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows a DHT toscaleto extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.
DHTs form an infrastructure that can be used to build more complex services, such asanycast, cooperativeweb caching,distributed file systems,domain name services,instant messaging,multicast, and alsopeer-to-peer file sharingandcontent distributionsystems. Notable distributed networks that use DHTs includeBitTorrent's distributed tracker, theKad network, theStorm botnet, theTox instant messenger,Freenet, theYaCysearch engine, and theInterPlanetary File System.
DHT research was originally motivated, in part, bypeer-to-peer(P2P) systems such asFreenet,Gnutella,BitTorrentandNapster, which took advantage of resources distributed across the Internet to provide a single useful application. In particular, they took advantage of increasedbandwidthandhard diskcapacity to provide a file-sharing service.[3]
These systems differed in how they located the data offered by their peers. Napster, the first large-scale P2P content delivery system, required a central index server: each node, upon joining, would send a list of locally held files to the server, which would perform searches and refer the queries to the nodes that held the results. This central component left the system vulnerable to attacks and lawsuits.
Gnutella and similar networks moved to aquery floodingmodel – in essence, each search would result in a message being broadcast to every machine in the network. While avoiding asingle point of failure, this method was significantly less efficient than Napster. Later versions of Gnutella clients moved to a dynamic querying model which vastly improved efficiency.[4]
Freenet is fully distributed, but employs aheuristickey-based routingin which each file is associated with a key, and files with similar keys tend to cluster on a similar set of nodes. Queries are likely to be routed through the network to such a cluster without needing to visit many peers.[5]However, Freenet does not guarantee that data will be found.
Distributed hash tables use a more structured key-based routing in order to attain both the decentralization of Freenet and Gnutella, and the efficiency and guaranteed results of Napster. One drawback is that, like Freenet, DHTs only directly support exact-match search, rather than keyword search, although Freenet'srouting algorithmcan be generalized to any key type where a closeness operation can be defined.[6]
In 2001, four systems—CAN,[7]Chord,[8]Pastry, andTapestry—brought attention to DHTs.
A project called the Infrastructure for Resilient Internet Systems (Iris) was funded by a $12 million grant from the United StatesNational Science Foundationin 2002.[9]Researchers includedSylvia Ratnasamy,Ion Stoica,Hari BalakrishnanandScott Shenker.[10]Outside academia, DHT technology has been adopted as a component of BitTorrent and inPlanetLabprojects such as the Coral Content Distribution Network.[11]
DHTs characteristically emphasize the following properties:
A key technique used to achieve these goals is that any one node needs to coordinate with only a few other nodes in the system (most commonly, O(log n) of the n participants; see below), so that only a limited amount of work needs to be done for each change in membership.
Some DHT designs seek to besecureagainst malicious participants[13]and to allow participants to remainanonymous, though this is less common than in many other peer-to-peer (especiallyfile sharing) systems; seeanonymous P2P.
The structure of a DHT can be decomposed into several main components.[14][15]The foundation is an abstractkeyspace, such as the set of 160-bitstrings. A keyspacepartitioningscheme splits ownership of this keyspace among the participating nodes. Anoverlay networkthen connects the nodes, allowing them to find the owner of any given key in the keyspace.
Once these components are in place, a typical use of the DHT for storage and retrieval might proceed as follows. Suppose the keyspace is the set of 160-bit strings. To index a file with givenfilenameanddatain the DHT, theSHA-1hash offilenameis generated, producing a 160-bit keyk, and a messageput(k, data)is sent to any node participating in the DHT. The message is forwarded from node to node through the overlay network until it reaches the single node responsible for keykas specified by the keyspace partitioning. That node then stores the key and the data. Any other client can then retrieve the contents of the file by again hashingfilenameto producekand asking any DHT node to find the data associated withkwith a messageget(k). The message will again be routed through the overlay to the node responsible fork, which will reply with the storeddata.
The keyspace partitioning and overlay network components are described below with the goal of capturing the principal ideas common to most DHTs; many designs differ in the details.
Most DHTs use some variant ofconsistent hashingorrendezvous hashingto map keys to nodes. The two algorithms appear to have been devised independently and simultaneously to solve the distributed hash table problem.
Both consistent hashing and rendezvous hashing have the essential property that removal or addition of one node changes only the set of keys owned by the nodes with adjacent IDs, and leaves all other nodes unaffected. Contrast this with a traditionalhash tablein which addition or removal of one bucket causes nearly the entire keyspace to be remapped. Since any change in ownership typically corresponds to bandwidth-intensive movement of objects stored in the DHT from one node to another, minimizing such reorganization is required to efficiently support high rates ofchurn(node arrival and failure).
Consistent hashing employs a function δ(k1, k2) that defines an abstract notion of the distance between the keys k1 and k2, which is unrelated to geographical distance or network latency. Each node is assigned a single key called its identifier (ID). A node with ID ix owns all the keys km for which ix is the closest ID, measured according to δ(km, ix).
For example, the Chord DHT uses consistent hashing, which treats nodes as points on a circle, and δ(k1, k2) is the distance traveling clockwise around the circle from k1 to k2. Thus, the circular keyspace is split into contiguous segments whose endpoints are the node identifiers. If i1 and i2 are two adjacent IDs, with a shorter clockwise distance from i1 to i2, then the node with ID i2 owns all the keys that fall between i1 and i2.
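A minimal Chord-style sketch of this ring (the 32-bit SHA-1-derived IDs and node names are illustrative):

import bisect
import hashlib

def hid(name: str) -> int:
    # Map node names and keys into the same circular 2**32 keyspace.
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

class Ring:
    def __init__(self, nodes):
        self.ids = sorted(hid(n) for n in nodes)
        self.by_id = {hid(n): n for n in nodes}

    def owner(self, key: str) -> str:
        # The key's owner is the first node ID at or after it, clockwise.
        i = bisect.bisect_left(self.ids, hid(key)) % len(self.ids)
        return self.by_id[self.ids[i]]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("some-file.txt"))
# Adding or removing one node only moves the keys adjacent to its ID.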
In rendezvous hashing, also called highest random weight (HRW) hashing, all clients use the same hash functionh(){\displaystyle h()}(chosen ahead of time) to associate a key to one of thenavailable servers.
Each client has the same list of identifiers{S1,S2, ...,Sn}, one for each server.
Given some keyk, a client computesnhash weightsw1=h(S1,k),w2=h(S2,k), ...,wn=h(Sn,k).
The client associates that key with the server corresponding to the highest hash weight for that key.
A server with ID Sx owns all the keys km for which the hash weight h(Sx, km) is higher than the hash weight of any other node for that key.
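A minimal HRW sketch (the server names and the SHA-256-based weighting are illustrative):

import hashlib

def weight(server: str, key: str) -> int:
    # Every client computes the same score for a (server, key) pair.
    return int.from_bytes(
        hashlib.sha256(f"{server}:{key}".encode()).digest(), "big")

def owner(servers, key: str) -> str:
    return max(servers, key=lambda s: weight(s, key))

servers = ["s1", "s2", "s3", "s4"]
print(owner(servers, "some-key"))
# Removing a server reassigns only the keys that server previously won.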
Locality-preserving hashing ensures that similar keys are assigned to similar objects. This can enable a more efficient execution of range queries, however, in contrast to using consistent hashing, there is no more assurance that the keys (and thus the load) is uniformly randomly distributed over the key space and the participating peers. DHT protocols such as Self-Chord and Oscar[16]address such issues. Self-Chord decouples object keys from peer IDs and sorts keys along the ring with a statistical approach based on theswarm intelligenceparadigm.[17]Sorting ensures that similar keys are stored by neighbour nodes and that discovery procedures, includingrange queries, can be performed in logarithmic time. Oscar constructs a navigablesmall-world networkbased onrandom walksampling also assuring logarithmic search time.
Each node maintains a set oflinksto other nodes (itsneighborsorrouting table). Together, these links form the overlay network.[18]A node picks its neighbors according to a certain structure, called thenetwork's topology.
All DHT topologies share some variant of the most essential property: for any keyk, each node either has a node ID that ownskor has a link to a node whose node ID isclosertok, in terms of the keyspace distance defined above. It is then easy to route a message to the owner of any keykusing the followinggreedy algorithm(that is not necessarily globally optimal): at each step, forward the message to the neighbor whose ID is closest tok. When there is no such neighbor, then we must have arrived at the closest node, which is the owner ofkas defined above. This style of routing is sometimes calledkey-based routing.
Beyond basic routing correctness, two important constraints on the topology are to guarantee that the maximum number of hops in any route (route length) is low, so that requests complete quickly; and that the maximum number of neighbors of any node (maximum node degree) is low, so that maintenance overhead is not excessive. Of course, having shorter routes requires higher maximum degree. Some common choices for maximum degree and route length are as follows, where n is the number of nodes in the DHT, using Big O notation: degree O(1) with route length O(n); degree O(log n) with route length O(log n / log log n); degree O(log n) with route length O(log n); and degree O(1) with route length O(log n), as in Koorde.
The most common choice,O(logn){\displaystyle O(\log n)}degree/route length, is not optimal in terms of degree/route length tradeoff, but such topologies typically allow more flexibility in choice of neighbors. Many DHTs use that flexibility to pick neighbors that are close in terms of latency in the physical underlying network. In general, all DHTs construct navigable small-world network topologies, which trade-off route length vs. network degree.[19]
Maximum route length is closely related todiameter: the maximum number of hops in any shortest path between nodes. Clearly, the network's worst case route length is at least as large as its diameter, so DHTs are limited by the degree/diameter tradeoff[20]that is fundamental ingraph theory. Route length can be greater than diameter, since the greedy routing algorithm may not find shortest paths.[21]
Aside from routing, there exist many algorithms that exploit the structure of the overlay network for sending a message to all nodes, or a subset of nodes, in a DHT.[22]These algorithms are used by applications to dooverlay multicast, range queries, or to collect statistics. Two systems that are based on this approach are Structella,[23]which implements flooding and random walks on a Pastry overlay, and DQ-DHT, which implements a dynamic querying search algorithm over a Chord network.[24]
Because of the decentralization, fault tolerance, and scalability of DHTs, they are inherently more resilient against a hostile attacker than a centralized system.[vague]
Open systems fordistributed data storagethat are robust against massive hostile attackers are feasible.[25]
A DHT system that is carefully designed to haveByzantine fault tolerancecan defend against a security weakness, known as theSybil attack, which affects most current DHT designs.[26][27]Whanau is a DHT designed to be resistant to Sybil attacks.[28]
Petar Maymounkov, one of the original authors of Kademlia, has proposed a way to circumvent the weakness to the Sybil attack by incorporating social trust relationships into the system design.[29] The new system, codenamed Tonika or also known by its domain name as 5ttt, is based on an algorithm design known as "electric routing" and co-authored with the mathematician Jonathan Kelner.[30] Maymounkov has now undertaken a comprehensive implementation effort of this new system. However, research into effective defences against Sybil attacks is generally considered an open question, and a wide variety of potential defences are proposed every year in top security research conferences.[citation needed]
Most notable differences encountered in practical instances of DHT implementations include at least the following:
|
https://en.wikipedia.org/wiki/Distributed_hash_table
|
AnIdenticonis a visual representation of ahash value, usually of anIP address, that serves to identify a user of a computer system as a form ofavatarwhile protecting the user's privacy. The original Identicon was a 9-block graphic, and the representation has been extended to other graphic forms by third parties.
Don Park came up with the Identicon idea on January 18, 2007. In his words:
I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It's not just IPs but also people, places, and things. IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together. So I think adding visual identifiers will make the user experience much more enjoyable.[1]
A similar method had previously been described by Adrian Perrig and Dawn Song in their 1999 publication on hash visualization,[2]which had already seen wide use such as in the random art ofSSH keys.
|
https://en.wikipedia.org/wiki/Identicon
|
Inmathematics, alow-discrepancy sequenceis asequencewith the property that for all values ofN{\displaystyle N}, its subsequencex1,…,xN{\displaystyle x_{1},\ldots ,x_{N}}has a lowdiscrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary setBis close to proportional to themeasureofB, as would happen on average (but not for particular samples) in the case of anequidistributed sequence. Specific definitions of discrepancy differ regarding the choice ofB(hyperspheres,hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also calledquasirandomsequences, due to their common use as a replacement of uniformly distributedrandom numbers.
The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random norpseudorandom, but such sequences share some properties of random variables and in certain applications such as thequasi-Monte Carlo methodtheir lower discrepancy is an important advantage.
Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly.
Two useful applications are in finding thecharacteristic functionof aprobability density function, and in finding thederivativefunction of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-ordermomentsto be calculated to high accuracy very quickly.
Applications that don't involve sorting would be in finding themean,standard deviation,skewnessandkurtosisof a statistical distribution, and in finding theintegraland globalmaxima and minimaof difficult deterministic functions. Quasirandom numbers can also be used for providing starting points for deterministic algorithms that only work locally, such asNewton–Raphson iteration.
Quasirandom numbers can also be combined with search algorithms. With a search algorithm, quasirandom numbers can be used to find themode,median,confidence intervalsandcumulative distributionof a statistical distribution, and alllocal minimaand all solutions of deterministic functions.
Various methods of numerical integration can be phrased as approximating the integral of a function f{\displaystyle f} in some interval, e.g. [0, 1], as the average of the function evaluated at a set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}} in that interval:
{\displaystyle \int _{0}^{1}f(u)\,du\approx {\frac {1}{N}}\sum _{i=1}^{N}f(x_{i}).}
If the points are chosen asxi=i/N{\displaystyle x_{i}=i/N}, this is therectangle rule.
If the points are chosen to be randomly (orpseudorandomly) distributed, this is theMonte Carlo method.
If the points are chosen as elements of a low-discrepancy sequence, this is thequasi-Monte Carlo method.
A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on f{\displaystyle f}, and the other one is the discrepancy of the set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}}.
It is convenient to construct the set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}} in such a way that if a set with N + 1{\displaystyle N+1} elements is constructed, the previous N{\displaystyle N} elements need not be recomputed.
The rectangle rule uses points set which have low discrepancy, but in general the elements must be recomputed ifN{\displaystyle N}is increased.
Elements need not be recomputed in the random Monte Carlo method ifN{\displaystyle N}is increased,
but the point sets do not have minimal discrepancy.
By using low-discrepancy sequences we aim for low discrepancy and no need for recomputations; however, because a low-discrepancy sequence must add points one at a time without revising earlier points, its discrepancy at each N{\displaystyle N} is somewhat worse than that of the best possible point set of the same size.
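The three choices can be compared directly; in this sketch the integrand and sample size are arbitrary, and the quasirandom points come from the golden-ratio additive recurrence described later in this article.

```python
import math, random

f = lambda x: math.exp(x)                  # integral over [0, 1] is e - 1
exact = math.e - 1
N = 1000

rect = sum(f(i / N) for i in range(1, N + 1)) / N                # rectangle rule
mc   = sum(f(random.random()) for _ in range(N)) / N             # Monte Carlo
phi  = (math.sqrt(5) - 1) / 2                                    # frac. part of golden ratio
qmc  = sum(f((i * phi) % 1.0) for i in range(1, N + 1)) / N      # quasi-Monte Carlo

for name, est in [("rectangle", rect), ("monte carlo", mc), ("quasirandom", qmc)]:
    print(f"{name:12s} error = {abs(est - exact):.2e}")
```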
The discrepancy of a set P = {x_1, …, x_N}{\displaystyle P=\{x_{1},\dots ,x_{N}\}} is defined, using Niederreiter's notation, as
{\displaystyle D_{N}(P)=\sup _{B\in J}\left|{\frac {A(B;P)}{N}}-\lambda _{s}(B)\right|,}
where λ_s{\displaystyle \lambda _{s}} is the s{\displaystyle s}-dimensional Lebesgue measure, A(B;P){\displaystyle A(B;P)} is the number of points in P{\displaystyle P} that fall into B{\displaystyle B},
and J{\displaystyle J} is the set of s{\displaystyle s}-dimensional intervals or boxes of the form
{\displaystyle \prod _{i=1}^{s}[a_{i},b_{i})=\{\mathbf {x} \in \mathbf {R} ^{s}:a_{i}\leq x_{i}<b_{i}\},}
where 0 ≤ a_i < b_i ≤ 1{\displaystyle 0\leq a_{i}<b_{i}\leq 1}.
The star-discrepancy D*_N(P){\displaystyle D_{N}^{*}(P)} is defined similarly, except that the supremum is taken over the set J*{\displaystyle J^{*}} of rectangular boxes of the form
{\displaystyle \prod _{i=1}^{s}[0,u_{i}),}
where u_i{\displaystyle u_{i}} is in the half-open interval [0, 1).
The two are related by
{\displaystyle D_{N}^{*}\leq D_{N}\leq 2^{s}D_{N}^{*}.}
Note: With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, also other error measures are meaningful, leading to other definitions and variation measures. For instance,L2{\displaystyle L^{2}}-discrepancy or modified centeredL2{\displaystyle L^{2}}-discrepancy are also used intensively to compare the quality of uniform point sets. Both are much easier to calculate for largeN{\displaystyle N}ands{\displaystyle s}.
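Although computing the discrepancy of large point sets is hard in general (see the Erdős–Turán–Koksma bound below), in one dimension the star discrepancy of a finite set has a closed form; a sketch:

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite point set in [0, 1).

    Uses the classical formula for s = 1: with the points sorted,
    D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N).
    """
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The midpoint set {1/(2N), 3/(2N), ...} attains the minimal value 1/(2N):
n = 10
print(star_discrepancy_1d([(2 * i + 1) / (2 * n) for i in range(n)]))  # 0.05
```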
LetI¯s{\displaystyle {\overline {I}}^{s}}be thes{\displaystyle s}-dimensional unit cube,I¯s=[0,1]×⋯×[0,1]{\displaystyle {\overline {I}}^{s}=[0,1]\times \cdots \times [0,1]}.
Letf{\displaystyle f}havebounded variationV(f){\displaystyle V(f)}onI¯s{\displaystyle {\overline {I}}^{s}}in the sense ofHardyand Krause.
Then for any x_1, …, x_N{\displaystyle x_{1},\ldots ,x_{N}} in I^s = [0,1)^s = [0,1) × ⋯ × [0,1){\displaystyle I^{s}=[0,1)^{s}=[0,1)\times \cdots \times [0,1)},
{\displaystyle \left|{\frac {1}{N}}\sum _{i=1}^{N}f(x_{i})-\int _{{\bar {I}}^{s}}f(u)\,du\right|\leq V(f)\,D_{N}^{*}(x_{1},\ldots ,x_{N}).}
The Koksma–Hlawka inequality is sharp in the following sense: For any point set {x_1, …, x_N}{\displaystyle \{x_{1},\ldots ,x_{N}\}} in I^s{\displaystyle I^{s}} and any ε > 0{\displaystyle \varepsilon >0}, there is a function f{\displaystyle f} with bounded variation and V(f) = 1{\displaystyle V(f)=1} such that
{\displaystyle \left|{\frac {1}{N}}\sum _{i=1}^{N}f(x_{i})-\int _{{\bar {I}}^{s}}f(u)\,du\right|>D_{N}^{*}(x_{1},\ldots ,x_{N})-\varepsilon .}
Therefore, the quality of a numerical integration rule depends only on the discrepancyDN∗(x1,…,xN){\displaystyle D_{N}^{*}(x_{1},\ldots ,x_{N})}.
Let D = {1, 2, …, d}{\displaystyle D=\{1,2,\ldots ,d\}}. For ∅ ≠ u ⊆ D{\displaystyle \emptyset \neq u\subseteq D} we write
{\displaystyle dx_{u}=\prod _{j\in u}dx_{j}}
and denote by (x_u, 1){\displaystyle (x_{u},1)} the point obtained from x by replacing the coordinates not in u by 1{\displaystyle 1}. Then
{\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}f(x_{i})-\int _{{\bar {I}}^{d}}f(x)\,dx=\sum _{\emptyset \neq u\subseteq D}(-1)^{|u|}\int _{[0,1]^{|u|}}\operatorname {disc} ((x_{u},1))\,{\frac {\partial ^{|u|}}{\partial x_{u}}}f((x_{u},1))\,dx_{u},}
where disc(z) = {\displaystyle \operatorname {disc} (z)={\frac {1}{N}}\sum _{i=1}^{N}\prod _{j=1}^{d}1_{[0,z_{j})}(x_{i,j})-\prod _{j=1}^{d}z_{j}} is the discrepancy function.
Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an L²{\displaystyle L^{2}} version of the Koksma–Hlawka inequality:
{\displaystyle \left|{\frac {1}{N}}\sum _{i=1}^{N}f(x_{i})-\int _{{\bar {I}}^{d}}f(u)\,du\right|\leq \|f\|_{d}\,\operatorname {disc} _{d}(\{x_{i}\}),}
where
{\displaystyle \operatorname {disc} _{d}(\{x_{i}\})=\left(\sum _{\emptyset \neq u\subseteq D}\int _{[0,1]^{|u|}}\operatorname {disc} ((x_{u},1))^{2}\,dx_{u}\right)^{1/2}}
and
{\displaystyle \|f\|_{d}=\left(\sum _{u\subseteq D}\int _{[0,1]^{|u|}}\left|{\frac {\partial ^{|u|}}{\partial x_{u}}}f((x_{u},1))\right|^{2}\,dx_{u}\right)^{1/2}.}
L2{\displaystyle L^{2}}discrepancy has a high practical importance because fast explicit calculations are possible for a given point set. This way it is easy to create point set optimizers usingL2{\displaystyle L^{2}}discrepancy as criteria.
It is computationally hard to find the exact value of the discrepancy of large point sets. TheErdős–Turán–Koksmainequality provides an upper bound.
Let x_1, …, x_N{\displaystyle x_{1},\ldots ,x_{N}} be points in I^s{\displaystyle I^{s}} and H{\displaystyle H} be an arbitrary positive integer. Then
{\displaystyle D_{N}^{*}(x_{1},\ldots ,x_{N})\leq \left({\frac {3}{2}}\right)^{s}\left({\frac {2}{H+1}}+\sum _{0<\|h\|_{\infty }\leq H}{\frac {1}{r(h)}}\left|{\frac {1}{N}}\sum _{n=1}^{N}e^{2\pi i\langle h,x_{n}\rangle }\right|\right),}
where
{\displaystyle r(h)=\prod _{i=1}^{s}\max\{1,|h_{i}|\}\quad {\text{for}}\quad h=(h_{1},\ldots ,h_{s})\in \mathbb {Z} ^{s}.}
Conjecture 1. There is a constant c_s{\displaystyle c_{s}} depending only on the dimension s{\displaystyle s}, such that
{\displaystyle D_{N}^{*}(x_{1},\ldots ,x_{N})\geq c_{s}{\frac {(\ln N)^{s-1}}{N}}}
for any finite point set x_1, …, x_N{\displaystyle x_{1},\ldots ,x_{N}}.
Conjecture 2. There is a constant c'_s{\displaystyle c'_{s}} depending only on s{\displaystyle s}, such that
{\displaystyle D_{N}^{*}(x_{1},\ldots ,x_{N})\geq c'_{s}{\frac {(\ln N)^{s}}{N}}}
holds for infinitely many N{\displaystyle N} for any infinite sequence x_1, x_2, x_3, …{\displaystyle x_{1},x_{2},x_{3},\ldots }.
These conjectures are equivalent. They have been proved fors≤2{\displaystyle s\leq 2}byW. M. Schmidt. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due toMichael Laceyand collaborators.
Let s = 1{\displaystyle s=1}. Then
{\displaystyle D_{N}^{*}(x_{1},\dots ,x_{N})\geq {\frac {1}{2N}}}
for any finite point set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}}.
Let s = 2{\displaystyle s=2}. W. M. Schmidt proved that for any finite point set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}},
{\displaystyle D_{N}^{*}(x_{1},\dots ,x_{N})\geq C{\frac {\log N}{N}},}
where C{\displaystyle C} is an explicit absolute constant.
For arbitrary dimensions s > 1{\displaystyle s>1}, K. F. Roth proved that
{\displaystyle D_{N}^{*}(x_{1},\dots ,x_{N})\geq c_{s}{\frac {(\log N)^{(s-1)/2}}{N}}}
for any finite point set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}}.
József Beck[1]established a double log improvement of this result in three dimensions.
This was improved by D. Bilyk and M. T. Lacey to a power of a single logarithm. The best known bound for s > 2 is due to D. Bilyk, M. T. Lacey, and A. Vagharshakyan.[2] There exists a t > 0{\displaystyle t>0} depending on s so that
{\displaystyle D_{N}^{*}(x_{1},\dots ,x_{N})\geq c_{s}{\frac {(\log N)^{(s-1)/2+t}}{N}}}
for any finite point set {x_1, …, x_N}{\displaystyle \{x_{1},\dots ,x_{N}\}}.
Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution.
There are constructions of sequences known such that
{\displaystyle D_{N}^{*}(x_{1},\dots ,x_{N})\leq C{\frac {(\ln N)^{s}}{N}},}
where C{\displaystyle C} is a certain constant, depending on the sequence. After Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the van der Corput sequence, the Halton sequences, and the Sobol’ sequences. One general limitation is that construction methods can usually only guarantee the order of convergence. In practice, low discrepancy is achieved only when N{\displaystyle N} is large enough, and for large s this minimum N{\displaystyle N} can be very large. This means running a Monte-Carlo analysis with e.g. s = 20{\displaystyle s=20} variables and N = 1000{\displaystyle N=1000} points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement[citation needed].
Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbersri{\displaystyle r_{i}}on[0,0.5){\displaystyle [0,0.5)}and construct quasirandom numberssi{\displaystyle s_{i}}which are uniform on[0,1){\displaystyle [0,1)}using:
si=ri{\displaystyle s_{i}=r_{i}}fori{\displaystyle i}odd andsi=0.5+ri{\displaystyle s_{i}=0.5+r_{i}}fori{\displaystyle i}even.
A second way to do it with the starting random numbers is to construct a random walk with offset 0.5, as in:
{\displaystyle s_{i+1}=(s_{i}+0.5+r_{i}){\bmod {1}}.}
That is, take the previous quasirandom number, add 0.5 and the random number, and take the result modulo 1.
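A sketch of both constructions follows; the only assumption is Python's standard uniform generator as the source of the r_i.

```python
import random

def antithetic_halves(n):
    """First construction: r_i uniform on [0, 0.5); s_i = r_i for odd i
    and s_i = 0.5 + r_i for even i, alternating between the two halves."""
    out = []
    for i in range(1, n + 1):
        r = random.uniform(0.0, 0.5)
        out.append(r if i % 2 == 1 else 0.5 + r)
    return out

def offset_walk(n):
    """Second construction: random walk with offset 0.5,
    s_{i+1} = (s_i + 0.5 + r_i) mod 1."""
    s, out = random.random(), []
    for _ in range(n):
        s = (s + 0.5 + random.uniform(0.0, 0.5)) % 1.0
        out.append(s)
    return out

print(antithetic_halves(4))
print(offset_walk(4))
```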
For more than one dimension,Latin squaresof the appropriate dimension can be used to provide offsets to ensure that the whole domain is covered evenly.
For any irrational α{\displaystyle \alpha }, the sequence
{\displaystyle s_{n}=\{s_{0}+n\alpha \}}
has discrepancy tending to 1/N{\displaystyle 1/N}. Note that the sequence can be defined recursively by
{\displaystyle s_{n+1}=(s_{n}+\alpha ){\bmod {1}}.}
A good value ofα{\displaystyle \alpha }gives lower discrepancy than a sequence of independent uniform random numbers.
The discrepancy can be bounded by the approximation exponent of α{\displaystyle \alpha }. If the approximation exponent is μ{\displaystyle \mu }, then for any ε > 0{\displaystyle \varepsilon >0}, the following bound holds:[3]
{\displaystyle D_{N}(s_{1},\dots ,s_{N})=O_{\varepsilon }(N^{-1/(\mu -1)+\varepsilon }).}
By theThue–Siegel–Roth theorem, the approximation exponent of any irrational algebraic number is 2, giving a bound ofN−1+ε{\displaystyle N^{-1+\varepsilon }}above.
The recurrence relation above is similar to the recurrence relation used by a linear congruential generator, a poor-quality pseudorandom number generator:[4]
{\displaystyle r_{i+1}=(ar_{i}+c){\bmod {m}}.}
For the low discrepancy additive recurrence above, a and m are chosen to be 1. Note, however, that this will not generate independent random numbers, so it should not be used for purposes requiring independence.
The value of c{\displaystyle c} with lowest discrepancy is the fractional part of the golden ratio:[5]
{\displaystyle c={\frac {{\sqrt {5}}-1}{2}}=\varphi -1\approx 0.618034.}
Another value that is nearly as good is the fractional part of the silver ratio, which is the fractional part of the square root of 2:
{\displaystyle c={\sqrt {2}}-1\approx 0.414214.}
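A sketch of the recurrence with both constants; the seed value is arbitrary.

```python
import math

def additive_recurrence(c, n, seed=0.5):
    """s_0 = seed, s_{k+1} = (s_k + c) mod 1."""
    s, out = seed, []
    for _ in range(n):
        s = (s + c) % 1.0
        out.append(s)
    return out

golden = (math.sqrt(5) - 1) / 2     # 0.618033...
silver = math.sqrt(2) - 1           # 0.414213...
print(additive_recurrence(golden, 5))
print(additive_recurrence(silver, 5))
```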
In more than one dimension, separate quasirandom numbers are needed for each dimension. A convenient set of values that is used is the square roots of primes from two up, all taken modulo 1:
{\displaystyle c_{1}={\sqrt {2}}{\bmod {1}},\;c_{2}={\sqrt {3}}{\bmod {1}},\;c_{3}={\sqrt {5}}{\bmod {1}},\;\ldots }
However, a set of values based on the generalised golden ratio has been shown to produce more evenly distributed points.[6]
The list of pseudorandom number generators lists methods for generating independent pseudorandom numbers. Note: In low dimensions, the recursive recurrence above leads to uniform point sets of good quality, but for larger s{\displaystyle s} (such as s > 8{\displaystyle s>8}) other point-set generators can offer much lower discrepancies.
Let
{\displaystyle n=\sum _{k=0}^{L}d_{k}(n)b^{k}}
be the b{\displaystyle b}-ary representation of the positive integer n ≥ 1{\displaystyle n\geq 1}, i.e. 0 ≤ d_k(n) < b{\displaystyle 0\leq d_{k}(n)<b}. Set
{\displaystyle g_{b}(n)=\sum _{k=0}^{L}d_{k}(n)b^{-k-1}.}
Then there is a constant C{\displaystyle C} depending only on b{\displaystyle b} such that (g_b(n))_{n≥1}{\displaystyle (g_{b}(n))_{n\geq 1}} satisfies
{\displaystyle D_{N}^{*}(g_{b}(1),\dots ,g_{b}(N))\leq C{\frac {\log N}{N}},}
whereDN∗{\displaystyle D_{N}^{*}}is thestar discrepancy.
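A sketch of g_b(n) by direct digit reflection:

```python
def van_der_corput(n: int, b: int = 2) -> float:
    """g_b(n): reflect the base-b digits of n about the radix point,
    so n = sum d_k b^k maps to sum d_k b^(-k-1)."""
    g, base_power = 0.0, 1.0 / b
    while n > 0:
        n, d = divmod(n, b)
        g += d * base_power
        base_power /= b
    return g

print([van_der_corput(n) for n in range(1, 9)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```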
The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let s be an arbitrary dimension and b_1, ..., b_s be arbitrary coprime integers greater than 1. Define
{\displaystyle x(n)=(g_{b_{1}}(n),\dots ,g_{b_{s}}(n)).}
Then there is a constant C depending only on b_1, ..., b_s, such that the sequence {x(n)}_{n≥1} is an s-dimensional sequence with
{\displaystyle D_{N}^{*}(x(1),\dots ,x(N))\leq C{\frac {(\log N)^{s}}{N}}.}
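A sketch using bases 2 and 3 for the first two coordinates:

```python
def radical_inverse(n: int, b: int) -> float:
    """The van der Corput map g_b(n)."""
    g, p = 0.0, 1.0 / b
    while n > 0:
        n, d = divmod(n, b)
        g, p = g + d * p, p / b
    return g

def halton(n: int, bases=(2, 3)) -> tuple:
    """x(n): one s-dimensional Halton point, one van der Corput
    coordinate per pairwise-coprime base."""
    return tuple(radical_inverse(n, b) for b in bases)

print([halton(n) for n in range(1, 4)])
# [(0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...)]
```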
Let b_1, …, b_{s−1}{\displaystyle b_{1},\ldots ,b_{s-1}} be coprime positive integers greater than 1. For given s{\displaystyle s} and N{\displaystyle N}, the s{\displaystyle s}-dimensional Hammersley set of size N{\displaystyle N} is defined by[7]
{\displaystyle x(n)=\left({\frac {n}{N}},g_{b_{1}}(n),\dots ,g_{b_{s-1}}(n)\right)}
for n = 1, …, N{\displaystyle n=1,\ldots ,N}. Then
{\displaystyle D_{N}^{*}(x(1),\dots ,x(N))\leq C{\frac {(\log N)^{s-1}}{N}},}
where C{\displaystyle C} is a constant depending only on b_1, …, b_{s−1}{\displaystyle b_{1},\ldots ,b_{s-1}}.
Note: The formulas show that the Hammersley set is actually the Halton sequence, but we get one more dimension for free by adding a linear sweep. This is only possible ifN{\displaystyle N}is known upfront. A linear set is also the set with lowest possible one-dimensional discrepancy in general. Unfortunately, for higher dimensions, no such "discrepancy record sets" are known. Fors=2{\displaystyle s=2}, most low-discrepancy point set generators deliver at least near-optimum discrepancies.
The Antonov–Saleev variant of the Sobol’ sequence generates numbers between zero and one directly as binary fractions of lengthw,{\displaystyle w,}from a set ofw{\displaystyle w}special binary fractions,Vi,i=1,2,…,w{\displaystyle V_{i},i=1,2,\dots ,w}called direction numbers. The bits of theGray codeofi{\displaystyle i},G(i){\displaystyle G(i)}, are used to select direction numbers. To get the Sobol’ sequence valuesi{\displaystyle s_{i}}take theexclusive orof the binary value of the Gray code ofi{\displaystyle i}with the appropriate direction number. The number of dimensions required affects the choice ofVi{\displaystyle V_{i}}.
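The sketch below shows the Gray-code update for the first dimension only, whose direction numbers reduce to single bits (so the points coincide with a base-2 van der Corput set in Gray-code order); real Sobol’ generators derive the direction numbers for higher dimensions from primitive-polynomial recurrences, which are omitted here.

```python
W = 32                                        # bit width of the fractions
V = [1 << (W - k) for k in range(1, W + 1)]   # dimension-1 direction numbers
                                              # (v_k = 0.00..1 in binary)

def sobol_dim1(count):
    """Antonov–Saleev update: x_n = x_{n-1} XOR v_c, where c is the
    position of the lowest set bit of n -- the bit in which the Gray
    codes of n-1 and n differ."""
    x, out = 0, []
    for n in range(1, count + 1):
        c = (n & -n).bit_length()        # 1-based index of lowest set bit
        x ^= V[c - 1]
        out.append(x / 2 ** W)
    return out

print(sobol_dim1(8))   # 0.5, 0.75, 0.25, 0.375, ...
```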
Poisson disk sampling is popular in video games for rapidly placing objects in a way that appears random but guarantees that every two points are separated by at least the specified minimum distance.[8] This does not guarantee low discrepancy (as e.g. Sobol’ does), but it does achieve significantly lower discrepancy than pure random sampling. The goal of these sampling patterns is based on frequency analysis rather than discrepancy; they belong to a family of so-called "blue noise" patterns.
The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type.
For comparison, 10000 elements of a sequence of pseudorandom points are also shown.
The low-discrepancy sequence was generated byTOMSalgorithm 659.[9]An implementation of the algorithm inFortranis available fromNetlib.
|
https://en.wikipedia.org/wiki/Low-discrepancy_sequence
|
Atransposition tableis a cache of previously seen positions, and associated evaluations, in a game tree generated by a computer game playing program. If a position recurs via a different sequence of moves, the value of the position is retrieved from the table, avoiding re-searching the game tree below that position. Transposition tables are primarily useful inperfect-information games(where the entire state of the game is known to all players at all times). The usage of transposition tables is essentiallymemoizationapplied to the tree search and is a form ofdynamic programming.
Transposition tables are typically implemented ashash tablesencoding the current board position as the hash index. The number of possible positions that may occur in a game tree is an exponential function of depth of search, and can be thousands to millions or even much greater. Transposition tables may therefore consume most of available system memory and are usually most of thememory footprintof game playing programs.
Game-playing programs work by analyzing millions of positions that could arise in the next few moves of the game. Typically, these programs employ strategies resembling depth-first search, which means that they do not keep track of all the positions analyzed so far. In many games, it is possible to reach a given position in more than one way. These are called transpositions.[1] In chess, for example, the sequence of moves 1.d4 Nf6 2.c4 g6 (see algebraic chess notation) has 4 possible transpositions, since either player may swap their move order. In general, after n moves, an upper limit on the possible transpositions is (n!)^2. Although many of these are illegal move sequences, it is still likely that the program will end up analyzing the same position several times.
To avoid this problem, transposition tables are used. Such a table is ahash tableof each of the positions analyzed so far up to a certain depth. On encountering a new position, the program checks the table to see whether the position has already been analyzed; this can be done quickly, in amortized constant time. If so, the table contains the value that was previously assigned to this position; this value is used directly. If not, the value is computed, and the new position is entered into the hash table.
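A sketch of the lookup-then-store pattern around a depth-limited search follows; the position's zobrist_hash/moves/apply/evaluate interface is a hypothetical stand-in for game-specific code, and real engines bound the table's size as described below.

```python
TABLE = {}   # position hash -> (depth, value)

def search(position, depth):
    """Depth-limited negamax with a transposition-table cache."""
    key = position.zobrist_hash()
    hit = TABLE.get(key)
    if hit is not None and hit[0] >= depth:
        return hit[1]                       # reuse an equal-or-deeper result
    if depth == 0:
        value = position.evaluate()         # static evaluation at the horizon
    else:
        value = max(-search(position.apply(m), depth - 1)
                    for m in position.moves())
    TABLE[key] = (depth, value)             # store for future transpositions
    return value
```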
The number of positions searched by a computer often greatly exceeds the memory constraints of the system it runs on; thus not all positions can be stored. When the table fills up, less-used positions are removed to make room for new ones; this makes the transposition table a kind ofcache.
The computation saved by a transposition table lookup is not just the evaluation of a single position. Instead, the evaluation of an entire subtree is avoided. Thus, transposition table entries for nodes at a shallower depth in the game tree are more valuable (since the size of the subtree rooted at such a node is larger) and are therefore given more importance when the table fills up and some entries must be discarded.
The hash table implementing the transposition table can have other uses than finding transpositions. Inalpha–beta pruning, the search is fastest (in fact, optimal) when the child of a node corresponding to the best move is always considered first. Of course, there is no way of knowing the best move beforehand, but wheniterative deepeningis used, the move that was found to be the best in a shallower search is a good approximation. Therefore this move is tried first. For storing the best child of a node, the entry corresponding to that node in the transposition table is used.
Use of a transposition table can lead to incorrect results if the graph-history interaction problem is not studiously avoided. This problem arises in certain games because the history of a position may be important. For example, inchessa player may not castle if the king or the rook to be castled with has moved during the course of the game. A common solution to this problem is to add the castling rights as part of theZobrist hashingkey. Another example isdraw by repetition: given a position, it may not be possible to determine whether it has already occurred. A solution to the general problem is to store history information in each node of the transposition table, but this is inefficient and rarely done in practice.
A transposition table is a cache whose maximum size is limited by available system memory, and it may overflow at any time. In fact, it is expected to overflow, and the number of positions cacheable at any time may be only a small fraction (even orders of magnitude smaller) than the number of nodes in the game tree. The vast majority of nodes are not transposition nodes, i.e. positions that will recur, so effective replacement strategies that retain potential transposition nodes and replace other nodes can result in significantly reduced tree size. Replacement is usually based on tree depth and aging: nodes higher in the tree (closer to the root) are favored, because the subtrees below them are larger and result in greater savings; and more recent nodes are favored because older nodes are no longer similar to the current position, so transpositions to them are less likely.
Other strategies are to retain nodes in the principal variation, nodes with larger subtrees regardless of depth in the tree, and nodes that caused cutoffs.
Though the fraction of nodes that will be transpositions is small, the game tree is an exponential structure, so caching a very small number of such nodes can make a significant difference. In chess, search time reductions of 0-50% in complex middle game positions and up to a factor of 5 in the end game have been reported.[2]
|
https://en.wikipedia.org/wiki/Transposition_table
|
Incryptography,DES-X(orDESX) is a variant on theDES(Data Encryption Standard)symmetric-keyblock cipherintended to increase the complexity of abrute-force attack. The technique used to increase the complexity is calledkey whitening.
The original DES algorithm was specified in 1976 with a 56-bit key size: 2^56 possibilities for the key. There was criticism that an exhaustive search might be within the capabilities of large governments, particularly the United States' National Security Agency (NSA). One scheme to increase the key size of DES without substantially altering the algorithm was DES-X, proposed by Ron Rivest in May 1984.
The algorithm has been included inRSA Security'sBSAFEcryptographic library since the late 1980s.
DES-X augments DES byXORingan extra 64 bits of key (K1) to theplaintextbeforeapplying DES, and then XORing another 64 bits of key (K2)afterthe encryption:
DES-X(M)=K2⊕DESK(M⊕K1){\displaystyle {\mbox{DES-X}}(M)=K_{2}\oplus {\mbox{DES}}_{K}(M\oplus K_{1})}
The key size is thereby increased to 56 + (2 × 64) = 184 bits.
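A sketch of the whitening construction; the des_encrypt and des_decrypt callables stand in for any single-block DES implementation and are assumptions of this example.

```python
def xor64(a: bytes, b: bytes) -> bytes:
    """XOR two 8-byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def desx_encrypt(m: bytes, k: bytes, k1: bytes, k2: bytes, des_encrypt) -> bytes:
    """DES-X(M) = K2 XOR DES_K(M XOR K1): pre- and post-whitening
    around a single DES call."""
    return xor64(k2, des_encrypt(k, xor64(m, k1)))

def desx_decrypt(c: bytes, k: bytes, k1: bytes, k2: bytes, des_decrypt) -> bytes:
    """Inverse: M = K1 XOR DES^-1_K(C XOR K2)."""
    return xor64(k1, des_decrypt(k, xor64(c, k2)))
```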
However, the effective key size (security) is only increased to 56 + 64 − 1 − lb(M) = 119 − lb(M) ≈ 119 bits, where M is the number of chosen plaintext/ciphertext pairs the adversary can obtain, and lb denotes the binary logarithm. Moreover, the effective key size drops to 88 bits given 2^32.5 known plaintexts and using an advanced slide attack.
DES-X also increases the strength of DES against differential cryptanalysis and linear cryptanalysis, although the improvement is much smaller than in the case of brute force attacks. It is estimated that differential cryptanalysis would require 2^61 chosen plaintexts (vs. 2^47 for DES), while linear cryptanalysis would require 2^60 known plaintexts (vs. 2^43 for DES, or 2^61 for DES with independent subkeys).[1] Note that with 2^64 plaintexts (known or chosen being the same in this case), DES (or indeed any other block cipher with a 64-bit block size) is totally broken, as the whole cipher's codebook becomes available.
Aside from the differential and linear attacks, the currently best attack on DES-X is a known-plaintext slide attack discovered by Biryukov and Wagner,[2] which has a complexity of 2^32.5 known plaintexts and 2^87.5 time of analysis. Moreover, the attack is easily converted into a ciphertext-only attack with the same data complexity and 2^95 offline time complexity.
|
https://en.wikipedia.org/wiki/DES-X
|
Incryptography, aFeistel cipher(also known asLuby–Rackoff block cipher) is asymmetric structureused in the construction ofblock ciphers, named after theGerman-bornphysicistand cryptographerHorst Feistel, who did pioneering research while working forIBM; it is also commonly known as aFeistel network. A large number ofblock ciphersuse the scheme, including the USData Encryption Standard, the Soviet/RussianGOSTand the more recentBlowfishandTwofishciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times.
Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM'sLucifercipher, designed byHorst FeistelandDon Coppersmithin 1973. Feistel networks gained respectability when the U.S. Federal Government adopted theDES(a cipher based on Lucifer, with changes made by theNSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design).
A Feistel network uses around function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1]In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such assubstitution–permutation networksis that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465[3]: 347Furthermore, theencryptionanddecryptionoperations are very similar, even identical in some cases, requiring only a reversal of thekey schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution-permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations.
The structure and properties of Feistel ciphers have been extensively analyzed bycryptographers.
Michael LubyandCharles Rackoffanalyzed the Feistel cipher construction and proved that if the round function is a cryptographically securepseudorandom function, withKiused as the seed, then 3 rounds are sufficient to make the block cipher apseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who getsoracleaccess to its inverse permutation).[4]Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers.
Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6]
LetF{\displaystyle \mathrm {F} }be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces: (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}}).
For each round i = 0, 1, …, n{\displaystyle i=0,1,\dots ,n}, compute
{\displaystyle L_{i+1}=R_{i},}
{\displaystyle R_{i+1}=L_{i}\oplus \mathrm {F} (R_{i},K_{i}),}
where ⊕{\displaystyle \oplus } means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}){\displaystyle (R_{n+1},L_{n+1})}.
Decryption of a ciphertext (R_{n+1}, L_{n+1}){\displaystyle (R_{n+1},L_{n+1})} is accomplished by computing, for i = n, n−1, …, 0{\displaystyle i=n,n-1,\ldots ,0},
{\displaystyle R_{i}=L_{i+1},}
{\displaystyle L_{i}=R_{i+1}\oplus \mathrm {F} (L_{i+1},K_{i}).}
Then(L0,R0){\displaystyle (L_{0},R_{0})}is the plaintext again.
The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.
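The whole construction fits in a few lines; the sketch below uses an arbitrary non-invertible round function on 16-bit halves to show that decryption still inverts encryption exactly as described above.

```python
def feistel_encrypt(left: int, right: int, subkeys, f):
    """Balanced Feistel network: L_{i+1} = R_i,
    R_{i+1} = L_i XOR F(R_i, K_i); returns (R_{n+1}, L_{n+1})."""
    for k in subkeys:
        left, right = right, left ^ f(right, k)
    return right, left            # note the final swap, matching the text

def feistel_decrypt(r: int, l: int, subkeys, f):
    """The same network with the subkey order reversed."""
    for k in reversed(subkeys):
        r, l = l, r ^ f(l, k)
    return l, r                   # recovers (L_0, R_0)

# Toy demo: F need not be invertible for the cipher to be.
f = lambda half, key: (half * 31 + key) & 0xFFFF     # arbitrary round function
keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
ct = feistel_encrypt(0xDEAD, 0xBEEF, keys, f)
assert feistel_decrypt(*ct, keys, f) == (0xDEAD, 0xBEEF)
print(ct)
```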
Unbalanced Feistel ciphers use a modified structure whereL0{\displaystyle L_{0}}andR0{\displaystyle R_{0}}are not of equal lengths.[7]TheSkipjackcipher is an example of such a cipher. TheTexas Instrumentsdigital signature transponderuses a proprietary unbalanced Feistel cipher to performchallenge–response authentication.[8]
TheThorp shuffleis an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9]
The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, theoptimal asymmetric encryption padding(OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certainasymmetric-key encryptionschemes.
A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (seeformat-preserving encryption).[9]
Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example,MISTY1is a Feistel cipher using a three-round Feistel network in its round function,Skipjackis a modified Feistel cipher using a Feistel network in its G permutation, andThreefish(part ofSkein) is a non-Feistel block cipher that uses a Feistel-like MIX function.
Feistel or modified Feistel:
Generalised Feistel:
|
https://en.wikipedia.org/wiki/Feistel_cipher
|
Walter Tuchmanled theData Encryption Standarddevelopment team atIBM.[1][2]He was also responsible for the development ofTriple DES.[citation needed]
|
https://en.wikipedia.org/wiki/Walter_Tuchman
|
This article details the varioustablesreferenced in theData Encryption Standard(DES)block cipher.
All bits and bytes are arranged inbig endianorder in this document. That is, bit number 1 is always the most significant bit.
This table specifies the input permutation on a 64-bit block. The meaning is as follows: the first bit of the output is taken from the 58th bit of the input; the second bit from the 50th bit, and so on, with the last bit of the output taken from the 7th bit of the input.
This information is presented as a table for ease of presentation; it is a vector, not a matrix.
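As a sketch, applying such a permutation vector is a single gather; the eight entries shown are the first row of the initial permutation described above, and the labelled demo input is illustrative.

```python
def permute(bits, table):
    """Apply a DES-style permutation vector: output bit i is input bit
    table[i] (1-based, bit 1 = most significant, as in this article)."""
    return [bits[i - 1] for i in table]

# Label each input bit with its position to make the routing visible.
IP_FIRST_ROW = [58, 50, 42, 34, 26, 18, 10, 2]
print(permute(list(range(1, 65)), IP_FIRST_ROW))   # [58, 50, 42, ...]
```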
The final permutation is the inverse of the initial permutation; the table is interpreted similarly.
The expansion function is interpreted as for the initial and final permutations. Note that some bits from the input are duplicated at the output; e.g. the fifth bit of the input is duplicated in both the sixth and eighth bit of the output. Thus, the 32-bit half-block is expanded to 48 bits.
The P permutation shuffles the bits of a 32-bit half-block.
The "Left" and "Right" halves of the table show which bits from the inputkeyform the left and right sections of the key schedule state. Note that only 56 bits of the 64 bits of the input are selected; the remaining eight (8, 16, 24, 32, 40, 48, 56, 64) were specified for use asparity bits.
This permutation selects the 48-bit subkey for each round from the 56-bit key-schedule state. Permuted Choice 2 ("PC-2") ignores the following 8 bits of that state: 9, 18, 22, 25, 35, 38, 43, and 54.
This table lists the eight S-boxes used in DES. Each S-box replaces a 6-bit input with a 4-bit output. Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits, and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; noting that the first row is "00" and the first column is "0000", the corresponding output for S-box S5would be "1001" (=9), the value in the second row, 14th column. (SeeS-box).
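A sketch of the indexing rule follows, using a dummy table in which only the entry from the article's example is filled in; the real values come from the eight S-box tables of the standard.

```python
def sbox_lookup(sbox, six_bits: int) -> int:
    """Row = the two outer bits, column = the four inner bits.
    `sbox` is one of the eight 4x16 DES tables."""
    row = ((six_bits >> 4) & 0b10) | (six_bits & 0b1)   # bits 1 and 6
    col = (six_bits >> 1) & 0b1111                      # bits 2 through 5
    return sbox[row][col]

# The article's example: input 0b011011 addresses row 0b01 = 1 (second
# row) and column 0b1101 = 13 (14th column); for S5 that entry is 9.
dummy_s5 = [[0] * 16, [0] * 13 + [9, 0, 0], [0] * 16, [0] * 16]
print(sbox_lookup(dummy_s5, 0b011011))   # 9
```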
The main key supplied by the user is 64 bits long. The following operations are performed on it.
The bits in the grey positions (8 in all, the parity bits) are dropped, leaving 56 bits for the operations in each round.
After that, the bits are permuted according to the following table.
The table is laid out in row-major order; that is, each actual bit position is substituted with the bit at position row × 8 + column.
Before the round sub-key is selected, each half of the key schedule state is rotated left by a number of places. This table specifies the number of places rotated.
• The key is divided into two 28-bit parts
• Each part is shifted left (circularly) by one or two bits
• After shifting, the two parts are combined to form a 56-bit temp-key again
• The compression P-box reduces the 56-bit key to a 48-bit key, which is used as the key for the corresponding round.
The table is laid out in row-major order, as before: each actual bit position is substituted with the bit at position row × 8 + column.
After this, the 48-bit round key is returned to the calling function, i.e. the round.
|
https://en.wikipedia.org/wiki/DES_supplementary_material
|
Incryptography,Skipjackis ablock cipher—analgorithmfor encryption—developed by theU.S.National Security Agency(NSA). Initiallyclassified, it was originally intended for use in the controversialClipper chip. Subsequently, the algorithm was declassified.[5]
Skipjack was proposed as the encryption algorithm in a US government-sponsored scheme ofkey escrow, and thecipherwas provided for use in theClipper chip, implemented intamperproofhardware. Skipjack is used only for encryption; the key escrow is achieved through the use of a separate mechanism known as theLaw Enforcement Access Field(LEAF).[5]
The algorithm was initially secret, and was regarded with considerable suspicion by many for that reason. It wasdeclassifiedon 24 June 1998, shortly after its basic design principle had been discovered independently by the public cryptography community.[5][6]
To ensure public confidence in the algorithm, several academic researchers from outside the government were called in to evaluate the algorithm.[7][5]The researchers found no problems with either the algorithm itself or the evaluation process. Moreover, their report gave some insight into the (classified) history and development of Skipjack:
[Skipjack] is representative of a family of encryption algorithms developed in 1980 as part of the NSA suite of "Type I" algorithms... Skipjack was designed using building blocks and techniques that date back more than forty years. Many of the techniques are related to work that was evaluated by some of the world's most accomplished and famous experts incombinatoricsandabstract algebra. Skipjack's more immediate heritage dates to around 1980, and its initial design to 1987...The specific structures included in Skipjack have a long evaluation history, and the cryptographic properties of those structures had many prior years of intense study before the formal process began in 1987.[7][8]
In March 2016,NISTpublished a draft of its cryptographic standard which no longer certifies Skipjack for US government applications.[9][10]
Skipjack uses an80-bitkeyto encrypt or decrypt64-bitdata blocks. It is anunbalanced Feistel networkwith 32 rounds.[11]It was designed to be used in secured phones.
Eli BihamandAdi Shamirdiscovered an attack against 16 of the 32 rounds within one day of declassification,[8]and (withAlex Biryukov) extended this to 31 of the 32 rounds (but with an attack only slightly faster than exhaustive search) within months usingimpossible differential cryptanalysis.[4]
A truncated differential attack was also published against 28 rounds of the Skipjack cipher.[12]
A claimed attack against the full cipher was published in 2002,[13] but a later paper with the attack's designer as a co-author clarified in 2009 that no attack on the full 32-round cipher was then known.[14]
An algorithm named Skipjack forms part of theback-storytoDan Brown's 1998 novelDigital Fortress. In Brown's novel, Skipjack is proposed as the newpublic-key encryptionstandard, along with aback doorsecretly inserted by the NSA ("a few lines of cunning programming") which would have allowed them to decrypt Skipjack using a secret password and thereby "read the world's email". When details of the cipher are publicly released, programmer Greg Hale discovers and announces details of the backdoor. In real life there is evidence to suggest that the NSA has added back doors to at least one algorithm; theDual_EC_DRBGrandom number algorithm may contain a backdoor accessible only to the NSA.
Additionally, in theHalf-Life 2modificationDystopia, the "encryption" program used in cyberspace apparently uses both Skipjack andBlowfishalgorithms.[15]
|
https://en.wikipedia.org/wiki/Skipjack_(cipher)
|
Books oncryptographyhave been published sporadically and with variable quality for a long time. This is despite theparadoxthat secrecy is of the essence in sending confidential messages – seeKerckhoffs' principle.
In contrast, the revolutions incryptographyand securecommunicationssince the 1970s are covered in the available literature.
An early example of a book about cryptography was a Roman work,[which?]now lost and known only by references. Many early cryptographic works were esoteric, mystical, and/or reputation-promoting; cryptography being mysterious, there was much opportunity for such things. At least one work byTrithemiuswas banned by the Catholic Church and put on theIndex Librorum Prohibitorumas being about black magic or witchcraft. Many writers claimed to have invented unbreakableciphers. None were, though it sometimes took a long while to establish this.
In the 19th century, the general standard improved somewhat (e.g., works byAuguste Kerckhoffs,Friedrich Kasiski, andÉtienne Bazeries). ColonelParker HittandWilliam Friedmanin the early 20th century also wrote books on cryptography. These authors, and others, mostly abandoned any mystical or magical tone.
With the invention of radio, much of military communications went wireless, allowing the possibility of enemy interception much more readily than tapping into a landline. This increased the need to protect communications. By the end ofWorld War I, cryptography and its literature began to be officially limited. One exception was the 1931 bookThe American Black ChamberbyHerbert Yardley, which gave some insight into American cryptologic success stories, including theZimmermann telegramand the breaking of Japanese codes during theWashington Naval Conference.
Significant books on cryptography include:
From the end of World War II until the early 1980s most aspects of modern cryptography were regarded as the special concern of governments and the military and were protected by custom and, in some cases, by statute. The most significant work to be published on cryptography in this period is undoubtedlyDavid Kahn'sThe Codebreakers,[7]which was published at a time (mid-1960s) when virtually no information on the modern practice of cryptography was available.[8]Kahn has said that over ninety percent of its content was previously unpublished.[9]
The book caused serious concern at theNSAdespite its lack of coverage of specific modern cryptographic practice, so much so that after failing to prevent the book being published, NSA staff were informed to not even acknowledge the existence of the book if asked. In the US military, mere possession of a copy by cryptographic personnel was grounds for some considerable suspicion[citation needed]. Perhaps the single greatest importance of the book was the impact it had on the next generation of cryptographers.Whitfield Diffiehas made comments in interviews about the effect it had on him.[10][failed verification]
|
https://en.wikipedia.org/wiki/Books_on_cryptography
|
GNU Privacy Guard(GnuPGorGPG) is afree-softwarereplacement forSymantec'scryptographicsoftware suitePGP. The software is compliant with the now obsoleted[4]RFC4880, theIETFstandards-track specification ofOpenPGP. Modern versions of PGP areinteroperablewith GnuPG and other OpenPGP v4-compliant systems.[5]
November 2023 saw two drafts aiming to update the 2007 OpenPGP v4 specification (RFC4880), ultimately resulting in theRFC 9580standard in July 2024. The proposal from the GnuPG developers, which is called LibrePGP, was not taken up by the OpenPGP Working Group and future versions of GnuPG will not support the current version of OpenPGP.[6]
GnuPG is part of theGNU Projectand received major funding from theGerman governmentin 1999.[7]
GnuPG is ahybrid-encryptionsoftware program because it uses a combination of conventionalsymmetric-key cryptographyfor speed, andpublic-key cryptographyfor ease of secure key exchange, typically by using the recipient's public key to encrypt asession keywhich is used only once. This mode of operation is part of the OpenPGP standard and has been part of PGP from its first version.
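As an illustration of the hybrid idea (not GnuPG's actual internals), the sketch below wraps a one-time session key with a public-key primitive; the two cipher callables are stand-ins for concrete algorithms such as AES and RSA.

```python
import os

def hybrid_encrypt(message: bytes, recipient_public_key,
                   symmetric_encrypt, asymmetric_encrypt):
    """Outline of the OpenPGP-style hybrid scheme: a fresh one-time
    session key encrypts the bulk data quickly, and only that small
    key is encrypted with the slower public-key algorithm."""
    session_key = os.urandom(32)                      # used only once
    body = symmetric_encrypt(session_key, message)    # fast bulk encryption
    wrapped_key = asymmetric_encrypt(recipient_public_key, session_key)
    return wrapped_key, body   # recipient unwraps the key, then the body
```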
The GnuPG 1.x series uses an integrated cryptographic library, while the GnuPG 2.x series replaces this withLibgcrypt.
GnuPG encrypts messages usingasymmetric key pairsindividually generated by GnuPG users. The resulting public keys may be exchanged with other users in a variety of ways, such as Internetkey servers. They must always be exchanged carefully to prevent identity spoofing by corrupting public key ↔ "owner" identity correspondences. It is also possible to add a cryptographicdigital signatureto a message, so the message integrity and sender can be verified, if a particular correspondence relied upon has not been corrupted.
GnuPG also supports symmetric encryption algorithms. By default, GnuPG has used the AES symmetric algorithm since version 2.1,[8] while CAST5 was used in earlier versions. GnuPG does not use patented or otherwise restricted software or algorithms. Instead, GnuPG uses a variety of other, non-patented algorithms.[9] For a long time, it did not support the IDEA encryption algorithm used in PGP. It was in fact possible to use IDEA in GnuPG by downloading a plugin for it; however, this might have required a license for some uses in countries in which IDEA was patented. Starting with versions 1.4.13 and 2.0.20, GnuPG supports IDEA because the last patent on IDEA expired in 2012. Support of IDEA is intended "to get rid of all the questions from folks either trying to decrypt old data or migrating keys from PGP to GnuPG",[10] and hence is not recommended for regular use.
More recent releases of GnuPG 2.x ("modern" and the now deprecated "stable" series) expose most cryptographic functions and algorithmsLibgcrypt(its cryptography library) provides, including support forelliptic-curve cryptography(ECDH, ECDSA and EdDSA)[11]in the "modern" series (i.e. since GnuPG 2.1).
As of 2.3 or 2.2 versions, GnuPG supports the following algorithms:
GnuPG was initially developed byWerner Koch.[12][13]The first production version, version 1.0.0, was released on September 7, 1999, almost two years after the first GnuPG release (version 0.0.0).[14][12]TheGerman Federal Ministry of Economics and Technologyfunded the documentation and the port toMicrosoft Windowsin 2000.[13]
GnuPG is a system compliant to the OpenPGP standard, thus the history of OpenPGP is of importance; it was designed to interoperate withPGP, an email encryption program initially designed and developed byPhil Zimmermann.[15][16]
On February 7, 2014, a GnuPGcrowdfundingeffort closed, raising€36,732 for a new website and infrastructure improvements.[17]
Since the release of a stable GnuPG 2.3, starting with version 2.3.3 in October 2021, three stable branches of GnuPG are actively maintained:[18]
Before GnuPG 2.3, two stable branches of GnuPG were actively maintained:
Different GnuPG 2.x versions (e.g. from the 2.2 and 2.0 branches) cannot be installed at the same time. However, it is possible to install a "classic" GnuPG version (i.e. from the 1.4 branch) along with any GnuPG 2.x version.[11]
Before the release of GnuPG 2.2 ("modern"), the now deprecated "stable" branch (2.0) was recommended for general use, initially released on November 13, 2006.[22]This branch reached itsend-of-lifeon December 31, 2017;[23]Its last version is 2.0.31, released on December 29, 2017.[24]
Before the release of GnuPG 2.0, all stable releases originated from a single branch; i.e., before November 13, 2006, no multiple release branches were maintained in parallel. These former, sequentially succeeding (up to 1.4) release branches were:
(Note that before the release of GnuPG 2.3.0, branches with an odd minor release number (e.g. 2.1, 1.9, 1.3) were development branches leading to a stable release branch with a "+ 0.1" higher version number (e.g. 2.2, 2.0, 1.4); hence branches 2.2 and 2.1 both belong to the "modern" series, 2.0 and 1.9 both to the "stable" series, while the branches 1.4 and 1.3 both belong to the "classic" series.
With the release of GnuPG 2.3.0, this nomenclature was altered to be composed of a "stable" and "LTS" branch from the "modern" series, plus 1.4 as the last maintained "classic" branch. Also note that even or odd minor release numbers do not indicate a stable or development release branch, anymore.)
Although the basic GnuPG program has acommand-line interface, there exists variousfront-endsthat provide it with agraphical user interface. For example, GnuPG encryption support has been integrated intoKMailandEvolution, the graphicalemail clientsfound inKDEandGNOME, the most popularLinuxdesktops. There are also graphical GnuPG front-ends, for exampleSeahorsefor GNOME andKGPGandKleopatrafor KDE.
GPGTools provides a number of front-ends for OS integration of encryption andkey managementas well as GnuPG installations viaInstallerpackages[28]formacOS. GPG Suite[28]installs all related OpenPGP applications (GPG Keychain), plugins (GPG Mail) and dependencies (MacGPG), along with GPG Services (integration into macOS Services menu) to use GnuPG based encryption.
Instant messagingapplications such asPsiand Fire can automatically secure messages when GnuPG is installed and configured. Web-based software such asHordealso makes use of it. The cross-platformextensionEnigmailprovides GnuPG support forMozilla ThunderbirdandSeaMonkey. Similarly, Enigform provides GnuPG support forMozilla Firefox. FireGPG was discontinued June 7, 2010.[29]
In 2005, g10 Code GmbH and Intevation GmbH releasedGpg4win, a software suite that includes GnuPG for Windows, GNU Privacy Assistant, and GnuPG plug-ins forWindows ExplorerandOutlook. These tools are wrapped in a standard Windows installer, making it easier for GnuPG to be installed and used on Windows systems.[30]
The OpenPGP standard specifies several methods ofdigitally signingmessages. In 2003, due to an error in a change to GnuPG intended to make one of those methods more efficient, a security vulnerability was introduced.[31]It affected only one method of digitally signing messages, only for some releases of GnuPG (1.0.2 through 1.2.3), and there were fewer than 1000 such keys listed on the key servers.[32]Most people did not use this method, and were in any case discouraged from doing so, so the damage caused (if any, since none has been publicly reported) would appear to have been minimal. Support for this method has been removed from GnuPG versions released after this discovery (1.2.4 and later).
Two further vulnerabilities were discovered in early 2006; the first being that scripted uses of GnuPG for signature verification may result infalse positives,[33]the second that non-MIME messages were vulnerable to the injection of data which while not covered by the digital signature, would be reported as being part of the signed message.[34]In both cases updated versions of GnuPG were made available at the time of the announcement.
In June 2017, a vulnerability (CVE-2017-7526) was discovered by Bernstein, Breitner and others within Libgcrypt, a library used by GnuPG, which enabled full key recovery for RSA-1024 and for more than 1/8 of RSA-2048 keys. This side-channel attack exploits the fact that Libgcrypt used a sliding windows method for exponentiation, which leads to the leakage of exponent bits and to full key recovery.[35][36] Again, an updated version of GnuPG was made available at the time of the announcement.
Around June 2018, theSigSpoofattacks were announced. These allowed an attacker to convincingly spoof digital signatures.[37][38]
In January 2021, Libgcrypt 1.9.0 was released, which was found to contain a severe bug that was simple to exploit. A fix was released 10 days later in Libgcrypt 1.9.1.[39]
|
https://en.wikipedia.org/wiki/GNU_Privacy_Guard
|
Identity-based encryption (IBE) is an important primitive of identity-based cryptography. As such it is a type of public-key encryption in which the public key of a user is some unique information about the identity of the user (e.g. a user's email address). This means that a sender who has access to the public parameters of the system can encrypt a message using e.g. the text-value of the receiver's name or email address as a key. The receiver obtains its decryption key from a central authority, which needs to be trusted as it generates secret keys for every user.
Identity-based encryption was proposed byAdi Shamirin 1984.[1]He was however only able to give an instantiation ofidentity-based signatures. Identity-based encryption remained an open problem for many years.
Thepairing-basedBoneh–Franklin scheme[2]andCocks's encryption scheme[3]based onquadratic residuesboth solved the IBE problem in 2001.
Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called thePrivate Key Generator(PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the correspondingmaster private key(referred to asmaster key). Given the master public key, any party can compute a public key corresponding to the identity by combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identityIDcontacts the PKG, which uses the master private key to generate the private key for identityID.
As a result, parties may encrypt messages (or verify signatures) with no prior distribution of keys between individual participants. This is extremely useful in cases where pre-distribution of authenticated keys is inconvenient or infeasible due to technical restraints. However, to decrypt or sign messages, the authorized user must obtain the appropriate private key from the PKG. A caveat of this approach is that the PKG must be highly trusted, as it is capable of generating any user's private key and may therefore decrypt (or sign) messages without authorization. Because any user's private key can be generated through the use of the third party's secret, this system has inherentkey escrow. A number of variant systems have been proposed which remove the escrow includingcertificate-based encryption,[4]secure key issuing cryptography[5]andcertificateless cryptography.[6]
The steps involved are depicted in this diagram:
Dan Boneh and Matthew K. Franklin defined a set of four algorithms that form a complete IBE system: Setup, Extract, Encrypt, and Decrypt.
In order for the whole system to work, one has to postulate a correctness condition: decrypting with the private key extracted for an identity ID must recover any message that was encrypted under ID and the master public key.
The most efficient identity-based encryption schemes are currently based onbilinear pairingsonelliptic curves, such as theWeilorTatepairings. The first of these schemes was developed byDan BonehandMatthew K. Franklin(2001), and performsprobabilistic encryptionof arbitrary ciphertexts using anElgamal-like approach. Though theBoneh-Franklin schemeisprovably secure, the security proof rests on relatively new assumptions about the hardness of problems in certain elliptic curve groups.
Another approach to identity-based encryption was proposed by Clifford Cocks in 2001. The Cocks IBE scheme is based on well-studied assumptions (the quadratic residuosity assumption) but encrypts messages one bit at a time with a high degree of ciphertext expansion. Thus it is highly inefficient and impractical for sending all but the shortest messages, such as a session key for use with a symmetric cipher.
A third approach to IBE is through the use of lattices.
The following lists practical identity-based encryption algorithms
All these algorithms havesecurity proofs.
One of the major advantages of any identity-based encryption scheme is that if there are only a finite number of users, after all users have been issued with keys the third party's secret can be destroyed. This can take place because this system assumes that, once issued, keys are always valid (as this basic system lacks a method ofkey revocation). The majority of derivatives of this system which have key revocation lose this advantage.
Moreover, as public keys are derived from identifiers, IBE eliminates the need for a public key distribution infrastructure. Theauthenticityof the public keys is guaranteed implicitly as long as the transport of the private keys to the corresponding user is kept secure (authenticity,integrity,confidentiality).
Apart from these aspects, IBE offers interesting features emanating from the possibility to encode additional information into the identifier. For instance, a sender might specify an expiration date for a message. He appends this timestamp to the actual recipient's identity (possibly using some binary format like X.509). When the receiver contacts the PKG to retrieve the private key for this public key, the PKG can evaluate the identifier and decline the extraction if the expiration date has passed. Generally, embedding data in the ID corresponds to opening an additional channel between sender and PKG with authenticity guaranteed through the dependency of the private key on the identifier.
|
https://en.wikipedia.org/wiki/Identity-based_encryption
|
Key escrow(also known as a"fair" cryptosystem)[1]is an arrangement in which thekeysneeded to decryptencrypteddata are held inescrowso that, under certain circumstances, an authorizedthird partymay gain access to those keys. These third parties may include businesses, who may want access to employees' secure business-relatedcommunications, orgovernments, who may wish to be able to view the contents of encrypted communications (also known asexceptional access).[2]
The technical problem is a largely structural one. Access to protectedinformationmust be providedonlyto the intended recipient and at least one third party. The third party should be permitted access only under carefully controlled conditions, for instance, acourt order. Thus far, no system design has been shown to meet this requirement fully on a technical basis alone. All proposed systems also require correct functioning of some social linkage, for instance the process of request for access, examination of request for 'legitimacy' (as by acourt), and granting of access by technical personnel charged with access control. All such linkages / controls have serious problems from a system design security perspective. Systems in which the key may not be changed easily are rendered especially vulnerable as the accidental release of the key will result in many devices becoming totally compromised, necessitating an immediate key change or replacement of the system.
On a national level, key escrow is controversial in many countries for at least two reasons. One involves mistrust of the security of the structural escrow arrangement. Many countries have a long history of less than adequate protection of others' information by assorted organizations, public and private, even when the information is held only under an affirmative legal obligation to protect it from unauthorized access. Another is technical concerns for the additional vulnerabilities likely to be introduced by supporting key escrow operations.[2]Thus far, no key escrow system has been designed which meets both objections and nearly all have failed to meet even one.
Key escrow is proactive, anticipating the need for access to keys; a retroactive alternative iskey disclosure law, where users are required to surrender keys upon demand by law enforcement, or else face legal penalties. Key disclosure law avoids some of the technical issues and risks of key escrow systems, but also introduces new risks like loss of keys and legal issues such as involuntaryself-incrimination. The ambiguous termkey recoveryis applied to both types of systems.
|
https://en.wikipedia.org/wiki/Key_escrow
|
In cryptography, akey-agreement protocolis a protocol whereby two (or more) parties generate a cryptographickeyas a function of information provided by each honest party so that no party can predetermine the resulting value.[1]In particular, all honest participants influence the outcome. A key-agreement protocol is a specialisation of a key-exchange protocol.[2]
At the completion of the protocol, all parties share the same key. A key-agreement protocol precludes undesired third parties from forcing a key choice on the agreeing parties. A secure key agreement can ensureconfidentialityanddata integrity[3]in communications systems, ranging from simple messaging applications to complex banking transactions.
Secure agreement is defined relative to a security model, for example the Universal Model.[2]More generally, when evaluating protocols, it is important to state security goals and the security model.[4]For example, it may be required for the session key to beauthenticated. A protocol can be evaluated for success only in the context of its goals and attack model.[5]An example of an adversarial model is theDolev–Yao model.
In many key exchange systems, one party generates the key, and sends that key to the other party;[6]the other party has no influence on the key.
The first publicly known[6]public-key agreement protocol that meets the above criteria was theDiffie–Hellman key exchange, in which two parties jointlyexponentiatea generator with random numbers, in such a way that an eavesdropper cannot feasibly determine what the resultant shared key is.
Exponential key agreement in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.
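As a concrete illustration of this joint exponentiation, the toy sketch below shows both parties arriving at the same shared value. The parameters are deliberately undersized and insecure; real deployments use a vetted group of 2048 bits or more, or an elliptic-curve group.

import secrets

# Toy parameters only: p = 2**64 - 59 (a prime) is far too small to be secure.
p = 0xFFFFFFFFFFFFFFC5
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends g^a mod p over the open channel
B = pow(g, b, p)                   # Bob sends g^b mod p over the open channel

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both arrive at g^(ab) mod p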
Symmetric key agreement (SKA) is a method of key agreement that uses solelysymmetric cryptographyandcryptographic hash functionsascryptographic primitives. It is related to symmetric authenticated key exchange.[7]
SKA may assume the use of initial shared secrets,[7] or the presence of a trusted third party with whom the agreeing parties share a secret.[8] If no third party is present, then achieving SKA can be trivial: two parties that share an initial secret have tautologically achieved SKA.
SKA contrasts with key-agreement protocols that include techniques fromasymmetric cryptography, such askey encapsulation mechanisms.
The initial exchange of a shared key must be done in a manner that is private and integrity-assured. Historically, this was achieved by physical means, such as by using a trustedcourier.
An example of a SKA protocol is theNeedham–Schroeder protocol. It establishes asession keybetween two parties on the samenetwork, using aserveras a trusted third party.
The original Needham–Schroeder protocol is vulnerable to a replay attack; timestamps and nonces were later added to prevent this attack. It forms the basis for the Kerberos protocol.
Boyd et al.[9]classify two-party key agreement protocols according to two criteria as follows:
The pre-shared key may be shared between the two parties, or each party may share a key with a trusted third party. If there is no secure channel (as may be established via a pre-shared key), it is impossible to create an authenticated session key.[10]
The session key may be generated via key transport, key agreement, or a hybrid of the two. If there is no trusted third party, then the cases of key transport and hybrid session key generation are indistinguishable. SKA is concerned with protocols in which the session key is established using only symmetric primitives.
Anonymous key exchange, like Diffie–Hellman, does not provideauthenticationof the parties, and is thus vulnerable toman-in-the-middle attacks.
A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement to prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as those described below.
A widely used mechanism for defeating such attacks is the use ofdigitally signedkeys that must be integrity-assured: if Bob's key is signed by atrusted third partyvouching for his identity, Alice can have considerable confidence that a signed key she receives is not an attempt to intercept by Eve. WhenAlice and Bobhave a public-key infrastructure, they may digitally sign an agreed Diffie–Hellman key, or exchanged Diffie–Hellman public keys. Such signed keys, sometimes signed by acertificate authority, are one of the primary mechanisms used for secureweb traffic(includingHTTPS,SSLorTLSprotocols). Other specific examples areMQV,YAKand theISAKMPcomponent of the IPsec protocol suite for securing Internet Protocol communications. However, these systems require care in endorsing the match between identity information and public keys by certificate authorities in order to work properly.
Hybrid systems use public-key cryptography to exchange secret keys, which are then used in symmetric-key cryptography systems. Most practical applications of cryptography use a combination of cryptographic functions to implement an overall system that provides all four desirable features of secure communications (confidentiality, integrity, authentication, and non-repudiation).
Password-authenticated key agreementprotocols require the separate establishment of apassword(which may be smaller than a key) in a manner that is both private and integrity-assured. These are designed to resist man-in-the-middle and other active attacks on the password and the established keys. For example, DH-EKE,SPEKE, andSRPare password-authenticated variations of Diffie–Hellman.
If one has an integrity-assured way to verify a shared key over a public channel, one may engage in aDiffie–Hellman key exchangeto derive a short-term shared key, and then subsequently authenticate that the keys match. One way is to use a voice-authenticated read-out of the key, as inPGPfone. Voice authentication, however, presumes that it is infeasible for a man-in-the-middle to spoof one participant's voice to the other in real-time, which may be an undesirable assumption. Such protocols may be designed to work with even a small public value, such as a password. Variations on this theme have been proposed forBluetoothpairing protocols.
In an attempt to avoid using any additional out-of-band authentication factors, Davies and Price proposed the use of theinterlock protocolofRon RivestandAdi Shamir, which has been subject to both attack and subsequent refinement.
|
https://en.wikipedia.org/wiki/Key-agreement_protocol
|
ThePGP Word List("Pretty Good Privacyword list", also called abiometric word listfor reasons explained below) is a list ofwordsfor conveying databytesin a clear unambiguous way via a voice channel. They are analogous in purpose to theNATO phonetic alphabet, except that a longer list of words is used, each word corresponding to one of the 256 distinct numeric byte values.
The PGP Word List was designed in 1995 byPatrick Juola, acomputational linguist, andPhilip Zimmermann, creator ofPGP.[1][2]The words were carefully chosen for theirphoneticdistinctiveness, usinggenetic algorithmsto select lists of words that had optimum separations inphonemespace. The candidate word lists were randomly drawn fromGrady Ward'sMoby Pronunciatorlist as raw material for the search, successively refined by the genetic algorithms. The automated search converged to an optimized solution in about 40 hours on aDEC Alpha, a particularly fast machine in that era.
The Zimmermann–Juola list was originally designed to be used inPGPfone, a secure VoIP application, to allow the two parties to verbally compare a short authentication string to detect aman-in-the-middle attack(MiTM). It was called abiometricword list because the authentication depended on the two human users recognizing each other's distinct voices as they read and compared the words over the voice channel, binding the identity of the speaker with the words, which helped protect against the MiTM attack. The list can be used in many other situations where a biometric binding of identity is not needed, so calling it a biometric word list may be imprecise. Later, it was used inPGPto compare and verify PGPpublic keyfingerprintsover a voice channel. This is known in PGP applications as the "biometric" representation. When it was applied to PGP, the list of words was further refined, with contributions byJon Callas. More recently, it has been used inZfoneand theZRTPprotocol, the successor to PGPfone.
The list is actually composed of two lists, each containing 256phoneticallydistinct words, in which each word represents a different byte value between 0 and 255. Two lists are used because reading aloud long random sequences of human words usually risks three kinds of errors: 1) transposition of two consecutive words, 2) duplicate words, or 3) omitted words. To detect all three kinds of errors, the two lists are used alternately for the even-offset bytes and the odd-offset bytes in the byte sequence. Each byte value is actually represented by two different words, depending on whether that byte appears at an even or an odd offset from the beginning of the byte sequence. The two lists are readily distinguished by the number ofsyllables; the even list has words of two syllables, the odd list has three. The two lists have a maximum word length of 9 and 11 letters, respectively. Using a two-list scheme was suggested by Zhahai Stewart.
The two lists of words are reproduced in the PGPfone Owner's Manual.[3]
Each byte in a bytestring is encoded as a single word. A sequence of bytes is rendered innetwork byte order, from left to right. For example, the leftmost (i.e. byte 0) is considered "even" and is encoded using the PGP Even Word table. The next byte to the right (i.e. byte 1) is considered "odd" and is encoded using the PGP Odd Word table. This process repeats until all bytes are encoded. Thus, "E582" produces "topmost Istanbul", whereas "82E5" produces "miser travesty".
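A minimal sketch of this encoding rule follows. The two dictionaries contain only the four byte-to-word mappings quoted above; the real tables each hold 256 entries and are taken from the PGPfone Owner's Manual.

# Partial tables for illustration; the full even (two-syllable) and
# odd (three-syllable) lists each map all 256 byte values to words.
EVEN_WORDS = {0xE5: "topmost", 0x82: "miser"}
ODD_WORDS = {0xE5: "travesty", 0x82: "Istanbul"}

def encode(data: bytes) -> str:
    """Even byte offsets use the two-syllable list, odd offsets the
    three-syllable list, so transposed, duplicated, or omitted words
    are detectable."""
    words = []
    for offset, byte in enumerate(data):
        table = EVEN_WORDS if offset % 2 == 0 else ODD_WORDS
        words.append(table[byte])
    return " ".join(words)

print(encode(bytes.fromhex("E582")))  # -> "topmost Istanbul"
print(encode(bytes.fromhex("82E5")))  # -> "miser travesty"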
A PGP public key fingerprint displayed in hexadecimal can thus equivalently be displayed in PGP Words (the "biometric" fingerprint), with each byte replaced by the corresponding word.
The order of bytes in a bytestring depends onendianness.
There are several other word lists for conveying data in a clear, unambiguous way via a voice channel.
|
https://en.wikipedia.org/wiki/PGP_word_list
|
Post-quantum cryptography(PQC), sometimes referred to asquantum-proof,quantum-safe, orquantum-resistant, is the development ofcryptographicalgorithms (usuallypublic-keyalgorithms) that are currently thought to be secure against acryptanalytic attackby aquantum computer. Most widely-used public-key algorithms rely on the difficulty of one of three mathematical problems: theinteger factorization problem, thediscrete logarithm problemor theelliptic-curve discrete logarithm problem. All of these problems could be easily solved on a sufficiently powerful quantum computer runningShor's algorithm[1][2]or possibly alternatives.[3]
As of 2024, quantum computers lack theprocessing powerto break widely used cryptographic algorithms;[4]however, because of the length of time required for migration to quantum-safe cryptography, cryptographers are already designing new algorithms to prepare forY2QorQ-Day, the day when current algorithms will be vulnerable to quantum computing attacks.Mosca's theoremprovides the risk analysis framework that helps organizations identify how quickly they need to start migrating.
Their work has gained attention from academics and industry through the PQCryptoconferenceseries hosted since 2006, several workshops on Quantum Safe Cryptography hosted by theEuropean Telecommunications Standards Institute(ETSI), and theInstitute for Quantum Computing.[5][6][7]The rumoured existence of widespreadharvest now, decrypt laterprograms has also been seen as a motivation for the early introduction of post-quantum algorithms, as data recorded now may still remain sensitive many years into the future.[8][9][10]
In contrast to the threat quantum computing poses to current public-key algorithms, most currentsymmetric cryptographic algorithmsandhash functionsare considered to be relatively secure against attacks by quantum computers.[2][11]While the quantumGrover's algorithmdoes speed up attacks against symmetric ciphers, doubling the key size can effectively counteract these attacks.[12]Thus post-quantum symmetric cryptography does not need to differ significantly from current symmetric cryptography.
In 2024, the U.S.National Institute of Standards and Technology(NIST) released final versions of its first three Post-Quantum Cryptography Standards.[13]
Post-quantum cryptography research is mostly focused on six different approaches:[2][6]
Lattice-based cryptography includes cryptographic systems such as learning with errors, ring learning with errors (ring-LWE),[14][15][16] the ring learning with errors key exchange and the ring learning with errors signature, the older NTRU or GGH encryption schemes, and the newer NTRU signature and BLISS signatures.[17] Some of these schemes, like NTRU encryption, have been studied for many years without anyone finding a feasible attack. Others, like the ring-LWE algorithms, have proofs that their security reduces to a worst-case problem.[18] The Post-Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU be studied for standardization rather than the NTRU algorithm.[19][20] At that time, NTRU was still patented. Studies have indicated that NTRU may have more secure properties than other lattice-based algorithms.[21]
Multivariate cryptography includes cryptographic systems such as the Rainbow (Unbalanced Oil and Vinegar) scheme, which is based on the difficulty of solving systems of multivariate equations. Various attempts to build secure multivariate equation encryption schemes have failed. However, multivariate signature schemes like Rainbow could provide the basis for a quantum-secure digital signature.[22] The Rainbow Signature Scheme is patented (the patent expires in August 2029).
Hash-based cryptography includes cryptographic systems such as Lamport signatures, the Merkle signature scheme, the XMSS,[23] the SPHINCS,[24] and the WOTS schemes. Hash-based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. Their primary drawback is that for any hash-based public key, there is a limit on the number of signatures that can be signed using the corresponding set of private keys. This fact reduced interest in these signatures until interest was revived due to the desire for cryptography that was resistant to attack by quantum computers. There appear to be no patents on the Merkle signature scheme[citation needed] and there exist many non-patented hash functions that could be used with these schemes. The stateful hash-based signature scheme XMSS, developed by a team of researchers under the direction of Johannes Buchmann, is described in RFC 8391.[25]
Note that all the above schemes are one-time or bounded-time signatures. Moni Naor and Moti Yung invented UOWHF hashing in 1989 and designed a signature scheme based on hashing (the Naor–Yung scheme)[26] which can be used an unlimited number of times (the first such signature that does not require trapdoor properties).
Code-based cryptography includes cryptographic systems which rely on error-correcting codes, such as the McEliece and Niederreiter encryption algorithms and the related Courtois, Finiasz and Sendrier Signature scheme. The original McEliece signature using random Goppa codes has withstood scrutiny for over 40 years. However, many variants of the McEliece scheme, which seek to introduce more structure into the code used in order to reduce the size of the keys, have been shown to be insecure.[27] The Post-Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long term protection against attacks by quantum computers.[19]
Isogeny-based cryptography relies on the properties of isogeny graphs of elliptic curves (and higher-dimensional abelian varieties) over finite fields, in particular supersingular isogeny graphs, to create cryptographic systems. Among the more well-known representatives of this field are the Diffie–Hellman-like key exchange CSIDH, which can serve as a straightforward quantum-resistant replacement for the Diffie–Hellman and elliptic curve Diffie–Hellman key-exchange methods that are in widespread use today,[28] and the signature scheme SQIsign, which is based on the categorical equivalence between supersingular elliptic curves and maximal orders in particular types of quaternion algebras.[29] Another widely noticed construction, SIDH/SIKE, was spectacularly broken in 2022.[30] The attack is however specific to the SIDH/SIKE family of schemes and does not generalize to other isogeny-based constructions.[31]
Provided one uses sufficiently large key sizes, symmetric-key cryptographic systems like AES and SNOW 3G are already resistant to attack by a quantum computer.[32] Further, key management systems and protocols that use symmetric key cryptography instead of public key cryptography, like Kerberos and the 3GPP Mobile Network Authentication Structure, are also inherently secure against attack by a quantum computer. Given its widespread deployment in the world already, some researchers recommend expanded use of Kerberos-like symmetric key management as an efficient way to get post-quantum cryptography today.[33]
In cryptography research, it is desirable to prove the equivalence of a cryptographic algorithm and a known hard mathematical problem. These proofs are often called "security reductions", and are used to demonstrate the difficulty of cracking the encryption algorithm. In other words, the security of a given cryptographic algorithm is reduced to the security of a known hard problem. Researchers are actively looking for security reductions in the prospects for post-quantum cryptography. Current results are given here:
In some versions ofRing-LWEthere is a security reduction to theshortest-vector problem (SVP)in a lattice as a lower bound on the security. The SVP is known to beNP-hard.[34]Specific ring-LWE systems that have provable security reductions include a variant of Lyubashevsky's ring-LWE signatures defined in a paper by Güneysu, Lyubashevsky, and Pöppelmann.[15]The GLYPH signature scheme is a variant of theGüneysu, Lyubashevsky, and Pöppelmann (GLP) signaturewhich takes into account research results that have come after the publication of the GLP signature in 2012. Another Ring-LWE signature is Ring-TESLA.[35]There also exists a "derandomized variant" of LWE, called Learning with Rounding (LWR), which yields "improved speedup (by eliminating sampling small errors from a Gaussian-like distribution with deterministic errors) and bandwidth".[36]While LWE utilizes the addition of a small error to conceal the lower bits, LWR utilizes rounding for the same purpose.
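The contrast between LWE's added error and LWR's rounding can be shown in a toy sketch. The parameters below are far too small to be secure, and a uniform small error stands in for a proper discrete Gaussian; both are assumptions made only for illustration.

import random

n, q, p = 8, 257, 16          # dimension, LWE modulus, LWR rounding modulus
s = [random.randrange(q) for _ in range(n)]   # secret vector

def lwe_sample():
    """LWE: hide the inner product <a, s> by adding a small error term."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-2, -1, 0, 1, 2])      # stand-in for a discrete Gaussian
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample():
    """LWR: hide <a, s> deterministically by rounding to a smaller modulus."""
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * inner / q) % p              # the "error" comes from rounding
    return a, b

print(lwe_sample())
print(lwr_sample())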
The security of the NTRU encryption scheme and the BLISS[17] signature is believed to be related to, but not provably reducible to, the closest vector problem (CVP) in a lattice. The CVP is known to be NP-hard. The Post-Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU, which does have a security reduction, be studied for long term use instead of the original NTRU algorithm.[19]
Unbalanced Oil and Vinegarsignature schemes are asymmetriccryptographicprimitives based onmultivariate polynomialsover afinite fieldF{\displaystyle \mathbb {F} }. Bulygin, Petzoldt and Buchmann have shown a reduction of generic multivariate quadratic UOV systems to the NP-Hardmultivariate quadratic equation solving problem.[37]
In 2005, Luis Garcia proved that there was a security reduction ofMerkle Hash Treesignatures to the security of the underlying hash function. Garcia showed in his paper that if computationally one-way hash functions exist then the Merkle Hash Tree signature is provably secure.[38]
Therefore, if one used a hash function with a provable reduction of security to a known hard problem one would have a provable security reduction of theMerkle treesignature to that known hard problem.[39]
ThePost-Quantum Cryptography Study Groupsponsored by theEuropean Commissionhas recommended use of Merkle signature scheme for long term security protection against quantum computers.[19]
The McEliece Encryption System has a security reduction to the syndrome decoding problem (SDP). The SDP is known to beNP-hard.[40]The Post-Quantum Cryptography Study Group sponsored by the European Commission has recommended the use of this cryptography for long term protection against attack by a quantum computer.[19]
In 2016, Wang proposed a random linear code encryption scheme, RLCE,[41] which is based on McEliece schemes. An RLCE scheme can be constructed using any linear code, such as a Reed–Solomon code, by inserting random columns into the underlying linear code's generator matrix.
The security of isogeny-based schemes is related to the problem of constructing an isogeny between two supersingular curves with the same number of points. The most recent investigation of the difficulty of this problem, by Delfs and Galbraith, indicates that the problem is as hard as the inventors of the key exchange suggest.[42] There is no security reduction to a known NP-hard problem.
One common characteristic of many post-quantum cryptography algorithms is that they require larger key sizes than commonly used "pre-quantum" public key algorithms. There are often tradeoffs to be made in key size, computational efficiency and ciphertext or signature size. Representative values for different schemes at a 128-bit post-quantum security level are given below.
A practical consideration on a choice among post-quantum cryptographic algorithms is the effort required to send public keys over the internet. From this point of view, the Ring-LWE, NTRU, and SIDH algorithms provide key sizes conveniently under 1 kB, hash-signature public keys come in under 5 kB, and MDPC-based McEliece takes about 1 kB. On the other hand, Rainbow schemes require about 125 kB and Goppa-based McEliece requires a nearly 1 MB key.
The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper[53]appeared in 2012 after a provisional patent application was filed in 2012.
In 2014, Peikert[54] presented a key-transport scheme following the same basic idea as Ding's, which also uses Ding's idea of sending an additional one-bit signal for rounding. For somewhat more than 128 bits of security, Singh presents a set of parameters which have 6956-bit public keys for Peikert's scheme.[55] The corresponding private key would be roughly 14,000 bits.
In 2015, an authenticated key exchange with provable forward security following the same basic idea of Ding's was presented at Eurocrypt 2015,[56]which is an extension of the HMQV[57]construction in Crypto2005. The parameters for different security levels from 80 bits to 350 bits, along with the corresponding key sizes are provided in the paper.[56]
For 128 bits of security in NTRU, Hirschhorn, Hoffstein, Howgrave-Graham and Whyte recommend using a public key represented as a degree-613 polynomial with coefficients mod 2^10. This results in a public key of 6130 bits. The corresponding private key would be 6743 bits.[44]
For 128 bits of security and the smallest signature size in a Rainbow multivariate quadratic equation signature scheme, Petzoldt, Bulygin and Buchmann recommend using equations in GF(31) with a public key size of just over 991,000 bits, a private key of just over 740,000 bits and digital signatures which are 424 bits in length.[45]
In order to get 128 bits of security for hash-based signatures to sign 1 million messages using the fractal Merkle tree method of Naor, Shenhav and Wool, the public and private key sizes are roughly 36,000 bits in length.[58]
For 128 bits of security in a McEliece scheme, the European Commission's Post-Quantum Cryptography Study Group recommends using a binary Goppa code of length at least n = 6960 and dimension at least k = 5413, capable of correcting t = 119 errors. With these parameters the public key for the McEliece system will be a systematic generator matrix whose non-identity part takes k × (n − k) = 8,373,911 bits. The corresponding private key, which consists of the code support with n = 6960 elements from GF(2^13) and a generator polynomial with t = 119 coefficients from GF(2^13), will be 92,027 bits in length.[19]
The group is also investigating the use of quasi-cyclic MDPC codes of length at least n = 2^16 + 6 = 65542 and dimension at least k = 2^15 + 3 = 32771, capable of correcting t = 264 errors. With these parameters the public key for the McEliece system will be the first row of a systematic generator matrix whose non-identity part takes k = 32771 bits. The private key, a quasi-cyclic parity-check matrix with d = 274 nonzero entries on a column (or twice as many on a row), takes no more than d × 16 = 4384 bits when represented as the coordinates of the nonzero entries on the first row.
Barreto et al. recommend using a binary Goppa code of length at least n = 3307 and dimension at least k = 2515, capable of correcting t = 66 errors. With these parameters the public key for the McEliece system will be a systematic generator matrix whose non-identity part takes k × (n − k) = 1,991,880 bits.[59] The corresponding private key, which consists of the code support with n = 3307 elements from GF(2^12) and a generator polynomial with t = 66 coefficients from GF(2^12), will be 40,476 bits in length.
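The stated sizes follow directly from the quoted parameters; the snippet below simply checks the arithmetic for the NTRU figures and the two binary Goppa parameter sets above.

# Sanity check of the key-size arithmetic quoted above.

# NTRU (Hirschhorn et al.): degree-613 polynomial, coefficients mod 2^10.
assert 613 * 10 == 6130                     # public key bits

# McEliece, EU study group parameters: n = 6960, k = 5413, t = 119 over GF(2^13).
n, k, t, m = 6960, 5413, 119, 13
assert k * (n - k) == 8373911               # non-identity part of generator matrix
assert n * m + t * m == 92027               # code support + generator polynomial

# McEliece, Barreto et al. parameters: n = 3307, k = 2515, t = 66 over GF(2^12).
n, k, t, m = 3307, 2515, 66, 12
assert k * (n - k) == 1991880
assert n * m + t * m == 40476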
For 128 bits of security in the supersingular isogeny Diffie–Hellman (SIDH) method, De Feo, Jao and Plut recommend using a supersingular curve modulo a 768-bit prime. If one uses elliptic curve point compression the public key will need to be no more than 8 × 768 = 6144 bits in length.[60] A March 2016 paper by Azarderakhsh, Jao, Kalach, Koziel, and Leonardi showed how to cut the number of bits transmitted in half, which was further improved by Costello, Jao, Longa, Naehrig, Renes and Urbanik, resulting in a compressed-key version of the SIDH protocol with public keys only 2640 bits in size.[52] This makes the number of bits transmitted roughly equivalent to the non-quantum secure RSA and Diffie–Hellman at the same classical security level.[61]
As a general rule, for 128 bits of security in a symmetric-key–based system, one can safely use key sizes of 256 bits. The best quantum attack against arbitrary symmetric-key systems is an application ofGrover's algorithm, which requires work proportional to the square root of the size of the key space. To transmit an encrypted key to a device that possesses the symmetric key necessary to decrypt that key requires roughly 256 bits as well. It is clear that symmetric-key systems offer the smallest key sizes for post-quantum cryptography.[citation needed]
A public-key system demonstrates a property referred to as perfectforward secrecywhen it generates random public keys per session for the purposes of key agreement. This means that the compromise of one message cannot lead to the compromise of others, and also that there is not a single secret value which can lead to the compromise of multiple messages. Security experts recommend using cryptographic algorithms that support forward secrecy over those that do not.[62]The reason for this is that forward secrecy can protect against the compromise of long term private keys associated with public/private key pairs. This is viewed as a means of preventing mass surveillance by intelligence agencies.
Both the Ring-LWE key exchange and the supersingular isogeny Diffie–Hellman (SIDH) key exchange can support forward secrecy in one exchange with the other party. Both Ring-LWE and SIDH can also be used without forward secrecy by creating a variant of the classic ElGamal encryption approach to Diffie–Hellman.
The other algorithms in this article, such as NTRU, do not support forward secrecy as is.
Any authenticated public key encryption system can be used to build a key exchange with forward secrecy.[63]
The Open Quantum Safe (OQS) project was started in late 2016 and has the goal of developing and prototyping quantum-resistant cryptography.[64][65] It aims to integrate current post-quantum schemes in one library: liboqs,[66] an open source C library for quantum-resistant cryptographic algorithms. It initially focused on key exchange algorithms but now also includes several signature schemes. It provides a common API suitable for post-quantum key exchange algorithms, and collects together various implementations. liboqs also includes a test harness and benchmarking routines to compare performance of post-quantum implementations. Furthermore, OQS provides integration of liboqs into OpenSSL.[67]
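A minimal key-encapsulation round trip, assuming the liboqs-python bindings (the algorithm name chosen here and the exact set of enabled mechanisms vary across liboqs releases), might look like this:

import oqs

kem_alg = "Kyber512"  # illustrative choice; query oqs.get_enabled_kem_mechanisms()
with oqs.KeyEncapsulation(kem_alg) as receiver, oqs.KeyEncapsulation(kem_alg) as sender:
    public_key = receiver.generate_keypair()              # receiver publishes this
    ciphertext, secret_sender = sender.encap_secret(public_key)
    secret_receiver = receiver.decap_secret(ciphertext)   # uses its stored secret key
    assert secret_sender == secret_receiver               # both hold the shared secret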
As of March 2023, liboqs supports a range of key exchange algorithms.[64]
As of August 2024, NIST has published three of these algorithms as FIPS standards, ML-KEM (FIPS 203), ML-DSA (FIPS 204) and SLH-DSA (FIPS 205), with a fourth expected near the end of the year.[68]
Older supported algorithms have been removed as the NIST Post-Quantum Cryptography Standardization Project has progressed.
One of the main challenges in post-quantum cryptography is considered to be the implementation of potentially quantum-safe algorithms in existing systems. Tests have been done, for example by Microsoft Research, which implemented PICNIC in a PKI using hardware security modules.[86] Test implementations for Google's NewHope algorithm have also been done by HSM vendors. In August 2023, Google released a FIDO2 security key implementation of an ECC/Dilithium hybrid signature schema, which was done in partnership with ETH Zürich.[87]
TheSignal ProtocolusesPost-Quantum Extended Diffie–Hellman(PQXDH).[88]
On February 21, 2024, Apple announced that they were going to upgrade their iMessage protocol with a new PQC protocol called "PQ3".[89][90][91] Apple stated that, although quantum computers do not exist yet, they wanted to mitigate risks from future quantum computers as well as so-called "harvest now, decrypt later" attack scenarios. Apple stated that they believe their PQ3 implementation provides protections that "surpass those in all other widely deployed messaging apps", because it utilizes ongoing keying.
Apple intends to fully replace the existing iMessage protocol within all supported conversations with PQ3 by the end of 2024. Apple also defined a scale to make it easier to compare the security properties of messaging apps, with a scale represented by levels ranging from 0 to 3: 0 for no end-to-end by default, 1 for pre-quantum end-to-end by default, 2 for PQC key establishment only (e.g. PQXDH), and 3 for PQC key establishmentandongoing rekeying (PQ3).[89]
A number of other notable implementations exist as well.
Google has maintained the use of "hybrid encryption" in its use of post-quantum cryptography: whenever a relatively new post-quantum scheme is used, it is combined with a more proven, non-PQ scheme. This is to ensure that the data are not compromised even if the relatively new PQ algorithm turns out to be vulnerable to non-quantum attacks before Y2Q. This type of scheme is used in its 2016 and 2019 tests for post-quantum TLS,[94]and in its 2023 FIDO2 key.[87]Indeed, one of the algorithms used in the 2019 test, SIKE, was broken in 2022, but the non-PQ X25519 layer (already used widely in TLS) still protected the data.[94]Apple's PQ3 and Signal's PQXDH are also hybrid.[89]
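A minimal sketch of the hybrid idea follows, assuming the Python cryptography package for the classical X25519 half; the post-quantum KEM secret is stood in by a placeholder random value (in practice it would come from a scheme such as ML-KEM). The session key stays safe unless both components are broken.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())  # X25519 shared secret

pq_secret = os.urandom(32)  # placeholder for a real post-quantum KEM's shared secret

# Derive the session key from the concatenation of both secrets.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"hybrid key exchange sketch",
).derive(classical_secret + pq_secret)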
The NSA and GCHQ argue against hybrid encryption, claiming that it adds complexity to implementation and transition. Daniel J. Bernstein, who supports hybrid encryption, argues that these claims are bogus.[94]
|
https://en.wikipedia.org/wiki/Post-quantum_cryptography
|
Pretty Good Privacy(PGP) is anencryption programthat providescryptographicprivacyandauthenticationfordata communication. PGP is used forsigning, encrypting, and decrypting texts,e-mails, files, directories, and whole disk partitions and to increase thesecurityof e-mail communications.Phil Zimmermanndeveloped PGP in 1991.[4]
PGP and similar software follow the OpenPGP standard (RFC 4880), anopen standardforencryptingand decryptingdata. Modern versions of PGP areinteroperablewithGnuPGand other OpenPGP-compliant systems.[5]
The OpenPGP standard has received criticism for its long-lived keys and steep learning curve,[6] as well as the Efail security vulnerability that previously arose when select e-mail programs used OpenPGP with S/MIME.[7][8] The new OpenPGP standard (RFC 9580) has also been criticised by the maintainer of GnuPG, Werner Koch, who in response created his own specification, LibrePGP.[9] This response proved divisive, with some embracing his alternative specification[10] and others considering it to be insecure.[11]
PGP encryption uses a serial combination ofhashing,data compression,symmetric-key cryptography, and finallypublic-key cryptography; each step uses one of several supportedalgorithms. Each public key is bound to a username or an e-mail address. The first version of this system was generally known as aweb of trustto contrast with theX.509system, which uses a hierarchical approach based oncertificate authorityand which was added to PGP implementations later. Current versions of PGP encryption include options through an automated key management server.
Apublic key fingerprintis a shorter version of a public key. From a fingerprint, someone can validate the correct corresponding public key. A fingerprint such as C3A6 5E46 7B54 77DF 3C4C 9790 4D22 B3CA 5B32 FF66 can be printed on a business card.[12][13]
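In simplified form, a fingerprint can be computed and grouped as below. This is an illustration only: real OpenPGP v4 fingerprints are the SHA-1 of a specifically framed public-key packet, not of raw key bytes as assumed here.

import hashlib

public_key_bytes = b"...serialized public key material..."  # placeholder input
digest = hashlib.sha1(public_key_bytes).hexdigest().upper()
# A 160-bit SHA-1 digest yields ten 4-digit hex groups, as on a business card.
fingerprint = " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))
print(fingerprint)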
As PGP evolves, versions that support newer features andalgorithmscan create encrypted messages that older PGP systems cannot decrypt, even with a valid private key. Therefore, it is essential that partners in PGP communication understand each other's capabilities or at least agree on PGP settings.[14]
PGP can be used to send messages confidentially.[15]For this, PGP uses ahybrid cryptosystemby combiningsymmetric-key encryptionand public-key encryption. The message is encrypted using a symmetric encryption algorithm, which requires asymmetric keygenerated by the sender. The symmetric key is used only once and is also called asession key. The message and its session key are sent to the receiver. The session key must be sent to the receiver so they know how to decrypt the message, but to protect it during transmission it is encrypted with the receiver's public key. Only the private key belonging to the receiver can decrypt the session key, and use it to symmetrically decrypt the message.
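The following sketch shows this hybrid pattern in miniature, assuming the Python cryptography package; it illustrates the idea only and does not produce actual OpenPGP message packets.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender side: encrypt the message under a fresh one-time session key,
# then wrap the session key with the recipient's public key.
session_key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"attack at dawn", None)
wrapped_key = recipient_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Receiver side: unwrap the session key with the private key, then decrypt.
recovered_key = recipient_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"attack at dawn"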
PGP supports message authentication and integrity checking. The latter is used to detect whether a message has been altered since it was completed (themessage integrityproperty) and the former, to determine whether it was actually sent by the person or entity claimed to be the sender (adigital signature). Because the content is encrypted, any changes in the message will fail the decryption with the appropriate key. The sender uses PGP to create a digital signature for the message with one of several supported public-key algorithms. To do so, PGP computes ahash, or digest, from the plaintext and then creates the digital signature from that hash using the sender's private key.
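A minimal hash-then-sign round trip, again assuming the Python cryptography package rather than the OpenPGP signature format, looks like this:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I, the sender, wrote this."

# The library hashes the message with SHA-256 and signs the digest
# with the sender's private key.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification uses the public key; raises InvalidSignature on tampering.
sender_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)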
Both when encrypting messages and when verifying signatures, it is critical that the public key used to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not a reliable assurance of that association; deliberate (or accidental) impersonation is possible. From its first version, PGP has always included provisions for distributing user's public keys in an 'identity certification', which is also constructed cryptographically so that any tampering (or accidental garble) is readily detectable. However, merely making a certificate that is impossible to modify without being detected is insufficient; this can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person or entity claiming it. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third-party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence that can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key.
The web of trust protocol was first described by Phil Zimmermann in 1992, in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
The web of trust mechanism has advantages over a centrally managed public key infrastructure scheme such as that used by S/MIME but has not been universally used. Users have to be willing to accept certificates and check their validity manually, or simply accept them without verification. No satisfactory solution has been found for the underlying problem.
In the (more recent) OpenPGP specification,trust signaturescan be used to support creation ofcertificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities.
PGP versions have always included a way to cancel ('revoke') public key certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to thecertificate revocation listsof centralised PKI schemes. Recent PGP versions have also supported certificate expiration dates.
The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key/private key cryptosystems have the same problem, even if in slightly different guises, and no fully satisfactory solution is known. PGP's original scheme at least leaves the decision as to whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a centralcertificate authoritybe accepted as correct.
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic or computational means. Indeed, in 1995,cryptographerBruce Schneiercharacterized an early version as being "the closest you're likely to get to military-grade encryption."[16]Early versions of PGP have been found to have theoretical vulnerabilities and so current versions are recommended.[17]In addition to protectingdata in transitover a network, PGP encryption can also be used to protect data in long-term data storage such as disk files. These long-term storage options are also known as data at rest, i.e. data stored, not in transit.
The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by directcryptanalysiswith current equipment and techniques.
In the original version, theRSAalgorithm was used to encrypt session keys. RSA's security depends upon theone-way functionnature of mathematicalinteger factoring.[18]Similarly, the symmetric key algorithm used in PGP version 2 wasIDEA, which might at some point in the future be found to have previously undetected cryptanalytic flaws. Specific instances of current PGP or IDEA insecurities (if they exist) are not publicly known. As current versions of PGP have added additional encryption algorithms, their cryptographic vulnerability varies with the algorithm used. However, none of the algorithms in current use are publicly known to have cryptanalytic weaknesses.
New versions of PGP are released periodically and vulnerabilities fixed by developers as they come to light. Any agency wanting to read PGP messages would probably use easier means than standard cryptanalysis, e.g.rubber-hose cryptanalysisorblack-bag cryptanalysis(e.g. installing some form oftrojan horseorkeystroke loggingsoftware/hardware on the target computer to capture encryptedkeyringsand their passwords). TheFBIhas already used this attack against PGP[19][20]in its investigations. However, any such vulnerabilities apply not just to PGP but to any conventional encryption software.
In 2003, an incident involving seizedPsionPDAsbelonging to members of theRed Brigadeindicated that neither theItalian policenor the FBI were able to decrypt PGP-encrypted files stored on them.[21][unreliable source?]
A second incident in December 2006, (seeIn re Boucher), involvingUS customs agentswho seized alaptop PCthat allegedly containedchild pornography, indicates that US government agencies find it "nearly impossible" to access PGP-encrypted files. Additionally, a magistrate judge ruling on the case in November 2007 has stated that forcing the suspect to reveal his PGP passphrase would violate hisFifth Amendmentrights i.e. a suspect's constitutional right not to incriminate himself.[22][23]The Fifth Amendment issue was opened again as the government appealed the case, after which a federal district judge ordered the defendant to provide the key.[24]
Evidence suggests that as of 2007, British police investigators are unable to break PGP,[25] so instead have resorted to using RIPA legislation to demand the passwords/keys. In November 2009 a British citizen was convicted under RIPA legislation and jailed for nine months for refusing to provide police investigators with encryption keys to PGP-encrypted files.[26]
PGP as a cryptosystem has been criticized for the complexity of the standard and its implementations, and for the very low usability of its user interface,[27] including by recognized figures in cryptography research.[28][29] It uses an ineffective serialization format for storage of both keys and encrypted data, which resulted in signature-spamming attacks on public keys of prominent developers of GNU Privacy Guard. Backwards compatibility of the OpenPGP standard results in the usage of relatively weak default choices of cryptographic primitives (CAST5 cipher, CFB mode, S2K password hashing).[30] The standard has also been criticized for leaking metadata, usage of long-term keys and lack of forward secrecy. Popular end-user implementations have suffered from various signature-stripping, cipher downgrade and metadata leakage vulnerabilities which have been attributed to the complexity of the standard.[31]
Phil Zimmermanncreated the first version of PGP encryption in 1991. The name, "Pretty Good Privacy" was inspired by the name of agrocerystore, "Ralph's Pretty Good Grocery", featured in radio hostGarrison Keillor's fictional town,Lake Wobegon.[32]This first version included asymmetric-key algorithmthat Zimmermann had designed himself, namedBassOmaticafter aSaturday Night Livesketch. Zimmermann had been a long-timeanti-nuclear activist, and created PGP encryption so that similarly inclined people might securely useBBSsand securely store messages and files. No license fee was required for its non-commercial use, and the completesource codewas included with all copies.
In a posting of June 5, 2001, entitled "PGP Marks 10th Anniversary",[33]Zimmermann describes the circumstances surrounding his release of PGP:
It was on this day in 1991 that I sent the first release of PGP to a couple of my friends for uploading to the Internet. First, I sent it to Allan Hoeltje, who posted it to Peacenet, an ISP that specialized in grassroots political organizations, mainly in the peace movement. Peacenet was accessible to political activists all over the world. Then, I uploaded it to Kelly Goen, who proceeded to upload it to a Usenet newsgroup that specialized in distributing source code. At my request, he marked the Usenet posting as "US only". Kelly also uploaded it to many BBS systems around the country. I don't recall if the postings to the Internet began on June 5th or 6th.
It may be surprising to some that back in 1991, I did not yet know enough about Usenet newsgroups to realize that a "US only" tag was merely an advisory tag that had little real effect on how Usenet propagated newsgroup postings. I thought it actually controlled how Usenet routed the posting. But back then, I had no clue how to post anything on a newsgroup, and didn't even have a clear idea what a newsgroup was.
PGP found its way onto theInternetand rapidly acquired a considerable following around the world. Users and supporters included dissidents in totalitarian countries (some affecting letters to Zimmermann have been published, some of which have been included in testimony before the US Congress),civil libertariansin other parts of the world (see Zimmermann's published testimony in various hearings), and the 'free communications' activists who called themselvescypherpunks(who provided both publicity and distribution); decades later,CryptoPartyactivists did much the same viaTwitter.
Shortly after its release, PGP encryption found its way outside theUnited States, and in February 1993 Zimmermann became the formal target of a criminal investigation by the US Government for "munitionsexport without a license". At the time, cryptosystems using keys larger than40 bitswere considered munitions within the definition of theUS export regulations; PGP has never used keys smaller than 128 bits, so it qualified at that time. Penalties for violation, if found guilty, were substantial. After several years, the investigation of Zimmermann was closed without filing criminal charges against him or anyone else.
Zimmermann challenged these regulations in an imaginative way. In 1995, he published the entiresource codeof PGP in a hardback book,[34]viaMIT Press, which was distributed and sold widely. Anybody wishing to build their own copy of PGP could cut off the covers, separate the pages, and scan them using anOCRprogram (or conceivably enter it as atype-in programif OCR software was not available), creating a set of source code text files. One could then build the application using the freely availableGNU Compiler Collection. PGP would thus be available anywhere in the world. The claimed principle was simple: export ofmunitions—guns, bombs, planes, and software—was (and remains) restricted; but the export ofbooksis protected by theFirst Amendment. The question was never tested in court with respect to PGP. In cases addressing other encryption software, however, two federal appeals courts have established the rule that cryptographic software source code is speech protected by the First Amendment (theNinth Circuit Court of Appealsin theBernstein caseand theSixth Circuit Court of Appealsin theJunger case).
US export regulationsregarding cryptography remain in force, but were liberalized substantially throughout the late 1990s. Since 2000, compliance with the regulations is also much easier. PGP encryption no longer meets the definition of a non-exportable weapon, and can be exported internationally except to seven specific countries and a list of named groups and individuals[35](with whom substantially all US trade is prohibited under various US export controls).
The criminal investigation was dropped in 1996.[36]
During this turmoil, Zimmermann's team worked on a new version of PGP encryption called PGP 3. This new version was to have considerable security improvements, including a new certificate structure that fixed small security flaws in the PGP 2.x certificates as well as permitting a certificate to include separate keys for signing and encryption. Furthermore, the experience with patent and export problems led them to eschew patents entirely. PGP 3 introduced the use of theCAST-128(a.k.a. CAST5) symmetric key algorithm, and theDSAandElGamalasymmetric key algorithms, all of which were unencumbered by patents.
After the Federal criminal investigation ended in 1996, Zimmermann and his team started a company to produce new versions of PGP encryption. They merged with Viacrypt (to whom Zimmermann had sold commercial rights and who hadlicensedRSA directly fromRSADSI), which then changed its name to PGP Incorporated. The newly combined Viacrypt/PGP team started work on new versions of PGP encryption based on the PGP 3 system. Unlike PGP 2, which was an exclusivelycommand lineprogram, PGP 3 was designed from the start as asoftware libraryallowing users to work from a command line or inside aGUIenvironment. The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997.
In December 1997, PGP Inc. was acquired byNetwork Associates, Inc.("NAI"). Zimmermann and the PGP team became NAI employees. NAI was the first company to have a legal export strategy by publishing source code. Under NAI, the PGP team added disk encryption, desktop firewalls, intrusion detection, andIPsecVPNsto the PGP family. After the export regulation liberalizations of 2000 which no longer required publishing of source, NAI stopped releasing source code.[37]
In early 2001, Zimmermann left NAI. He served as Chief Cryptographer forHush Communications, who provide an OpenPGP-based e-mail service,Hushmail. He has also worked with Veridis and other companies. In October 2001, NAI announced that its PGP assets were for sale and that it was suspending further development of PGP encryption. The only remaining asset kept was the PGP E-Business Server (the original PGP Commandline version). In February 2002, NAI canceled all support for PGP products, with the exception of the renamed commandline product.[38][39]
NAI, now known asMcAfee, continued to sell and support the commandline product under the name McAfee E-Business Server until 2013.[40]In 2010,Intel CorporationacquiredMcAfee. In 2013, the McAfee E-Business Server was transferred to Software Diversified Services (SDS), which now sells, supports, and develops it under the name SDS E-Business Server.[40][38]
For the enterprise, Townsend Security currently[when?]offers a commercial version of PGP for theIBM iandIBM zmainframe platforms. Townsend Security partnered with Network Associates in 2000 to create a compatible version of PGP for the IBM i platform. Townsend Security again ported PGP in 2008, this time to the IBM z mainframe. This version of PGP relies on a free z/OS encryption facility, which utilizes hardware acceleration. SDS also offers a commercial version of PGP (SDS E-Business Server) for theIBM zmainframe.
In August 2002, several ex-PGP team members formed a new company,PGP Corporation, and bought the PGP assets (except for the command line version) from NAI. The new company was funded by Rob Theis of Doll Capital Management (DCM) and Terry Garnett of Venrock Associates. PGP Corporation supported existing PGP users and honored NAI's support contracts. Zimmermann served as a special advisor and consultant to PGP Corporation while continuing to run his own consulting company. In 2003, PGP Corporation created a new server-based product called PGP Universal. In mid-2004, PGP Corporation shipped its own command line version called PGP Command Line, which integrated with the other PGP Encryption Platform applications. In 2005, PGP Corporation made its first acquisition: theGermansoftware company Glück & Kanja Technology AG,[41]which became PGP Deutschland AG.[42]In 2010, PGP Corporation acquired Hamburg-based certificate authority TC TrustCenter and its parent company,ChosenSecurity, to form its PGP TrustCenter[43]division.[44]
After the 2002 purchase of NAI's PGP assets, PGP Corporation offered worldwide PGP technical support from its offices inDraper, Utah;Offenbach,Germany; andTokyo,Japan.
On April 29, 2010,Symantec Corp.announced that it would acquire PGP Corporation for $300 million with the intent of integrating it into its Enterprise Security Group.[45]This acquisition was finalized and announced to the public on June 7, 2010. The source code of PGP Desktop 10 is available for peer review.[46]
In May 2018, a bug named EFAIL was discovered in certain implementations of PGP which, from 2003, could reveal the plaintext contents of emails encrypted with it.[47][48] The chosen mitigation for this vulnerability in PGP Desktop is to mandate the use of SEIP-protected packets in the ciphertext, which can cause old emails or other encrypted objects to be no longer decryptable after upgrading to the software version that has the mitigation.[49]
On August 9, 2019,Broadcom Inc.announced they would be acquiring the Enterprise Security software division of Symantec, which includes PGP Corporation.
While originally used primarily for encrypting the contents of e-mail messages and attachments from a desktop client, PGP products have been diversified since 2002 into a set of encryption applications that can be managed by an optional central policy server. PGP encryption applications include e-mails and attachments, digital signatures, full disk encryption, file and folder security, protection for IM sessions, batch file transfer encryption, and protection for files and folders stored on network servers and, more recently, encrypted or signed HTTP request/responses by means of a client-side (Enigform) and a server-side (mod openpgp) module. There is also a WordPress plugin available, called wp-enigform-authentication, that takes advantage of the session management features of Enigform with mod_openpgp.
The PGP Desktop 9.x family includes PGP Desktop Email, PGP Whole Disk Encryption, and PGP NetShare. Additionally, a number of Desktop bundles are also available. Depending on the application, the products feature desktop e-mail, digital signatures, IM security, whole disk encryption, file and folder security, encrypted self-extracting archives, and secure shredding of deleted files. Capabilities are licensed in different ways depending on the features required.
The PGP Universal Server 2.x management console handles centralized deployment, security policy, policy enforcement, key management, and reporting. It is used for automated e-mail encryption in the gateway and manages PGP Desktop 9.x clients. In addition to its localkeyserver, PGP Universal Server works with the PGP public keyserver—called the PGP Global Directory—to find recipient keys. It has the capability of delivering e-mail securely when no recipient key is found via a secure HTTPS browser session.
With PGP Desktop 9.x managed by PGP Universal Server 2.x, first released in 2005, all PGP encryption applications are based on a new proxy-based architecture. These newer versions of PGP software eliminate the use of e-mail plug-ins and insulate the user from changes to other desktop applications. All desktop and server operations are now based on security policies and operate in an automated fashion. The PGP Universal server automates the creation, management, and expiration of keys, sharing these keys among all PGP encryption applications.
The Symantec PGP platform has now undergone a rename. PGP Desktop is now known as Symantec Encryption Desktop (SED), and the PGP Universal Server is now known as Symantec Encryption Management Server (SEMS). The current shipping versions are Symantec Encryption Desktop 10.3.0 (Windows and macOS platforms) and Symantec Encryption Server 3.3.2.
Also available are PGP Command-Line, which enables command line-based encryption and signing of information for storage, transfer, and backup, as well as the PGP Support Package for BlackBerry which enables RIM BlackBerry devices to enjoy sender-to-recipient messaging encryption.
New versions of PGP applications use both OpenPGP and S/MIME, allowing communications with any user of a NIST-specified standard.[50]
Within PGP Inc., there was still concern surrounding patent issues. RSADSI was challenging the continuation of the Viacrypt RSA license to the newly merged firm. The company adopted an informal internal standard that they called "Unencumbered PGP" which would "use no algorithm with licensing difficulties". Because of PGP encryption's importance worldwide, many wanted to write their own software that would interoperate with PGP 5. Zimmermann became convinced that anopen standardfor PGP encryption was critical for them and for the cryptographic community as a whole. In July 1997, PGP Inc. proposed to theIETFthat there be a standard called OpenPGP. They gave the IETF permission to use the name OpenPGP to describe this new standard as well as any program that supported the standard. The IETF accepted the proposal and started the OpenPGPWorking Group.
OpenPGP is on theInternet Standards Trackand is under active development. Many e-mail clients provide OpenPGP-compliant email security as described in RFC 3156. The current specification is RFC 9580 (July 2024), the successor to RFC 4880. RFC 9580 specifies a suite of required algorithms consisting ofX25519,Ed25519,SHA2-256andAES-128. In addition to these algorithms, the standard recommendsX448,Ed448,SHA2-384,SHA2-512andAES-256. Beyond these, many other algorithms are supported.
OpenPGP's encryption can ensure the secure delivery of files and messages, as well as provide verification of who created or sent the message using a process called digital signing. Theopen sourceoffice suiteLibreOfficeimplemented document signing with OpenPGP as of version 5.4.0 on Linux.[52]Using OpenPGP for communication requires participation by both the sender and recipient. OpenPGP can also be used to secure sensitive files when they are stored in vulnerable places like mobile devices or in the cloud.[53]
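The encrypt/decrypt and signing workflow described above can be sketched with the third-party python-gnupg wrapper around GnuPG (an assumption; this library is not part of any PGP product named above). The key parameters, e-mail address, passphrase, and home directory below are illustrative.

```python
import os
import gnupg

os.makedirs("/tmp/demo-gpg-home", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/demo-gpg-home")

# Generate a throwaway key pair for the demonstration.
key_input = gpg.gen_key_input(name_email="alice@example.com",
                              passphrase="demo-passphrase",
                              key_type="RSA", key_length=2048)
key = gpg.gen_key(key_input)

# Encrypt to Alice's public key, then decrypt with her private key.
ciphertext = gpg.encrypt("attack at dawn", key.fingerprint)
plaintext = gpg.decrypt(str(ciphertext), passphrase="demo-passphrase")
assert plaintext.data == b"attack at dawn"

# Digital signing: anyone holding the public key can verify the signature.
signed = gpg.sign("release notes", keyid=key.fingerprint,
                  passphrase="demo-passphrase")
assert gpg.verify(str(signed)).valid
```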
In late 2023, a schism occurred in the OpenPGP world: IETF's OpenPGP working group decided to choose a "crypto-refresh" update strategy for the RFC 4880 specification, rather than a more gradual "4880bis" path preferred by Werner Koch, author of GnuPG. As a result, Koch took his draft, now abandoned by the workgroup, and forked it into a "LibrePGP" specification.[9]
The Free Software Foundation has developed its own OpenPGP-compliant software suite called GNU Privacy Guard, freely available together with all source code under the GNU General Public License; it is maintained separately from several graphical user interfaces that interact with the GnuPG library for encryption, decryption, and signing functions (see KGPG, Seahorse, MacGPG). Several other vendors have also developed OpenPGP-compliant software.
The development of anopen sourceOpenPGP-compliant library, OpenPGP.js, written inJavaScriptand supported by theHorizon 2020 Framework Programmeof theEuropean Union,[54]has allowed web-based applications to use PGP encryption in the web browser.
PGP keys are supported inMozilla Thunderbird(Built-in in version 78 onwards on PC,[55]and with theOpenKeychainapp as of version 9 on Android[56]),GitHub,[57]andGitLab.[58]
With the advancement of cryptography, parts of PGP and OpenPGP have been criticized for being dated.
In October 2017, theROCA vulnerabilitywas announced, which affects RSA keys generated by buggy Infineon firmware used onYubikey4 tokens, often used with OpenPGP. Many published PGP keys were found to be susceptible.[60]Yubico offers free replacement of affected tokens.[61]
|
https://en.wikipedia.org/wiki/Pretty_Good_Privacy
|
A pseudonym (/ˈsjuːdənɪm/; from Ancient Greek ψευδώνυμος (pseudṓnumos) 'falsely named') or alias (/ˈeɪli.əs/) is a fictitious name that a person assumes for a particular purpose, which differs from their original or true name (orthonym).[1][2] This also differs from a new name that entirely or legally replaces an individual's own. Many pseudonym holders use them because they wish to remain anonymous and maintain privacy, though this may be difficult to achieve as a result of legal issues.[3]
Pseudonyms includestage names,user names,ring names,pen names, aliases,superheroor villain identities and code names, gamertags, andregnal namesof emperors, popes, and other monarchs. In some cases, it may also includenicknames. Historically, they have sometimes taken the form ofanagrams, Graecisms, andLatinisations.[4]
Pseudonyms should not be confused with new names that replace old ones and become the individual's full-time name. Pseudonyms are "part-time" names, used only in certain contexts: to provide a more clear-cut separation between one's private and professional lives, to showcase or enhance a particular persona, or to hide an individual's real identity, as with writers' pen names, graffiti artists' tags,resistance fighters'or terrorists'noms de guerre, computerhackers'handles, and otheronline identitiesfor services such associal media,online gaming, andinternet forums. Actors, musicians, and other performers sometimes usestage namesfor a degree of privacy, to better market themselves, and other reasons.[5]
In some cases, pseudonyms are adopted because they are part of a cultural or organisational tradition; for example,devotional namesare used by members of somereligious institutes,[6]and "cadre names" are used byCommunist partyleaders such asTrotskyandLenin.
Acollective nameorcollective pseudonymis one shared by two or more persons, for example, the co-authors of a work, such asCarolyn Keene,Erin Hunter,Ellery Queen,Nicolas Bourbaki, orJames S. A. Corey.
The termpseudonymis derived from the Greek word "ψευδώνυμον" (pseudṓnymon),[7]literally"false name", fromψεῦδος(pseûdos) 'lie, falsehood'[8]andὄνομα(ónoma) "name".[9]The termaliasis a Latinadverbmeaning "at another time, elsewhere".[10]
Sometimes people change their names in such a manner that the new name becomes permanent and is used by all who know the person. This is not an alias or pseudonym, but in fact a new name. In many countries, includingcommon lawcountries, a name change can be ratified by a court and become a person's new legal name.
Pseudonymous authors may still have their various identities linked together throughstylometricanalysis of their writing style. The precise degree of this unmasking ability and its ultimate potential is uncertain, but the privacy risks are expected to grow with improved analytic techniques andtext corpora. Authors may practiceadversarial stylometryto resist such identification.[11]
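As a rough illustration of the kind of signal stylometric analysis exploits, the toy sketch below compares two texts by their relative frequencies of common function words; real stylometric systems use far richer feature sets and statistical models, and the word list here is illustrative.

```python
from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two frequency profiles (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

text_a = "the ship sailed in the night and the crew was silent"
text_b = "the storm rose in the east and the sea was restless"
print(cosine(profile(text_a), profile(text_b)))
```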
Businesspersons of ethnic minorities in some parts of the world are sometimes advised by an employer to use a pseudonym that is common or acceptable in that area when conducting business, to overcome racial or religious bias.[12]
Criminals may use aliases,fictitious business names, anddummy corporations(corporate shells) to hide their identity, or to impersonate other persons or entities in order to commit fraud. Aliases and fictitious business names used for dummy corporations may become so complex that, in the words ofThe Washington Post, "getting to the truth requires a walk down a bizarre labyrinth" and multiple government agencies may become involved to uncover the truth.[13]Giving a false name to a law enforcement officer is a crime in many jurisdictions.
Apen nameis a pseudonym (sometimes a particular form of the real name) adopted by anauthor(or on the author's behalf by their publishers). English usage also includes the French-language phrasenom de plume(which in French literally means "pen name").[14]
The concept of pseudonymity has a long history. In ancient literature it was common to write in the name of a famous person, not for concealment or with any intention of deceit; in the New Testament, the second letter of Peter is probably such. A more modern example is all ofThe Federalist Papers, which were signed by Publius, a pseudonym representing the trio ofJames Madison,Alexander Hamilton, andJohn Jay. The papers were written partially in response to severalAnti-Federalist Papers, also written under pseudonyms. As a result of this pseudonymity, historians know that the papers were written by Madison, Hamilton, and Jay, but have not been able to discern with certainty which of the three authored a few of the papers. There are also examples of modern politicians and high-ranking bureaucrats writing under pseudonyms.[15][16]
Some female authors have used male pen names, in particular in the 19th century, when writing was a highly male-dominated profession. The Brontë sisters used pen names for their early work, so as not to reveal their gender (see below) and so that local residents would not suspect that the books related to people of their neighbourhood. Anne Brontë's The Tenant of Wildfell Hall (1848) was published under the name Acton Bell, while Charlotte Brontë used the name Currer Bell for Jane Eyre (1847) and Shirley (1849), and Emily Brontë adopted Ellis Bell as cover for Wuthering Heights (1847). Other examples from the 19th century are novelist Mary Ann Evans (George Eliot) and French writer Amandine Aurore Lucile Dupin (George Sand). Pseudonyms may also be used due to cultural, organizational, or political prejudices.
Similarly, some 20th- and 21st-century male romance novelists – a field dominated by women – have used female pen names.[17]A few examples are Brindle Chase,Peter O'Donnell(as Madeline Brent),Christopher Wood(as Penny Sutton and Rosie Dixon), andHugh C. Rae(as Jessica Sterling).[17]
A pen name may be used if a writer's real name is likely to be confused with the name of another writer or notable individual, or if the real name is deemed unsuitable.
Authors who write both fiction and non-fiction, or in different genres, may use different pen names to avoid confusing their readers. For example, the romance writerNora Robertswrites mystery novels under the nameJ. D. Robb.
In some cases, an author may become better known by their pen name than by their real name. Some famous examples include Samuel Clemens, writing as Mark Twain; Theodor Geisel, better known as Dr. Seuss; and Eric Arthur Blair (George Orwell). The British mathematician Charles Dodgson wrote fantasy novels as Lewis Carroll and mathematical treatises under his own name.
Some authors, such asHarold Robbins, use several literary pseudonyms.[18]
Some pen names have been used for long periods, even decades, without the author's true identity being discovered, as withElena FerranteandTorsten Krol.
Joanne Rowling[19]published theHarry Potterseries as J. K. Rowling. Rowling also published theCormoran Strikeseries of detective novels includingThe Cuckoo's Callingunder the pseudonym Robert Galbraith.
Winston Churchillwrote asWinston S. Churchill(from his full surname Spencer Churchill which he did not otherwise use) in an attempt to avoid confusion with anAmerican novelist of the same name. The attempt was not wholly successful – the two are still sometimes confused by booksellers.[20][21]
A pen name may be used specifically to hide the identity of the author, as with exposé books about espionage or crime, or explicit erotic fiction. Erwin von Busse used a pseudonym when he published short stories about sexually charged encounters between men in Germany in 1920.[22] Some prolific authors adopt a pseudonym to disguise the extent of their published output, e.g. Stephen King writing as Richard Bachman. Co-authors may choose to publish under a collective pseudonym, e.g., P. J. Tracy and Perri O'Shaughnessy. Frederic Dannay and Manfred Lee used the name Ellery Queen as a pen name for their collaborative works and as the name of their main character.[23] Asa Earl Carter, a Southern white segregationist affiliated with the KKK, wrote Western books under a fictional Cherokee persona to imply legitimacy and conceal his history.[24]
A famous case in French literature wasRomain Gary. Already a well-known writer, he started publishing books as Émile Ajar to test whether his new books would be well received on their own merits, without the aid of his established reputation. They were: Émile Ajar, like Romain Gary before him, was awarded the prestigiousPrix Goncourtby a jury unaware that they were the same person. Similarly, TV actorRonnie Barkersubmitted comedy material under the name Gerald Wiley.
A collective pseudonym may represent an entire publishing house, or any contributor to a long-running series, especially with juvenile literature. Examples includeWatty Piper,Victor Appleton,Erin Hunter, and Kamiru M. Xhan.
Another use of a pseudonym in literature is to present a story as being written by the fictional characters in the story. The series of novels known asA Series of Unfortunate Eventsare written byDaniel Handlerunder the pen name ofLemony Snicket, a character in the series. This applies also to some of the several 18th-century English and American writers who used the nameFidelia.
An anonymity pseudonym or multiple-use name is a name used by many different people to protect anonymity.[25] It is a strategy that has been adopted by many unconnected radical groups and by cultural groups, where the construct of personal identity has been criticised. This has led to the idea of the "open pop star", such as Monty Cantsin.
Pseudonyms andacronymsare often employed in medical research toprotect subjects' identitiesthrough a process known asde-identification.
Nicolaus Copernicusput forward his theory of heliocentrism in the manuscriptCommentariolusanonymously, in part because of his employment as a law clerk for achurch-government organization.[26]
Sophie GermainandWilliam Sealy Gossetused pseudonyms to publish their work in the field of mathematics – Germain, to avoid rampant 19th century academicmisogyny, and Gosset, to avoid revealing brewing practices of his employer, theGuinness Brewery.[27][28]
Satoshi Nakamotois a pseudonym of a still unknown author or authors' group behind awhite paperaboutbitcoin.[29][30][31][32]
While taking part in military activities, such as fighting in a war, the pseudonym might be known as anom de guerre. It is chosen by the person involved in the activity.[33][34]
Individuals using a computer online may adopt or be required to use a form of pseudonym known as a "handle" (a term deriving from CB slang), "username", "login name", "avatar", or, sometimes, "screen name", "gamertag", "IGN (in-game name)" or "nickname". On the Internet, pseudonymous remailers use cryptography that achieves persistent pseudonymity, so that two-way communication can be achieved, and reputations can be established, without linking physical identities to their respective pseudonyms. Aliasing is the use of multiple names for the same data location.
More sophisticated cryptographic systems, such as anonymousdigital credentials, enable users to communicate pseudonymously (i.e., by identifying themselves by means of pseudonyms). In well-defined abuse cases, a designated authority may be able to revoke the pseudonyms and reveal the individuals' real identity.[citation needed]
Use of pseudonyms is common among professionaleSportsplayers, despite the fact that many professional games are played onLAN.[35]
Pseudonymity has become an important phenomenon on the Internet and other computer networks. In computer networks, pseudonyms possess varying degrees of anonymity,[36] ranging from highly linkable public pseudonyms (the link between the pseudonym and a human being is publicly known or easy to discover), through potentially linkable non-public pseudonyms (the link is known to system operators but is not publicly disclosed), to unlinkable pseudonyms (the link is not known to system operators and cannot be determined).[37] For example, true anonymous remailers enable Internet users to establish unlinkable pseudonyms; those that employ non-public pseudonyms (such as the now-defunct Penet remailer) are called pseudonymous remailers.
The continuum of unlinkability can also be seen, in part, on Wikipedia. Some registered users make no attempt to disguise their real identities (for example, by placing their real name on their user page). The pseudonym of unregistered users is theirIP address, which can, in many cases, easily be linked to them. Other registered users prefer to remain anonymous, and do not disclose identifying information. However, in certain cases,Wikipedia's privacy policypermits system administrators to consult the server logs to determine the IP address, and perhaps the true name, of a registered user. It is possible, in theory, to create an unlinkable Wikipedia pseudonym by using anOpen proxy, a Web server that disguises the user's IP address. But most open proxy addresses are blocked indefinitely due to their frequent use by vandals. Additionally, Wikipedia's public record of a user's interest areas, writing style, and argumentative positions may still establish an identifiable pattern.[38][39]
System operators (sysops) at sites offering pseudonymity, such as Wikipedia, are not likely to build unlinkability into their systems, as this would render them unable to obtain information about abusive users quickly enough to stop vandalism and other undesirable behaviors. Law enforcement personnel, fearing an avalanche of illegal behavior, are equally unenthusiastic.[40]Still, some users and privacy activists like theAmerican Civil Liberties Unionbelieve that Internet users deserve stronger pseudonymity so that they can protect themselves against identity theft, illegal government surveillance, stalking, and other unwelcome consequences of Internet use (includingunintentional disclosures of their personal informationanddoxing, as discussed in the next section). Their views are supported by laws in some nations (such as Canada) that guarantee citizens a right to speak using a pseudonym.[41]This right does not, however, give citizens the right to demand publication of pseudonymous speech on equipment they do not own.
Most Web sites that offer pseudonymity retain information about users. These sites are often susceptible to unauthorized intrusions into their non-public database systems. For example, in 2000, a Welsh teenager obtained information about more than 26,000 credit card accounts, including that of Bill Gates.[42][43]In 2003, VISA and MasterCard announced that intruders obtained information about 5.6 million credit cards.[44]Sites that offer pseudonymity are also vulnerable to confidentiality breaches. In a study of a Web dating service and apseudonymous remailer,University of Cambridgeresearchers discovered that the systems used by these Web sites to protect user data could be easily compromised, even if the pseudonymous channel is protected by strong encryption. Typically, the protected pseudonymous channel exists within a broader framework in which multiple vulnerabilities exist.[45]Pseudonym users should bear in mind that, given the current state of Web security engineering, their true names may be revealed at any time.
Pseudonymity is an important component of the reputation systems found in online auction services (such aseBay), discussion sites (such asSlashdot), and collaborative knowledge development sites (such asWikipedia). A pseudonymous user who has acquired a favorable reputation gains the trust of other users. When users believe that they will be rewarded by acquiring a favorable reputation, they are more likely to behave in accordance with the site's policies.[46]
If users can obtain new pseudonymous identities freely or at a very low cost, reputation-based systems are vulnerable to whitewashing attacks,[47]also calledserial pseudonymity, in which abusive users continuously discard their old identities and acquire new ones in order to escape the consequences of their behavior: "On the Internet, nobody knows that yesterday you were a dog, and therefore should be in the doghouse today."[48]Users of Internet communities who have been banned only to return with new identities are calledsock puppets. Whitewashing is one specific form of aSybil attackon distributed systems.
The social cost of cheaply discarded pseudonyms is that experienced users lose confidence in new users,[51]and may subject new users to abuse until they establish a good reputation.[48]System operators may need to remind experienced users that most newcomers are well-intentioned (see, for example,Wikipedia's policy about biting newcomers). Concerns have also been expressed about sock puppets exhausting the supply of easily remembered usernames. In addition a recent research paper demonstrated that people behave in a potentially more aggressive manner when using pseudonyms/nicknames (due to theonline disinhibition effect) as opposed to being completely anonymous.[52][53]In contrast, research by the blog comment hosting serviceDisqusfound pseudonymous users contributed the "highest quantity and quality of comments", where "quality" is based on an aggregate of likes, replies, flags, spam reports, and comment deletions,[49][50]and found that users trusted pseudonyms and real names equally.[54]
Researchers at the University of Cambridge showed that pseudonymous comments tended to be more substantive and engaged with other users in explanations, justifications, and chains of argument, and less likely to use insults, than either fully anonymous or real name comments.[55]Proposals have been made to raise the costs of obtaining new identities, such as by charging a small fee or requiring e-mail confirmation. Academic research has proposed cryptographic methods to pseudonymize social media identities[56]or government-issued identities,[57]to accrue and useanonymous reputationin online forums,[58]or to obtain one-per-person and hence less readily-discardable pseudonyms periodically at physical-worldpseudonym parties.[59]Others point out that Wikipedia's success is attributable in large measure to its nearly non-existent initial participation costs.
People seeking privacy often use pseudonyms to make appointments and reservations.[60]Those writing toadvice columnsin newspapers and magazines may use pseudonyms.[61]Steve Wozniakused a pseudonym when attending theUniversity of California, Berkeleyafter co-foundingApple Computer, because "[he] knew [he] wouldn't have time enough to be an A+ student."[62]
When used by an actor, musician, radio disc jockey, model, or other performer or "show business" personality a pseudonym is called astage name, or, occasionally, aprofessional name, orscreen name.
Members of a marginalized ethnic or religious group have often adopted stage names, typically changing their surname or entire name to mask their original background.
Stage names are also used to create a more marketable name, as in the case of Creighton Tull Chaney, who adopted the pseudonymLon Chaney Jr., a reference to his famous fatherLon Chaney.
Chris CurtisofDeep Purplefame was christened as Christopher Crummey ("crummy" is UK slang for poor quality). In this and similar cases a stage name is adopted simply to avoid an unfortunate pun.
Pseudonyms are also used to comply with the rules of performing-arts guilds (Screen Actors Guild (SAG), Writers Guild of America, East (WGA), AFTRA, etc.), which do not allow performers to use an existing name, in order to avoid confusion. For example, these rules required film and television actor Michael Fox to add a middle initial and become Michael J. Fox, to avoid being confused with another actor named Michael Fox. This was also true of author and actress Fannie Flagg, who shared her real name, Patricia Neal, with another well-known actress; Rick Copp, who chose the pseudonym Richard Hollis, which is also the name of a character in the anthology TV series Femme Fatales; and British actor Stewart Granger, whose real name was James Stewart. The film-making team of Joel and Ethan Coen, for instance, share credit for editing under the alias Roderick Jaynes.[63]
Some stage names are used to conceal a person's identity, such as the pseudonymAlan Smithee, which was used by directors in theDirectors Guild of America(DGA) to remove their name from a film they feel was edited or modified beyond their artistic satisfaction. In theatre, the pseudonymsGeorge or Georgina Spelvin, andWalter Plingeare used to hide the identity of a performer, usually when he or she is "doubling" (playing more than one role in the same play).
David Agnew was a name used by the BBC to conceal the identity of a scriptwriter, such as for the Doctor Who serial City of Death, which had three writers, including Douglas Adams, who was at the time of writing the show's script editor.[64] In another Doctor Who serial, The Brain of Morbius, writer Terrance Dicks demanded the removal of his name from the credits, saying it could go out under a "bland pseudonym".[65] This ended up as "Robin Bland".[65][66]
Pornographic actors regularly use stage names.[67][68][69]Sometimes these are referred to asnom de porn(like withnom de plume, this is English-language users creating a French-language phrase to use in English). Having acted in pornographic films can be a serious detriment to finding another career.[70][71]
Musicians and singers can use pseudonyms to collaborate with artists on other labels while avoiding the need to gain permission from their own labels, as did the artist Jerry Samuels, who recorded as Napoleon XIV. Rock singer-guitarist George Harrison, for example, played guitar on Cream's song "Badge" using a pseudonym.[72] In classical music, some record companies issued recordings under a nom de disque in the 1950s and 1960s to avoid paying royalties. A number of popular budget LPs of piano music were released under the pseudonym Paul Procopolis.[73] Another example is that Paul McCartney used the pen name "Bernard Webb" for Peter and Gordon's song "Woman".[74]
Pseudonyms are used as stage names inheavy metalbands, such asTracii GunsinLA Guns,Axl RoseandSlashinGuns N' Roses,Mick MarsinMötley Crüe,Dimebag DarrellinPantera, orC.C. DevilleinPoison. Some such names have additional meanings, like that of Brian Hugh Warner, more commonly known asMarilyn Manson: Marilyn coming fromMarilyn Monroeand Manson from convicted serial killerCharles Manson.Jacoby ShaddixofPapa Roachwent under the name "Coby Dick" during theInfestera. He changed back to his birth name whenlovehatetragedywas released.
David Johansen, front man for the hard rock bandNew York Dolls, recorded and performed pop and lounge music under the pseudonym Buster Poindexter in the late 1980s and early 1990s. The music video for Poindexter's debut single,Hot Hot Hot, opens with a monologue from Johansen where he notes his time with the New York Dolls and explains his desire to create more sophisticated music.
Ross Bagdasarian Sr., creator ofAlvin and the Chipmunks, wrote original songs, arranged, and produced the records under his real name, but performed on them asDavid Seville. He also wrote songs as Skipper Adams. Danish pop pianistBent Fabric, whose full name is Bent Fabricius-Bjerre, wrote his biggest instrumental hit "Alley Cat" as Frank Bjorn.
For a time, the musicianPrinceused an unpronounceable "Love Symbol" as a pseudonym ("Prince" is his actual first name rather than a stage name). He wrote the song "Sugar Walls" forSheena Eastonas "Alexander Nevermind" and "Manic Monday" forthe Banglesas "Christopher Tracy". (He also produced albums early in his career as "Jamie Starr").
Many Italian-American singers have used stage names, as their birth names were difficult to pronounce or considered too ethnic for American tastes. Singers changing their names included Dean Martin (born Dino Paul Crocetti), Connie Francis (born Concetta Franconero), Frankie Valli (born Francesco Castelluccio), Tony Bennett (born Anthony Benedetto), and Lady Gaga (born Stefani Germanotta).
In 2009, the British rock bandFeederbriefly changed their name toRenegadesso they could play a whole show featuring a set list in which 95 per cent of the songs played were from their forthcoming new album of the same name, with none of their singles included. Front manGrant Nicholasfelt that if they played as Feeder, there would be uproar over his not playing any of the singles, so used the pseudonym as a hint. A series of small shows were played in 2010, at 250- to 1,000-capacity venues with the plan not to say who the band really are and just announce the shows as if they were a new band.
In many cases, hip-hop and rap artists prefer to use pseudonyms that represent some variation of their name, personality, or interests. Examples include Iggy Azalea (her stage name is a combination of her dog's name, Iggy, and her home street in Mullumbimby, Azalea Street), Ol' Dirty Bastard (known under at least six aliases), Diddy (previously known at various times as Puffy, P. Diddy, and Puff Daddy), Ludacris, Flo Rida (whose stage name is a tribute to his home state, Florida), British-Jamaican hip-hop artist Stefflon Don (real name Stephanie Victoria Allen), LL Cool J, and Chingy. Black metal artists also adopt pseudonyms, usually symbolizing dark values, such as Nocturno Culto, Gaahl, Abbath, and Silenoz. In punk and hardcore punk, singers and band members often replace real names with tougher-sounding stage names such as Sid Vicious of the late 1970s band Sex Pistols and "Rat" of the early 1980s band The Varukers and the 2000s re-formation of Discharge. The punk rock band The Ramones had every member take the last name of Ramone.
Henry John Deutschendorf Jr., an American singer-songwriter, used the stage nameJohn Denver. The Australian country musician born Robert Lane changed his name toTex Morton. Reginald Kenneth Dwight legally changed his name in 1972 toElton John.
|
https://en.wikipedia.org/wiki/Pseudonym
|
Inpublic-key cryptography, apublic key fingerprintis a short sequence ofbytesused to identify a longerpublic key. Fingerprints are created by applying acryptographic hash functionto a public key. Since fingerprints are shorter than the keys they refer to, they can be used to simplify certain key management tasks. InMicrosoftsoftware, "thumbprint" is used instead of "fingerprint."
A public key fingerprint is typically created through the following steps:
1. A public key (and optionally some additional data) is encoded into a sequence of bytes. To ensure that the same fingerprint can be recreated later, the encoding must be deterministic, and any additional data must be exchanged and stored alongside the public key.
2. The encoded data is hashed with a cryptographic hash function such as SHA-1 or SHA-2.
3. If desired, the hash function output can be truncated to provide a shorter, more convenient fingerprint.
This process produces a short fingerprint which can be used to authenticate a much larger public key. For example, whereas a typicalRSApublic key will be 2048 bits in length or longer, typicalMD5orSHA-1fingerprints are only 128 or 160 bits in length.
When displayed for human inspection, fingerprints are usually encoded into hexadecimal strings. These strings are then formatted into groups of characters for readability. For example, a 128-bit MD5 fingerprint for SSH would be displayed as 16 colon-separated pairs of hexadecimal digits.
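As a minimal sketch of these computations and display formats, the following assumes the public key is already available as a deterministic byte encoding; the key bytes below are placeholders, not a real key.

```python
import base64
import hashlib

def md5_fingerprint(key_bytes: bytes) -> str:
    """Legacy SSH-style display: 16 colon-separated hex byte pairs."""
    digest = hashlib.md5(key_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def sha256_fingerprint(key_bytes: bytes) -> str:
    """Modern OpenSSH-style display: unpadded base64 of a SHA-256 digest."""
    digest = hashlib.sha256(key_bytes).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

key = b"placeholder public-key bytes"
print(md5_fingerprint(key))     # 32 hex digits grouped in pairs
print(sha256_fingerprint(key))  # "SHA256:" followed by base64 text
```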
When a public key is received over an untrusted channel, such as theInternet, the recipient often wishes toauthenticatethe public key. Fingerprints can help accomplish this, since their small size allows them to be passed over trusted channels where public keys won't easily fit.
For example, if Alice wishes to authenticate a public key as belonging to Bob, she can contact Bob over the phone or in person and ask him to read his fingerprint to her, or give her a scrap of paper with the fingerprint written down. Alice can then check that this trusted fingerprint matches the fingerprint of the public key. Exchanging and comparing values like this is much easier if the values are short fingerprints instead of long public keys.
Fingerprints can also be useful when automating the exchange or storage of key authentication data. For example, if key authentication data needs to be transmitted through a protocol or stored in adatabasewhere the size of a full public key is a problem, then exchanging or storing fingerprints may be a more viable solution.
In addition, fingerprints can be queried with search engines in order to ensure that the public key that a user just downloaded can be seen by third party search engines. If the search engine returns hits referencing the fingerprint linked to the proper site(s), one can feel more confident that the key is not being injected by an attacker, such as aMan-in-the-middle attack.
PGPdeveloped thePGP word listto facilitate the exchange of public key fingerprints over voice channels.
In systems such as SSH, users can exchange and check fingerprints manually to perform key authentication. Once a user has accepted another user's fingerprint, that fingerprint (or the key it refers to) will be stored locally along with a record of the other user's name or address, so that future communications with that user can be automatically authenticated.
In systems such as X.509-basedPKI, fingerprints are primarily used to authenticate root keys. These root keys issue certificates which can be used to authenticate user keys. This use of certificates eliminates the need for manual fingerprint verification between users.
In systems such asPGPorGroove, fingerprints can be used for either of the above approaches: they can be used to authenticate keys belonging to other users, or keys belonging to certificate-issuing authorities. In PGP, normal users can issue certificates to each other, forming aweb of trust, and fingerprints are often used to assist in this process (e.g., atkey-signing parties).
In systems such asCGAorSFSand most cryptographicpeer-to-peer networks, fingerprints are embedded into pre-existing address and name formats (such asIPv6addresses,file namesor other identification strings). If addresses and names are already being exchanged through trusted channels, this approach allows fingerprints to piggyback on them.[1]
In PGP, most keys are created in such a way that what is called the "key ID" is equal to the lower 32 or 64 bits respectively of a key fingerprint. PGP uses key IDs to refer to public keys for a variety of purposes. These are not, properly speaking, fingerprints, since their short length prevents them from being able to securely authenticate a public key. 32-bit key IDs should not be used, as current hardware can generate a colliding 32-bit key ID in just 4 seconds.[2]
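The relationship between a fingerprint and its key IDs can be sketched as follows; the 40-hex-digit value stands in for a 160-bit V4 fingerprint and is a placeholder, not a real key's.

```python
# Illustrative only: PGP key IDs are the low-order bits of the fingerprint.
fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder value

long_key_id = fingerprint[-16:]   # lower 64 bits
short_key_id = fingerprint[-8:]   # lower 32 bits; collisions are trivial
print(long_key_id, short_key_id)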
The primary threat to the security of a fingerprint is asecond-preimage attack, where an attacker constructs a key pair whose public key hashes to a fingerprint that matches the victim's fingerprint. The attacker could then present his public key in place of the victim's public key to masquerade as the victim.
A secondary threat to some systems is acollision attack, where an attacker constructs multiple key pairs which hash to his own fingerprint. This may allow an attacker to repudiate signatures he has created, or cause other confusion.
To prevent preimage attacks, the cryptographic hash function used for a fingerprint should possess the property of second preimage resistance. If collision attacks are a threat, the hash function should also possess the property of collision-resistance. While it is acceptable to truncate hash function output for the sake of shorter, more usable fingerprints, the truncated fingerprints must be long enough to preserve the relevant properties of the hash function againstbrute-force searchattacks.
In practice, most fingerprints commonly used today are based on non-truncated MD5 or SHA-1 hashes. As of 2017, collisions but not preimages can be found in MD5 and SHA-1. The future is therefore likely to bring increasing use of newer hash functions such asSHA-256. However, fingerprints based on SHA-256 and other hash functions with long output lengths are more likely to be truncated than (relatively short) MD5 or SHA-1 fingerprints.
In situations where fingerprint length must be minimized at all costs, fingerprint security can be boosted by increasing the cost of calculating the fingerprint. For example, in the context ofCryptographically Generated Addresses, this is called "Hash Extension" and requires anyone calculating a fingerprint to search for ahashsumstarting with a fixed number of zeroes,[3]which is assumed to be an expensive operation.
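A toy sketch of the "Hash Extension" idea follows: search for a modifier value so that the resulting hash begins with a fixed number of zero bytes. Names and parameters are illustrative, and for simplicity the sketch counts whole zero bytes rather than bits; the actual CGA generation procedure is specified in RFC 3972.

```python
import hashlib

def find_modifier(key_bytes: bytes, zero_bytes: int) -> int:
    """Brute-force a modifier whose hash has `zero_bytes` leading zero bytes."""
    modifier = 0
    while True:
        digest = hashlib.sha256(modifier.to_bytes(8, "big") + key_bytes).digest()
        if digest[:zero_bytes] == b"\x00" * zero_bytes:
            return modifier
        modifier += 1

# Each extra required zero byte multiplies the expected search cost by 256.
print(find_modifier(b"placeholder public-key bytes", 2))
```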
|
https://en.wikipedia.org/wiki/Public_key_fingerprint
|
TheSecure Shell Protocol(SSH Protocol) is acryptographicnetwork protocolfor operatingnetwork servicessecurely over an unsecured network.[1]Its most notable applications are remoteloginandcommand-lineexecution.
SSH was designed forUnix-likeoperating systems as a replacement forTelnetandunsecuredremoteUnix shellprotocols, such as the BerkeleyRemote Shell(rsh) and the relatedrloginandrexecprotocols, which all use insecure,plaintextmethods of authentication, likepasswords.
Since mechanisms like Telnet and Remote Shell are designed to access and operate remote computers, sending the authentication tokens (e.g. username and password) for this access to these computers across a public network in an unsecured way poses a great risk of third parties obtaining the password and achieving the same level of access to the remote system as the telnet user. Secure Shell mitigates this risk through the use of encryption mechanisms that are intended to hide the contents of the transmission from an observer, even if the observer has access to the entire data stream.[2]
Finnish computer scientist Tatu Ylönen designed SSH in 1995 and provided an implementation in the form of two commands,sshandslogin, as secure replacements forrshandrlogin, respectively. Subsequent development of the protocol suite proceeded in several developer groups, producing several variants of implementation. The protocol specification distinguishes two major versions, referred to as SSH-1 and SSH-2. The most commonly implemented software stack isOpenSSH, released in 1999 as open-source software by theOpenBSDdevelopers. Implementations are distributed for all types of operating systems in common use, including embedded systems.
SSH applications are based on aclient–serverarchitecture, connecting anSSH clientinstance with anSSH server.[3]SSH operates as a layered protocol suite comprising three principal hierarchical components: thetransport layerprovides server authentication, confidentiality, and integrity; theuser authentication protocolvalidates the user to the server; and theconnection protocolmultiplexes the encrypted tunnel into multiple logical communication channels.[1]
SSH usespublic-key cryptographytoauthenticatethe remote computer and allow it to authenticate the user, if necessary.[3]
SSH may be used in several methodologies. In the simplest manner, both ends of a communication channel use automatically generated public-private key pairs to encrypt a network connection, and then use apasswordto authenticate the user.
When the public-private key pair is generated by the user manually, the authentication is essentially performed when the key pair is created, and a session may then be opened automatically without a password prompt. In this scenario, the public key is placed on all computers that must allow access to the owner of the matching private key, which the owner keeps private. While authentication is based on the private key, the key is never transferred through the network during authentication. SSH only verifies that the same person offering the public key also owns the matching private key.
In all versions of SSH it is important to verify unknownpublic keys, i.e.associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user.
OnUnix-likesystems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file~/.ssh/authorized_keys.[4]This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase.
SSH can also look for the private key in standard places, or its full path can be specified as a command line setting (the option -i for ssh). The ssh-keygen utility produces the public and private keys, always in pairs.
SSH is typically used to log into a remote computer'sshellorcommand-line interface(CLI) and to execute commands on a remote server. It also supports mechanisms fortunneling,forwardingofTCP portsandX11connections and it can be used to transfer files using the associatedSSH File Transfer Protocol(SFTP) orSecure Copy Protocol(SCP).[3]
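As a sketch of typical client usage, the following uses the third-party Paramiko library (an assumption; Paramiko is not part of OpenSSH) to run a command over SSH with public-key authentication. The host, user, and key path are illustrative.

```python
import paramiko

client = paramiko.SSHClient()
# Verify the server against known host keys rather than blindly trusting it;
# this is where the host-key checking described above takes place.
client.load_system_host_keys()
client.connect("example.com", username="alice",
               key_filename="/home/alice/.ssh/id_ed25519")
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```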
SSH uses theclient–server model. An SSHclientprogram is typically used for establishing connections to an SSHdaemon, such as sshd, accepting remote connections. Both are commonly present on most modernoperating systems, includingmacOS, most distributions ofLinux,OpenBSD,FreeBSD,NetBSD,SolarisandOpenVMS. Notably, versions ofWindowsprior to Windows 10 version 1709 do not include SSH by default, butproprietary,freewareandopen sourceversions of various levels of complexity and completeness did and do exist (seeComparison of SSH clients). In 2018Microsoftbegan porting theOpenSSHsource code to Windows[5]and inWindows 10 version 1709, an official Win32 port of OpenSSH is now available.
File managers for UNIX-like systems (e.g.Konqueror) can use theFISHprotocol to provide a split-pane GUI with drag-and-drop. The open source Windows programWinSCP[6]provides similar file management (synchronization, copy, remote delete) capability using PuTTY as a back-end. Both WinSCP[7]and PuTTY[8]are available packaged to run directly off a USB drive, without requiring installation on the client machine. Crostini onChromeOScomes with OpenSSH by default. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app.
SSH is important incloud computingto solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall to a virtual machine.[9]
TheIANAhas assignedTCPport22,UDPport 22 andSCTPport 22 for this protocol.[10]IANA had listed the standard TCP port 22 for SSH servers as one of thewell-known portsas early as 2001.[11]SSH can also be run usingSCTPrather than TCP as the connection oriented transport layer protocol.[12]
In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology in Finland, designed the first version of the protocol (now called SSH-1), prompted by a password-sniffing attack at his university network.[13] The goal of SSH was to replace the earlier rlogin, TELNET, FTP[14] and rsh protocols, which provided neither strong authentication nor confidentiality guarantees. He chose the port number 22 because it is between telnet (port 23) and ftp (port 21).[15]
Ylönen released his implementation asfreewarein July 1995, and the tool quickly gained in popularity. Towards the end of 1995, the SSH user base had grown to 20,000 users in fifty countries.[16]
In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces offree software, such asGNU libgmp, but later versions released by SSH Communications Security evolved into increasinglyproprietary software.
It was estimated that by 2000 the number of users had grown to 2 million.[17]
In 2006, after being discussed in a working group named "secsh",[18] a revised version of the SSH protocol, SSH-2, was adopted as a standard.[19] This version offers improved security and new features, but is not compatible with SSH-1. For example, it introduces new key-exchange mechanisms like Diffie–Hellman key exchange, and improved data integrity checking via message authentication codes such as HMAC-MD5 or HMAC-SHA-1, which can be negotiated between client and server. SSH-2 also adds stronger encryption methods like AES which eventually replaced weaker and compromised ciphers from the previous standard like 3DES.[20][21][19] New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection.[22] Due to SSH-2's superiority and popularity over SSH-1, some implementations such as libssh (v0.8.0+),[23] Lsh[24] and Dropbear[25] eventually supported only the SSH-2 protocol.
In January 2006, well after version 2.1 was established,RFC4253specified that an SSH server supporting 2.0 as well as prior versions should identify its protocol version as 1.99.[26]This version number does not reflect a historical software revision, but a method to identifybackward compatibility.
In 1999, developers, desiring availability of a free software version, restarted software development from the 1.2.12 release of the original SSH program, which was the last released under anopen source license.[27]This served as a code base for Björn Grönvall's OSSH software.[28]Shortly thereafter,OpenBSDdevelopersforkedGrönvall's code and createdOpenSSH, which shipped with Release 2.6 of OpenBSD. From this version, a "portability" branch was formed to port OpenSSH to other operating systems.[29]
As of 2005, OpenSSH was the single most popular SSH implementation, being the default version in a large number of operating system distributions. OSSH meanwhile has become obsolete.[30] OpenSSH continues to be maintained and supports the SSH-2 protocol, having expunged SSH-1 support from the codebase in the OpenSSH 7.6 release.
In 2023, an alternative to traditional SSH was proposed under the name SSH3[31][32][33] by PhD student François Michel and Professor Olivier Bonaventure, and its code has been made open source.[34] This new version implements the original SSH Connection Protocol but operates on top of HTTP/3, which runs on QUIC, and offers several new features.
However, the name SSH3 is under discussion, and the project aims to rename itself to a more suitable name.[35]The discussion stems from the fact that this new implementation significantly revises the SSH protocol, suggesting it should not be called SSH3.
SSH is a protocol that can be used for many applications across many platforms including mostUnixvariants (Linux, theBSDsincludingApple'smacOS, andSolaris), as well asMicrosoft Windows. Some of the applications below may require features that are only available or compatible with specific SSH clients or servers. For example, using the SSH protocol to implement aVPNis possible, but presently only with theOpenSSHserver and client implementation.
The Secure Shell protocols are used in several file transfer mechanisms.
The SSH protocol has a layered architecture with three separate components: the transport layer, which provides server authentication, confidentiality, and integrity; the user authentication protocol, which validates the user to the server; and the connection protocol, which multiplexes the encrypted tunnel into multiple logical communication channels.
This open architecture provides considerable flexibility, allowing the use of SSH for a variety of purposes beyond a secure shell. The functionality of the transport layer alone is comparable toTransport Layer Security(TLS); the user-authentication layer is highly extensible with custom authentication methods; and the connection layer provides the ability to multiplex many secondary sessions into a single SSH connection, a feature comparable toBEEPand not available in TLS.
In 1998, a vulnerability was described in SSH 1.5 which allowed the unauthorized insertion of content into an encrypted SSH stream due to insufficient data integrity protection fromCRC-32used in this version of the protocol.[42][43]A fix known as SSH Compensation Attack Detector[44]was introduced into most implementations. Many of these updated implementations contained a newinteger overflowvulnerability[45]that allowed attackers to execute arbitrary code with the privileges of the SSH daemon, typically root.
In January 2001 a vulnerability was discovered that allows attackers to modify the last block of anIDEA-encrypted session.[46]The same month, another vulnerability was discovered that allowed a malicious server to forward a client authentication to another server.[47]
Since SSH-1 has inherent design flaws which make it vulnerable, it is now generally considered obsolete and should be avoided by explicitly disabling fallback to SSH-1.[47]Most modern servers and clients support SSH-2.[48]
In November 2008, a theoretical vulnerability was discovered for all versions of SSH which allowed recovery of up to 32 bits of plaintext from a block of ciphertext that was encrypted using what was then the standard default encryption mode,CBC.[49]The most straightforward solution is to useCTR, counter mode, instead of CBC mode, since this renders SSH resistant to the attack.[49]
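A sketch of the recommended mode follows, using the third-party Python "cryptography" package (an assumption; this is not SSH's own code). AES in CTR mode turns the block cipher into a stream cipher, which avoids the CBC plaintext-recovery issue described above.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)   # AES-256 key, fresh CTR nonce
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"example session data") + encryptor.finalize()

# Decryption reuses the same key and nonce; CTR mode is its own inverse.
decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"example session data"
```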
On December 28, 2014Der Spiegelpublished classified information[50]leaked by whistleblowerEdward Snowdenwhich suggests that theNational Security Agencymay be able to decrypt some SSH traffic. The technical details associated with such a process were not disclosed. A 2017 analysis of theCIAhacking toolsBothanSpyandGyrfalconsuggested that the SSH protocol was not compromised.[51]
A novel man-in-the-middle attack against most current ssh implementations was discovered in 2023. It was named theTerrapin attackby its discoverers.[52][53]However, the risk is mitigated by the requirement to intercept a genuine ssh session, and that the attack is restricted in its scope, fortuitously resulting mostly in failed connections.[54][55]The ssh developers have stated that the major impact of the attack is to degrade thekeystroke timingobfuscation features of ssh.[55]The vulnerability was fixed in OpenSSH 9.6, but requires both client and server to be upgraded for the fix to be fully effective.
The followingRFCpublications by theIETF"secsh"working groupdocument SSH-2 as a proposedInternet standard.
The protocol specifications were later updated by subsequent RFC publications.
In addition, the OpenSSH project maintains several vendor protocol specifications and extensions.
|
https://en.wikipedia.org/wiki/Secure_Shell
|
Athreshold cryptosystem, the basis for the field ofthreshold cryptography, is acryptosystemthat protects information by encrypting it and distributing it among a cluster of fault-tolerant computers. The message is encrypted using apublic key, and the corresponding private key issharedamong the participating parties. With a threshold cryptosystem, in order to decrypt an encrypted message or to sign a message, several parties (more than some threshold number) must cooperate in the decryption or signatureprotocol.
Perhaps the first system with complete threshold properties for atrapdoor function(such asRSA) and a proof of security was published in 1994 by Alfredo De Santis, Yvo Desmedt, Yair Frankel, andMoti Yung.[1]
Historically, only organizations with very valuable secrets, such ascertificate authorities, the military, and governments made use of this technology. One of the earliest implementations was done in the 1990s byCertcofor the planned deployment of the originalSecure electronic transaction.[2]However, in October 2012, after a number of large public website password ciphertext compromises,RSA Securityannounced that it would release software to make the technology available to the general public.[3]
In March 2019, the National Institute of Standards and Technology (NIST) conducted a workshop on threshold cryptography to establish consensus on applications, and define specifications.[4]In July 2020, NIST published "Roadmap Toward Criteria for Threshold Schemes for Cryptographic Primitives" as NISTIR 8214A.[5]
Let $n$ be the number of parties. Such a system is called $(t, n)$-threshold if at least $t$ of these parties can efficiently decrypt the ciphertext, while fewer than $t$ have no useful information. Similarly, it is possible to define a $(t, n)$-threshold signature scheme, where at least $t$ parties are required for creating a signature.[6]
The most common application is in the storage of secrets in multiple locations to prevent the capture of the secret and the subsequent cryptanalysis of that system. Most often the secrets that are "split" are the secret key material of a public-key cryptography or digital signature scheme. The method primarily enforces that the decryption or signing operation can take place only if a threshold of the secret sharers cooperates (otherwise the operation is not made). This makes the method a primary trust-sharing mechanism, besides its safety-of-storage aspects.
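The "splitting" of secret material can be illustrated with Shamir's secret sharing, a common building block for threshold schemes (not named in the text above). The toy sketch below shares an integer secret with a (3, 5) threshold over a prime field. It is illustrative only: a real threshold cryptosystem uses the shares in a distributed decryption or signing protocol and never reconstructs the private key at a single party.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret: int, t: int, n: int):
    """Evaluate a random degree-(t-1) polynomial with f(0)=secret at n points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```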
Threshold versions of encryption or signature schemes can be built for many asymmetric cryptographic schemes. The natural goal of such schemes is to be as secure as the original scheme. Such threshold versions have been defined by the above work and by subsequent publications.[7]
|
https://en.wikipedia.org/wiki/Threshold_cryptosystem
|
Incryptography, acryptosystemis a suite ofcryptographic algorithmsneeded to implement a particular security service, such as confidentiality (encryption).[1]
Typically, a cryptosystem consists of three algorithms: one forkey generation, one for encryption, and one for decryption. The termcipher(sometimescypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the termcryptosystemis most often used when the key generation algorithm is important. For this reason, the termcryptosystemis commonly used to refer topublic keytechniques; however both "cipher" and "cryptosystem" are used forsymmetric keytechniques.
Mathematically, a cryptosystem or encryption scheme can be defined as a tuple $(\mathcal{P}, \mathcal{C}, \mathcal{K}, \mathcal{E}, \mathcal{D})$ with the following properties:
1. $\mathcal{P}$ is a set called the "plaintext space". Its elements are called plaintexts.
2. $\mathcal{C}$ is a set called the "ciphertext space". Its elements are called ciphertexts.
3. $\mathcal{K}$ is a set called the "key space". Its elements are called keys.
4. $\mathcal{E} = \{E_e : e \in \mathcal{K}\}$ is a set of functions $E_e : \mathcal{P} \to \mathcal{C}$, called "encryption functions".
5. $\mathcal{D} = \{D_d : d \in \mathcal{K}\}$ is a set of functions $D_d : \mathcal{C} \to \mathcal{P}$, called "decryption functions".
For each $e \in \mathcal{K}$, there is $d \in \mathcal{K}$ such that $D_d(E_e(p)) = p$ for all $p \in \mathcal{P}$.[2]
Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem.
A classical example of a cryptosystem is theCaesar cipher. A more contemporary example is theRSAcryptosystem.
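As a minimal illustration of the tuple definition above, the sketch below implements the Caesar cipher: the key space is the 26 possible shifts, and decryption with the same key inverts encryption.

```python
def encrypt(key: int, plaintext: str) -> str:
    """E_k: shift each letter A-Z forward by `key` positions, modulo 26."""
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in plaintext)

def decrypt(key: int, ciphertext: str) -> str:
    """D_k: the inverse shift, so decrypt(k, encrypt(k, p)) == p."""
    return encrypt(-key, ciphertext)

assert decrypt(3, encrypt(3, "ATTACKATDAWN")) == "ATTACKATDAWN"
```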
Another example of a cryptosystem is theAdvanced Encryption Standard(AES). AES is a widely used symmetric encryption algorithm that has become the standard for securing data in various applications.
The Paillier cryptosystem is another example, used to preserve and maintain the privacy of sensitive information. It is featured in electronic voting, electronic lotteries and electronic auctions.[3]
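The property that makes Paillier useful in such applications is its additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so encrypted votes can be tallied without decrypting them individually. The following is a toy sketch with tiny, insecure parameters (real keys use primes of 1024 bits or more).

```python
import math
import random

p, q = 17, 19                      # toy primes; insecure, for illustration only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)               # since g = n+1, L(g^lam mod n^2) = lam

def encrypt(m: int) -> int:
    """c = g^m * r^n mod n^2, with r random and coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(7), encrypt(35)
assert decrypt(c1 * c2 % n2) == 42  # E(7) * E(35) decrypts to 7 + 35
```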
|
https://en.wikipedia.org/wiki/Cryptosystem
|
Incomputer security, acold boot attack(or to a lesser extent, aplatform reset attack) is a type ofside channel attackin which an attacker withphysical accessto a computer performs amemory dumpof a computer'srandom-access memory (RAM)by performing a hard reset of the target machine. Typically, cold boot attacks are used for retrieving encryptionkeysfrom a runningoperating systemfor malicious or criminal investigative reasons.[1][2][3]The attack relies on thedata remanenceproperty ofDRAMandSRAMto retrieve memory contents thatremain readablein the seconds to minutes following a power switch-off.[2][4][5]
An attacker with physical access to a running computer typically executes a cold boot attack bycold-bootingthe machine and booting a lightweight operating system from a removable disk to dump the contents of pre-boot physicalmemoryto a file.[6][2]An attacker is then free to analyze the datadumpedfrom memory to find sensitive data, such as thekeys, using various forms ofkey finding attacks.[7][8]Since cold boot attacks targetrandom-access memory,full disk encryptionschemes, even with atrusted platform moduleinstalled are ineffective against this kind of attack.[2]This is because the problem is fundamentally ahardware(insecure memory) and not asoftwareissue. However, malicious access can be prevented by limiting physical access and using modern techniques to avoid storing sensitive data inrandom-access memory.
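As a rough illustration of such key finding, the sketch below merely flags high-entropy windows in a dump, since key material looks random against mostly structured RAM contents. This is a simplification and an assumption on my part: real tools from the cold boot literature, such as aeskeyfind, instead verify the mathematical structure of AES key schedules, which is far more precise.

```python
# A simplified sketch of key hunting in a memory dump: flag 16-byte
# windows whose byte entropy is near-maximal. Illustration only.
import math
from collections import Counter

def entropy(window: bytes) -> float:
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_keys(dump: bytes, width=16, threshold=3.9):
    """Yield offsets whose `width`-byte window has near-maximal entropy."""
    for off in range(0, len(dump) - width, 4):      # 4-byte aligned scan
        if entropy(dump[off:off + width]) >= threshold:
            yield off

# Usage against a hypothetical dump file captured after a cold boot:
# dump = open("memory.img", "rb").read()
# for off in candidate_keys(dump):
#     print(hex(off))
```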
DIMM memory modules gradually lose data over time as they lose power, but do not immediately lose all data when power is lost.[2]With certain memory modules, the time window for an attack can be extended to hours or even a week by cooling them with freeze spray or liquid nitrogen. Furthermore, because the bits fade away in a predictable manner as they disappear from memory over time, lost bits can often be reconstructed.[2]Consequently, an attacker can perform a memory dump of a module's contents by executing a cold boot attack. The ability to execute the cold boot attack successfully varies considerably across different systems, types of memory, memory manufacturers and motherboard properties, and may be more difficult to carry out than software-based methods or a DMA attack.[9]While the focus of current research is on disk encryption, any sensitive data held in memory is vulnerable to the attack.[2]
Attackers execute cold boot attacks by forcefully and abruptly rebooting a target machine and then booting a pre-installed operating system from aUSB flash drive,CD-ROMorover the network.[3]In cases where it is not practical to hard reset the target machine, an attacker may alternatively physically remove thememory modulesfrom the original system and quickly place them into a compatible machine under the attacker's control, which is then booted to access the memory.[2]Further analysis can then be performed against the data dumped fromRAM.
A similar kind of attack can also be used to extract data from memory, such as aDMA attackthat allows the physical memory to be accessed via a high-speed expansion port such asFireWire.[3]A cold boot attack may be preferred in certain cases, such as when there is high risk of hardware damage. Using the high-speed expansion port canshort out, or physically damage hardware in certain cases.[3]
Cold boot attacks are typically used for digital forensic investigations, malicious purposes such as theft, and data recovery.[3]
In certain cases, a cold boot attack is used in the discipline of digital forensics to forensically preserve data contained within memory as criminal evidence.[3]For example, when it is not practical to preserve data in memory through other means, such as when a running system is secured and cannot otherwise be accessed, a cold boot attack may be used to dump the contents of random-access memory.[3]A cold boot attack may also be necessary when a hard disk is encrypted with full disk encryption and the disk potentially contains evidence of criminal activity; access to the memory can reveal information about the state of the system at the time, such as which programs were running.[3]
A cold boot attack may be used by attackers to gain access to encrypted information such as financial information ortrade secretsfor malicious intent.[10]
A common purpose of cold boot attacks is to circumvent software-based disk encryption. Cold boot attacks when used in conjunction withkey finding attackshave been demonstrated to be an effective means of circumventingfull disk encryptionschemes of various vendors andoperating systems, even where aTrusted Platform Module(TPM)secure cryptoprocessoris used.[2]
In the case of disk encryption applications that can be configured to allow the operating system to boot without a pre-bootPINbeing entered or a hardware key being present (e.g.BitLockerin a simple configuration that uses a TPM without atwo-factor authenticationPIN or USB key), the time frame for the attack is not limiting at all.[2]
BitLockerin its default configuration uses atrusted platform modulethat neither requires aPIN, nor an external key to decrypt the disk. When theoperating systemboots,BitLockerretrieves the key from the TPM, without any user interaction. Consequently, an attacker can simply power on the machine, wait for the operating system to beginbootingand then execute a cold boot attack against the machine to retrieve the key. Due to this,two-factor authentication, such as a pre-boot PIN or a removable USB device containing a startup key together with a TPM should be used to work around this vulnerability in the default BitLocker implementation.[11][5]However, this workaround only prevents a cold boot attack if the machine was off before the attacker gained physical access. If the machine had already booted and is running, it does not prevent an attacker from retrieving sensitive data from memory, nor from retrieving encryption keys cached in memory.
Since a memory dump can be easily performed by executing a cold boot attack, storage of sensitive data in RAM, such as encryption keys for full disk encryption, is unsafe. Several solutions have been proposed for storing encryption keys in areas other than random-access memory. While these solutions may reduce the chance of breaking full disk encryption, they provide no protection for other sensitive data stored in memory.
One solution for keeping encryption keys out of memory is register-based key storage. Implementations of this solution areTRESOR[12]and Loop-Amnesia.[13]Both of these implementations modify thekernelof an operating system so thatCPU registers(in TRESOR's case thex86 debug registersand in Loop-Amnesia's case the AMD64 or EMT64 profiling registers) can be used to store encryption keys, rather than in RAM. Keys stored at this level cannot easily be read fromuserspace[citation needed]and are lost when the computer restarts for any reason. TRESOR and Loop-Amnesia both must use on-the-flyround keygeneration due to the limited space available for storing cryptographic tokens in this manner. For security, both disable interrupts to prevent key information from leaking to memory from the CPU registers while encryption or decryption is being performed, and both block access to the debug or profile registers.
There are two potential areas in modern x86 processors for storing keys: the SSE registers, which could in effect be made privileged by disabling all SSE instructions (and, necessarily, any programs relying on them), and the debug registers, which are much smaller but raise no such issues.
A proof of concept distribution called 'paranoix' based on the SSE register method has been developed.[14]The developers claim that "running TRESOR on a 64-bit CPU that supports AES-NI, there is no performance penalty compared to a generic implementation of AES",[15]and that it runs slightly faster than standard encryption despite the need for key recalculation.[12]The primary advantage of Loop-Amnesia compared to TRESOR is that it supports the use of multiple encrypted drives; the primary disadvantages are a lack of support for 32-bit x86 and worse performance on CPUs not supporting AES-NI.
"Frozen cache" (sometimes known as "cache as RAM"),[16]may be used to securely store encryption keys. It works by disabling a CPU's L1 cache and uses it for key storage, however, this may significantly degrade overall system performance to the point of being too slow for most purposes.[17][better source needed]
A similar cache-based solution was proposed by Guan et al. (2015)[18]by employing the WB (Write-Back) cache mode to keep data in caches, reducing the computation times of public key algorithms.
Mimosa,[19]presented at IEEE S&P 2015, offers a more practical defence of public-key cryptographic computations against cold-boot attacks and DMA attacks. It employs hardware transactional memory (HTM), originally proposed as a speculative memory-access mechanism to boost the performance of multi-threaded applications. The strong atomicity guarantee provided by HTM is used to defeat illegal concurrent accesses to the memory space that contains sensitive data. The RSA private key is encrypted in memory by an AES key that is protected by TRESOR. On request, an RSA private-key computation is conducted within an HTM transaction: the private key is first decrypted into memory, and then the RSA decryption or signing is performed. Because a plaintext RSA private key appears only as modified data within an HTM transaction, any read operation on that data aborts the transaction and rolls memory back to its initial state, in which the key is still encrypted. Because HTM is currently implemented in caches or store buffers, which reside inside the CPU rather than in external RAM chips, cold-boot attacks are prevented. Mimosa thus defends against attacks that attempt to read sensitive data from memory (including cold-boot attacks, DMA attacks, and other software attacks), while introducing only a small performance overhead.
Best practice recommends dismounting any encrypted, non-system disks when they are not in use, since most disk encryption software is designed to securely erase keys cached in memory after use.[20]This reduces the risk of an attacker being able to salvage encryption keys from memory by executing a cold boot attack. To minimize access to encrypted information on the operating system hard disk, the machine should be completely shut down when not in use, reducing the likelihood of a successful cold boot attack.[2][21]However, data may remain readable from tens of seconds to several minutes depending upon the physical RAM device in the machine, potentially allowing some data to be retrieved from memory by an attacker. Configuring an operating system to shut down or hibernate when unused, instead of using sleep mode, can help mitigate the risk of a successful cold boot attack.
Typically, a cold boot attack can be prevented by limiting an attacker'sphysical accessto the computer or by making it increasingly difficult to carry out the attack. One method involvessolderingor gluing in thememory modulesonto themotherboard, so they cannot be easily removed from their sockets and inserted into another machine under an attacker's control.[2]However, this does not prevent an attacker from booting the victim's machine and performing amemory dumpusing a removableUSB flash drive. Amitigationsuch asUEFI Secure Bootor similar boot verification approaches can be effective in preventing an attacker from booting up a custom software environment to dump out the contents of soldered-on main memory.[22]
Encryptingrandom-access memory(RAM) mitigates the possibility of an attacker being able to obtainencryption keysor other material from memory via a cold boot attack. This approach may require changes to the operating system, applications, or hardware. One example of hardware-based memory encryption was implemented in theMicrosoftXbox.[23]Implementations on newer x86-64 hardware are available from AMD and on IntelWillow Coveand newer.
Software-based full memory encryption is similar to CPU-based key storage since key material is never exposed to memory, but is more comprehensive since all memory contents are encrypted. In general, only the immediately needed pages are decrypted and read on the fly by the operating system.[24]Implementations of software-based memory encryption include a commercial product from PrivateCore[25][26][27]and RamCrypt, a kernel patch for the Linux kernel that encrypts data in memory and stores the encryption key in the CPU registers in a manner similar to TRESOR.[12][24]
Since version 1.24,VeraCryptsupports RAM encryption for keys and passwords.[28]
More recently, several papers have been published highlighting the availability of security-enhanced x86 and ARM commodity processors.[29][30]In that work, an ARM Cortex A8 processor is used as the substrate on which a full memory encryption solution is built. Process segments (for example, stack, code or heap) can be encrypted individually or in composition. This work marks the first full memory encryption implementation on a general-purpose commodity processor. The system provides both confidentiality and integrity protections of code and data which are encrypted everywhere outside the CPU boundary.
Since cold boot attacks target unencryptedrandom-access memory, one solution is to erase sensitive data from memory when it is no longer in use. The "TCG Platform Reset Attack Mitigation Specification",[31]an industry response to this specific attack, forces theBIOSto overwrite memory duringPOSTif the operating system was not shut down cleanly. However, this measure can still be circumvented by removing the memory module from the system and reading it back on another system under the attacker's control that does not support these measures.[2]
An effective secure-erase feature would wipe the RAM in the less than 300 ms available before power is lost, in conjunction with a secure BIOS and a hard drive/SSD controller that encrypts data on the M.2 and SATA ports. If the RAM itself contained no serial presence detect or other data, and the timings were stored in the BIOS with some form of failsafe requiring a hardware key to change them, it would be nearly impossible to recover any data, and the system would also be immune to TEMPEST attacks, man-in-the-RAM attacks and other possible infiltration methods.[citation needed][32]
Some operating systems such as Tails provide a feature that securely writes random data to system memory when the operating system is shut down, to mitigate against a cold boot attack.[33]However, video memory erasure is still not possible, and as of 2022 this remains an open ticket on the Tails forum.[34]Potential attacks could exploit this flaw.
A cold boot attack can be prevented by ensuring no keys are stored by the hardware under attack.
Memory scrambling, a feature of modern Intel Core processors, may be used to minimize undesirable parasitic effects in semiconductors.[38][39][40][41]However, because the scrambling is only used to decorrelate patterns within the memory contents, the memory can be descrambled via a descrambling attack.[42][43]Hence, memory scrambling is not a viable mitigation against cold boot attacks.
Sleep modeprovides no additional protection against a cold boot attack because data typically still resides in memory while in this state. As such, full disk encryption products are still vulnerable to attack because the keys reside in memory and do not need to be re-entered once the machine resumes from a low power state.
Although limiting the boot device options in theBIOSmay make it slightly harder to boot another operating system, firmware in modern chipsets tends to allow the user to override the boot device duringPOSTby pressing a specified hot key.[5][44][45]Limiting the boot device options will not prevent the memory module from being removed from the system and read back on an alternative system either. In addition, most chipsets provide a recovery mechanism that allows the BIOS settings to be reset to default even if they are protected with a password.[10][46]TheBIOS settingscan also be modified while the system is running to circumvent any protections enforced by it, such as memory wiping or locking the boot device.[47][48][49]
The cold boot attack can be adapted and carried out in a similar manner on Androidsmartphones.[50]A cold boot can be performed by disconnecting the phone's battery to force a hard reset or holding down the power button.[50]The smartphone is then flashed with an operating system image that can perform amemory dump. Typically, the smartphone is connected to an attacker's machine using aUSBport.
Typically, Androidsmartphonessecurely erase encryption keys fromrandom-access memorywhen the phone is locked.[50]This reduces the risk of an attacker being able to retrieve the keys from memory, even if they succeeded in executing a cold boot attack against the phone.
|
https://en.wikipedia.org/wiki/Cold_boot_attack
|
Cryptographic primitivesare well-established, low-levelcryptographicalgorithmsthat are frequently used to buildcryptographic protocolsfor computer security systems.[1]These routines include, but are not limited to,one-way hash functionsandencryption functions.
When creatingcryptographic systems,designersuse cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion.
Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be breakable only with X number of computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system.
Cryptographic primitives are one of the building blocks of every cryptosystem, e.g.,TLS,SSL,SSH, etc. Cryptosystem designers, not being in a position to definitivelyprovetheir security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any cryptosystem and it is the responsibility of the designer(s) to avoid them.
Cryptographic primitives are not cryptographic systems, as they are quite limited on their own. For example, a bare encryption algorithm will provide no authentication mechanism, nor any explicit message integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encrypted but also protected from tampering (i.e. it is confidential and integrity-protected), an encryption routine, such as DES, and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message without invalidating the message digest value.
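A hedged sketch of this combination using modern stand-ins (AES-CTR via the pyca/cryptography package plus HMAC-SHA256, rather than DES and SHA-1, which should no longer be used) in an encrypt-then-MAC arrangement:

```python
# Combining two primitives: a cipher for confidentiality and a MAC for
# integrity. Function names and parameters here are illustrative.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = encryptor.update(plaintext) + encryptor.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag            # confidentiality + integrity

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):      # reject tampering
        raise ValueError("message was modified")
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return decryptor.update(ct) + decryptor.finalize()

enc_key, mac_key = os.urandom(32), os.urandom(32)   # independent keys
blob = seal(enc_key, mac_key, b"attack at dawn")
assert open_(enc_key, mac_key, blob) == b"attack at dawn"
```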
Combining cryptographic primitives to make a security protocol is itself an entire specialization. Most exploitable errors (i.e., insecurities in cryptosystems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or insufficiently careful implementation. Mathematical analysis of protocols is, at the time of this writing, not mature.[citation needed]There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the SPI calculus) but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then mistakes are common. An illustrative example, for a real system, can be seen on the OpenSSL vulnerability news page.
|
https://en.wikipedia.org/wiki/Cryptographic_primitive
|
There are a number ofstandardsrelated tocryptography. Standard algorithms and protocols provide a focus for study; standards for popular applications attract a large amount ofcryptanalysis.
|
https://en.wikipedia.org/wiki/Cryptography_standards
|
The Cyberspace Electronic Security Act of 1999 (CESA) is a bill proposed by the Clinton administration during the 106th United States Congress that would have enabled the government to harvest keys used in encryption. The act would have given law enforcement the ability to gain access to encryption keys and cryptography methods. The initial version of the act would have enabled federal law enforcement agencies to secretly use monitoring, electronic capture equipment and other technologies to access and obtain information. These provisions were later stricken from the act, although federal law enforcement agencies still have a significant degree of latitude to conduct investigations relating to electronic information. The act generated discussion about what capabilities should be allowed to law enforcement in the detection of criminal activity. After vocal objections from civil liberties groups, the administration backed away from the controversial bill.
|
https://en.wikipedia.org/wiki/Cyberspace_Electronic_Security_Act
|
An encrypted function is an attempt to provide mobile code privacy without requiring any tamper-resistant hardware. It is a method whereby mobile code can carry out cryptographic primitives.
Polynomial and rational functions are encrypted in such a way that the transformed function can still be implemented as a program consisting of cleartext instructions that a processor or interpreter understands, yet the processor does not learn the function it is computing. This field of study is gaining popularity as mobile cryptography.
Scenario:
Host A has an algorithm that computes a function f. A wants to send its mobile agent to B, which holds input x, to compute f(x), but A doesn't want B to learn anything about f.
Scheme:
Function f is encrypted in a way that results in E(f). Host A then creates another program P(E(f)), which implements E(f), and sends it to B through its agent. B then runs the agent, which computes P(E(f))(x) and returns the result to A. A then decrypts this to get f(x).
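As a toy illustration of this scheme (the choice of Paillier-style additively homomorphic encryption over encrypted polynomial coefficients is my assumption, in the spirit of Sander and Tschudin's encrypted polynomials; the parameters are far too small to be secure):

```python
# Toy illustration: A encrypts the coefficients of f, B evaluates
# E(f(x)) on its input x without learning f, and A decrypts the result.
import math, random

p, q = 293, 433                       # toy primes; real ones are ~1024 bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):                            # Paillier: Enc(m) = g^m * r^n mod n^2
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):                            # Dec(c) = L(c^lam mod n^2) * mu mod n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Host A: f(x) = 7x^2 + 3x + 5, sent as encrypted coefficients E(f).
enc_coeffs = [enc(5), enc(3), enc(7)]

# Host B: computes E(f(x)) = prod E(c_i)^(x^i), never seeing f itself.
x = 11
enc_fx = 1
for i, ec in enumerate(enc_coeffs):
    enc_fx = enc_fx * pow(ec, x ** i, n2) % n2

# Host A decrypts the returned value.
assert dec(enc_fx) == 7 * x**2 + 3 * x + 5
```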
Drawbacks:
Finding appropriate encryption schemes that can transform arbitrary functions is a challenge, and the scheme does not prevent denial of service, replay, experimental extraction and other attacks.
|
https://en.wikipedia.org/wiki/Encrypted_function
|
Theexport of cryptographyis the transfer from one country to another of devices and technology related tocryptography.
In the early days of theCold War, the United States and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly theEastern bloc. All export of technology classed as 'critical' required a license.CoComwas organized to coordinate Western export controls.
Many countries, notably those participating in theWassenaar Arrangement, introduced restrictions. The Wassenaar restrictions were largely loosened in the late 2010s.[1][2]
|
https://en.wikipedia.org/wiki/Export_of_cryptography
|
Geo-blocking,geoblockingorgeolockingis technology thatrestricts access to Internet contentbased upon the user'sgeographical location. In a geo-blocking scheme, the user's location is determined usingInternet geolocationtechniques, such as checking the user'sIP addressagainst ablacklistorwhitelist,GPSqueries in the case of a mobile device, accounts, and measuring theend-to-end delayof a network connection to estimate the physical location of the user.[1][2]IP address location tracking—a field pioneered byCyril Houri, the inventor of one of the first systems capable of identifying a user's geographical location via their IP address[3]—is typically used for geo-blocking. This technology has become widely used in fraud prevention, advertising, and content localization, which are integral to geo-blocking applications.[4]The result of the checks is used to determine whether the system will approve or deny access to thewebsiteor to particular content. The geolocation may also be used to modify the content provided: for example, the currency in which goods are quoted, the price or the range of goods that are available.
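A minimal sketch of the IP-address check described above, using Python's ipaddress module (the prefix-to-country table and the allowed set are hypothetical; production systems use commercial geolocation databases kept constantly up to date):

```python
# IP-based geo-blocking: map the client address to a region via a
# prefix table, then approve or deny against a whitelist.
import ipaddress

# Hypothetical geolocation table: network prefix -> country code.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "AU",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}
ALLOWED_COUNTRIES = {"US"}             # whitelist for this content licence

def lookup_country(addr: str):
    ip = ipaddress.ip_address(addr)
    for net, country in GEO_TABLE.items():
        if ip in net:
            return country
    return None

def is_allowed(addr: str) -> bool:
    return lookup_country(addr) in ALLOWED_COUNTRIES

assert is_allowed("198.51.100.7")      # US user: approved
assert not is_allowed("203.0.113.9")   # AU user: denied (geo-blocked)
```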
The term is most commonly associated with its use to restrict access to premium multimedia content on the Internet, such as films and television shows, primarily for copyright and licensing reasons. There are other uses for geo-blocking, such as blocking malicious traffic, enforcing price discrimination, location-aware authentication, fraud prevention, and restricting online gambling (where gambling laws vary by region). Websites also use geo-blocking to comply with sanctions rules and regulations.[5]
The ownership of exclusive territorial rights to audiovisual works may differ between regions, requiring the providers of the content to disallow access for users outside of their designated region; for example, although an online service,HBO Nowis only available to residents of the United States, and cannot be offered in other countries because its parent networkHBOhad already licensed exclusive rights to its programming to different broadcasters (such as in Canada, where HBO licensed its back-catalogue toBell Media), who may offer their own, similar service specific to their own region and business model (such asCrave).[6][7]For similar reasons, the library of content available on subscriptionvideo on demandservices such asNetflixmay also vary between regions, or the service may not even be available in the user's country at all.[8][9]
Geo-blocking can be used for other purposes as well.Price discriminationby online stores can be enforced by geo-blocking, forcing users to buy products online from a foreign version of a site where prices may be unnecessarily higher than those of their domestic version (although the inverse is often the case). The "Australia Tax" has been cited as an example of this phenomenon, which has led to governmental pressure to restrict how geo-blocking can be used in this manner in the country.[10][11]
Geo-blocking is also applied to enforce compliance with local laws and regulations. One notable example is theLigue contre le racisme et l'antisémitisme et Union des étudiants juifs de France v. Yahoo!(LICRA v. Yahoo!) case in 2000. TheTribunal de grande instanceof Paris ruled that Yahoo! must prevent French users from accessing its auction sites where Nazi memorabilia were being sold, in violation of Article R645-1 of the French Criminal Code, and it was demonstrated that IP geolocation could be used to filter at least 90% of traffic coming from France.[12]
Other noted uses include blocking access from countries that a particular website is not relevant to (especially if the majority of traffic from that country is malicious),[13]and voluntarily blocking access to content or services that are illegal under local laws. This can includeonline gambling,[14]and various international websites blocking access to users within theEuropean Economic Areadue to concerns of liability under theGeneral Data Protection Regulation(GDPR).[15][16][17]
Geo-blocking can be circumvented. When IP address-based geo-blocking is employed, virtual private network (VPN) and anonymizer services can be used to evade geo-blocks. A user can, for example, access a website using a U.S. IP address in order to access content or services that are not available from outside the country. Hulu, Netflix, Amazon and BBC iPlayer are among the foreign video services widely used through these means by foreign users.[2]The popularity of Netflix among Australian VPN users prompted the company to officially establish an Australian version of its service in 2014.[18]In response to complaints over the quality of domestic coverage by NBC, along with the requirement that viewers be subscribers to a participating pay television provider in order to access the online content, a large number of American viewers used VPN services to stream foreign online coverage of the 2012 Summer Olympics and 2014 Winter Olympics from British and Canadian broadcasters. Unlike NBC's coverage, this foreign coverage used only a geo-block and did not require a TV subscription.[19]
However, many services, including Netflix,[20]Hulu,[21]and even Wikipedia,[22]have implemented measures to recognize and limit or block the use of VPNs by identifying IP addresses associated with such services. This limits users' ability to access geo-blocked content through VPNs or anonymizers.
In 2009, Venezuela subsidized the launch of the communications satellite Venesat-1, in part to amplify Telesur's programming by enabling it to avoid geo-blocking efforts by DirecTV, an American company.[23]
In 2013, the New Zealand internet service providerSlingshotintroduced a similar feature known as "global mode"; initially intended for travellers to enable access to local websites blocked in New Zealand, the service was re-launched in July 2014 as a feature to all Slingshot subscribers. The consumer-focused re-launch focused on its ability to provide access to U.S. online video services.[8][18][19][24]Unlike manually-configured VPN services, Global Mode was implemented passively at the ISP level and was automatically activated based on a whitelist, without any further user intervention.[25]
The legality of circumventing geo-blocking to access foreign video services under local copyright laws is unclear and varies by country.[25]Members of theentertainment industry(including broadcasters and studios) have contended that the use of VPNs and similar services to evade geo-blocking by online video services is a violation of copyright laws, as the foreign service does not hold the rights to make its content available in the user's country—thus infringing and undermining the rights held by a local rights holder.[9][24][26]Accessing online video services from outside the country in which they operate is typically considered a violation of their respectiveterms of use; some services have implemented measures to block VPN users, despite there being legitimate uses for such proxy services, under the assumption that they are using them to evade geographic filtering.[9][24][27][8][7][28]
Leaked e-mails from the2014 Sony Pictures hackrevealed statements by Keith LeGoy,Sony Pictures Television's president of international distribution, describing the international usage of Netflix over VPN services as being "semi-sanctioned"piracythat helped to illicitly increase its market share, and criticizing the company for not taking further steps to prevent usage of the service outside of regions where they have licenses to their content, such as detecting ineligible users via their payment method.[9][24]On 14 January 2016, Netflix announced its intent to strengthen measures to prevent subscribers from accessing regional versions of the service that they are not authorized to use.[29]
In Australia, a policy FAQ published by thenMinister for CommunicationsMalcolm Turnbull, states that users violating an "international commercial arrangement to protect copyright in different countries or regions" is not illegal undercopyright law of Australia.[24]However, an amendment to Australian copyright law allows courts to order the blocking of websites that primarily engage in "facilitating"copyright infringement—a definition which could include VPN services that market themselves specifically for the purpose of evading geo-blocking.[24][30]Prior to the passing of this amendment in June 2015, Turnbull acknowledged that VPN services have "a wide range of legitimate uses, not least of which is the preservation of privacy—something which every citizen is entitled to secure for themselves—and [VPN providers] have no oversight, control or influence over their customers' activities."[31]
On 6 May 2015, theEuropean Unionannounced the adoption of its "Digital Single Market" strategy, which would, among other changes, aim to end the use of "unjustified" geo-blocking between EU countries, arguing that "too many Europeans cannot use online services that are available in other EU countries, often without any justification; or they are re-routed to a local store with different prices. Such discrimination cannot exist in aSingle Market."[32][33]However, proposals issued by the European Commission on 25 May 2016 excluded the territorial licensing of copyrighted audiovisual works from this strategy.[34][35]
On 1 April 2018, new digital media portability rules took effect, which requires paid digital media services to offer "roaming" within the EU. This means that, for example, a subscriber toNetflixin one EU country must still be able to access their home country's version of the service when travelling into other EU countries.[36][37][38]
The European Union has approved the Regulation on Measures to Combat Unjustified Geoblocking and Other Forms of Discrimination Based on Citizenship, Place of Residence or Location of a Person in the Internal Market, which entered into force on 3 December 2018.[39]
The geo-blocking regulation aims to provide more options for consumers and businesses in the EU internal market. It addresses the problem that (potential) customers cannot buy goods and services from sellers located in another Member State for reasons related to their citizenship, place of residence or location, and are thereby discriminated against when they try to get access to the best offers, prices or terms of sale compared to the nationals or residents of the sellers' member state.
The new rules apply only if the other party is a consumer or a company that purchases services or products exclusively for end use (B2C, B2B). The geo-blocking regulation does not apply if products are sold to business customers for commercial purposes. The regulation does not completely prohibit geo-blocking and geo-discrimination: it only prohibits certain forms.
Geo-blocking regulations prohibit geo-blocking and geo-discrimination in three situations.
The prohibition of direct or indirect discrimination on the basis of citizenship is a fundamental principle of EU law. In situations not covered by this Regulation, Article 20(2) of the Services Directive (2006/123/EC) may apply. According to this provision, sellers can only apply a difference of treatment based on nationality or place of residence if this is justified by objective criteria. In some cases, industry-specific legislation (such as transport or health) that addresses this issue may also apply. In addition, the Regulation does not affect the TFEU rules, including the non-discrimination rules.[40]
In April 2015, a group of media companies in New Zealand, includingMediaWorks,Spark,Sky Network Television, andTVNZ, jointly sentcease and desistnotices to several ISPs offering VPN services for the purpose of evading geo-blocking, demanding that they pledge to discontinue the operation of these services by 15 April 2015, and to inform their customers that such services are "unlawful". The companies accused the ISPs of facilitating copyright infringement by violating their exclusive territorial rights to content in the country, and misrepresenting the alleged legality of the services in promotional material. In particular, Spark argued that the use of VPNs to access foreign video on demand services was cannibalizing its own domestic service Lightbox. At least two smaller providers (Lightwire Limitedand Unlimited Internet) announced that they would pull their VPN services in response to the legal concerns. However, CallPlus, the parent company of Slingshot andOrcon, objected to the claims, arguing that the Global Mode service was "completely legal", and accused the broadcasters of displayingprotectionism. Later that month, it was reported that the broadcasters planned to go forward with legal action against CallPlus.[2][24][41]
On 24 June 2015, it was announced that the media companies reached an out-of-court settlement, in which ByPass Network Services, who operates the service, would discontinue it effective 1 September 2015.
|
https://en.wikipedia.org/wiki/Geo-blocking
|
Incryptography,indistinguishability obfuscation(abbreviatedIOoriO) is a type ofsoftware obfuscationwith the defining property that obfuscating any two programs that compute the samemathematical functionresults in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it.[1]Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function arecomputationally indistinguishable.[2]
Indistinguishability obfuscation has several interesting theoretical properties. Firstly, iO is the "best-possible" obfuscation (in the sense that any secret about a program that can be hidden by any obfuscator at all can also be hidden by iO). Secondly, iO can be used to construct nearly the entire gamut ofcryptographic primitives, including both mundane ones such aspublic-key cryptographyand more exotic ones such asdeniable encryptionandfunctional encryption(which are types of cryptography that no-one previously knew how to construct[3]), but with the notable exception of collision-resistanthash functionfamilies. For this reason, it has been referred to as "crypto-complete". Lastly, unlike many other kinds of cryptography, indistinguishability obfuscation continues to exist even ifP=NP(though it would have to be constructed differently in this case), though this does not necessarily imply that iO exists unconditionally.
Though the idea of cryptographic software obfuscation has been around since 1996, indistinguishability obfuscation was first proposed by Barak et al. (2001), who proved that iO exists if P = NP. For the P ≠ NP case (which is harder, but also more plausible[2]), progress was slower: Garg et al. (2013)[4]proposed a construction of iO based on a computational hardness assumption relating to multilinear maps, but this assumption was later disproven. A construction based on "well-founded assumptions" (hardness assumptions that have been well-studied by cryptographers, and thus widely assumed secure) had to wait until Jain, Lin, and Sahai (2020). (Even so, one of the assumptions used in the 2020 proposal is not secure against quantum computers.)
Currently known indistinguishability obfuscation candidates are very far from being practical. As measured by a 2017 paper,[needs update]even obfuscating thetoy functionwhich outputs thelogical conjunctionof its thirty-twoBoolean data typeinputs produces a program nearly a dozengigabyteslarge.
LetiO{\displaystyle {\mathcal {iO}}}be some uniformprobabilistic polynomial-timealgorithm. TheniO{\displaystyle {\mathcal {iO}}}is called anindistinguishability obfuscatorif and only if it satisfies both of the following two statements:[5][6][7]
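The two statements themselves are not reproduced above; as they are standardly formulated in the literature, they are, paraphrasing: (1) functionality: for every circuit C{\displaystyle C} and every input x{\displaystyle x}, Pr[C′(x)=C(x):C′←iO(C)]=1{\displaystyle \Pr[C'(x)=C(x):C'\leftarrow {\mathcal {iO}}(C)]=1}; and (2) indistinguishability: for every pair of equal-size circuits C0,C1{\displaystyle C_{0},C_{1}} computing the same function, and every probabilistic polynomial-time distinguisher D{\displaystyle D}, the advantage |Pr[D(iO(C0))=1]−Pr[D(iO(C1))=1]|{\displaystyle |\Pr[D({\mathcal {iO}}(C_{0}))=1]-\Pr[D({\mathcal {iO}}(C_{1}))=1]|} is negligible in the security parameter.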
In 2001, Barak et al., showing thatblack-box obfuscationis impossible, also proposed the idea of an indistinguishability obfuscator, and constructed an inefficient one.[8][7][2]Although this notion seemed relatively weak, Goldwasser and Rothblum (2007) showed that an efficient indistinguishability obfuscator would be a best-possible obfuscator, and any best-possible obfuscator would be an indistinguishability obfuscator.[8][9](However, forinefficientobfuscators, no best-possible obfuscator exists unless thepolynomial hierarchycollapses to the second level.[9])
Anopen-source softwareimplementation of an iO candidate was created in 2015.[10]
Barak et al. (2001) proved that an inefficient indistinguishability obfuscator exists for circuits: namely, the obfuscator that outputs the lexicographically first circuit of the same size that computes the same function.[7]If P = NP holds, then an indistinguishability obfuscator exists, even though essentially no other kind of cryptography would exist.[2]
A candidate construction of iO withprovable securityunder concretehardness assumptionsrelating tomultilinear mapswas published by Garg et al. (2013),[2][11][3]but this assumption was later invalidated.[11][3](Previously, Garg, Gentry, and Halevi (2012) had constructed a candidate version of a multilinear map based on heuristic assumptions.[4])
Starting from 2016, Lin began to explore constructions of iO based on less strict versions of multilinear maps, constructing a candidate based on maps of degree up to 30, and eventually a candidate based on maps of degree up to 3.[3]Finally, in 2020, Jain, Lin, and Sahai proposed a construction of iO based on the symmetric external Diffie–Hellman, learning with errors, and learning parity with noise assumptions,[3][5]as well as the existence of a super-linear stretch pseudorandom generator in the function class NC0.[5](The existence of pseudorandom generators in NC0, even with sub-linear stretch, was a long-standing open problem until 2006.[12]) It is possible that this construction could be broken with quantum computing, but there is an alternative construction that may be secure even against that (although the latter relies on less established security assumptions).[3][speculation?]
There have been attempts to implement and benchmark iO candidates.[2]In 2017, an obfuscation of the functionx1∧x2∧⋯∧x32{\displaystyle x_{1}\wedge x_{2}\wedge \dots \wedge x_{32}}at a security level of 80 bits took 23.5 minutes to produce and measured 11.6 GB, with an evaluation time of 77 ms.[2]Additionally, an obfuscation of theAdvanced Encryption Standardencryption circuit at a security level of 128 bits would measure 18 PB and have an evaluation time of about 272 years.[2]
It is useful to divide the question of the existence of iO by using Russell Impagliazzo's "five worlds",[13]which are five different hypothetical situations about average-case complexity.[6]
Indistinguishability obfuscators, if they exist, could be used for an enormous range of cryptographic applications, so much so that iO has been referred to as a "central hub" for cryptography,[1][3]the "crown jewel of cryptography",[3]or "crypto-complete".[2]Concretely, an indistinguishability obfuscator (with the additional assumption of the existence of one-way functions[2]) could be used to construct many kinds of cryptography.
Additionally, if iO and one-way functions exist, then problems in thePPADcomplexity class are provably hard.[5][19]
However, indistinguishability obfuscation cannot be used to constructeverypossible cryptographic protocol: for example, no black-box construction can convert an indistinguishability obfuscator to a collision-resistanthash functionfamily, even with atrapdoor permutation, except with anexponentialloss of security.[20]
|
https://en.wikipedia.org/wiki/Indistinguishability_obfuscation
|
Multiple encryption is the process of encrypting an already encrypted message one or more times, either using the same or a different algorithm. It is also known as cascade encryption, cascade ciphering, multiple ciphering, and superencipherment. Superencryption refers to the outer-level encryption of a multiple encryption.
Some cryptographers, like Matthew Green of Johns Hopkins University, say multiple encryption addresses a problem that mostly doesn't exist:
Modern ciphers rarely get broken... You’re far more likely to get hit by malware or an implementation bug than you are to suffer a catastrophic attack on AES.
However, the quote above itself suggests an argument for multiple encryption: namely, poor implementation. Using two different cryptomodules and keying processes from two different vendors requires both vendors' wares to be compromised for security to fail completely.
Picking any two ciphers, if the key used is the same for both, the second cipher could possibly undo the first cipher, partly or entirely. This is true of ciphers where the decryption process is exactly the same as the encryption process (a reciprocal cipher): the second cipher would completely undo the first. If an attacker were to recover the key through cryptanalysis of the first encryption layer, the attacker could possibly decrypt all the remaining layers, assuming the same key is used for all layers.
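A quick demonstration of the reciprocal-cipher hazard: ROT13 is its own inverse, so a careless second layer with the same scheme and key restores the plaintext exactly.

```python
# Applying a reciprocal cipher twice with the same key undoes it entirely.
import codecs

once = codecs.encode("ATTACK AT DAWN", "rot13")    # 'NGGNPX NG QNJA'
twice = codecs.encode(once, "rot13")               # back to the plaintext
assert twice == "ATTACK AT DAWN"
```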
To prevent that risk, one can use keys that arestatistically independentfor each layer (e.g. independentRNGs).
Ideally each key should have separate and different generation, sharing, and management processes.
For encryption/decryption processes that require the sharing of an initialization vector (IV) or nonce, the IV is typically openly shared or made known to the recipient (and everyone else). It is good security policy never to provide the same data in both plaintext and ciphertext when using the same key and IV. Therefore, it is recommended (although at this moment without specific evidence) to use separate IVs for each layer of encryption.
With the exception of theone-time pad, no cipher has been theoretically proven to be unbreakable. Furthermore, some recurring properties may be found in theciphertextsgenerated by the first cipher. Since those ciphertexts are the plaintexts used by the second cipher, the second cipher may be rendered vulnerable to attacks based on known plaintext properties (see references below).
This is the case when the first layer is a program P that always adds the same string S of characters at the beginning (or end) of all ciphertexts (commonly known as amagic number). When found in a file, the string S allows anoperating systemto know that the program P has to be launched in order to decrypt the file. This string should be removed before adding a second layer.
To prevent this kind of attack, one can use the method provided by Bruce Schneier:[1]generate a random pad R of the same size as the plaintext M, then encrypt R with the first cipher and M XOR R with the second cipher.
A cryptanalyst must break both ciphers to get any information. This will, however, have the drawback of making the ciphertext twice as long as the original plaintext.
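A hedged Python sketch of this construction (the specific cipher pair, AES-CTR and ChaCha20 from the pyca/cryptography package, is my choice for illustration; any two independent ciphers with independent keys fit the pattern):

```python
# Cascade via a random pad: R goes through the first cipher and
# M XOR R through the second, so both must be broken to learn M.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cascade_encrypt(k1: bytes, k2: bytes, message: bytes):
    R = os.urandom(len(message))                     # one-time random pad
    n1, n2 = os.urandom(16), os.urandom(16)
    c1 = Cipher(algorithms.AES(k1), modes.CTR(n1)).encryptor()
    c2 = Cipher(algorithms.ChaCha20(k2, n2), mode=None).encryptor()
    return (n1, c1.update(R)), (n2, c2.update(xor(message, R)))

def cascade_decrypt(k1, k2, part1, part2):
    (n1, ct1), (n2, ct2) = part1, part2
    R = Cipher(algorithms.AES(k1), modes.CTR(n1)).decryptor().update(ct1)
    MR = Cipher(algorithms.ChaCha20(k2, n2), mode=None).decryptor().update(ct2)
    return xor(MR, R)                                # M = (M xor R) xor R

k1, k2 = os.urandom(32), os.urandom(32)              # independent keys
p1, p2 = cascade_encrypt(k1, k2, b"launch codes")
assert cascade_decrypt(k1, k2, p1, p2) == b"launch codes"
```

Note that the two ciphertext parts together are twice the length of M (plus nonces), matching the drawback stated above.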
Note, however, that a weak first cipher may merely make a second cipher that is vulnerable to a chosen-plaintext attack also vulnerable to a known-plaintext attack. Since a block cipher must not be vulnerable to a chosen-plaintext attack to be considered secure, the second cipher described above is not secure under that definition either. Consequently, both ciphers still need to be broken. The attack illustrates why strong assumptions are made about secure block ciphers, and why ciphers that are even partially broken should never be used.
TheRule of Twois a data security principle from theNSA'sCommercial Solutions for Classified Program (CSfC).[2]It specifies two completely independent layers of cryptography to protect data. For example, data could be protected by both hardware encryption at its lowest level and software encryption at the application layer. It could mean using twoFIPS-validated software cryptomodules from different vendors to en/decrypt data.
The importance of vendor and/or model diversity between the layers of components centers on removing the possibility that the manufacturers or models will share a vulnerability. This way, if one component is compromised, there is still an entire layer of encryption protecting the information at rest or in transit. The CSfC Program offers solutions to achieve diversity in two ways. "The first is to implement each layer using components produced by different manufacturers. The second is to use components from the same manufacturer, where that manufacturer has provided NSA with sufficient evidence that the implementations of the two components are independent of one another."[3]
The principle is practiced in the NSA's secure mobile phone called Fishbowl.[4]The phones use two layers of encryption protocols,IPsecandSecure Real-time Transport Protocol(SRTP), to protect voice communications. The SamsungGalaxy S9Tactical Edition is also an approved CSfC Component.
|
https://en.wikipedia.org/wiki/Multiple_encryption
|
Acryptosystemis considered to haveinformation-theoretic security(also calledunconditional security[1]) if the system is secure againstadversarieswith unlimited computing resources and time. In contrast, a system which depends on the computational cost ofcryptanalysisto be secure (and thus can be broken by an attack with unlimited computation) is calledcomputationally secureor conditionally secure.[2]
An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematicianClaude Shannon, one of the founders of classicalinformation theory, who used it to prove theone-time padsystem was secure.[3]Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such asdiplomatic cablesand high-level military communications.[citation needed]
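A minimal sketch of the one-time pad (with os.urandom standing in for a truly random key source, which strictly speaking it is not):

```python
# One-time pad: XOR with a truly random, single-use key as long as the
# message gives information-theoretic security.
import os

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) == len(data)        # key as long as the message, used once
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))              # stands in for a truly random pad
ct = otp(msg, pad)
assert otp(ct, pad) == msg              # XOR with the same pad decrypts
```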
There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement.
Algorithms which are computationally or conditionally secure (i.e., they are not information-theoretically secure) are dependent on resource limits. For example,RSArelies on the assertion that factoring large numbers is hard.
A weaker notion of security, defined by Aaron D. Wyner, established a now-flourishing area of research known as physical layer encryption.[4]It secures communication by exploiting the physical wireless channel with communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz).
Wyner's initial physical layer encryption work in the 1970s posed the Alice–Bob–Eve problem, in which Alice wants to send a message to Bob without Eve decoding it. It was shown that if the channel from Alice to Bob is statistically better than the channel from Alice to Eve, secure communication is possible.[5]That is intuitive, but Wyner measured the secrecy in information-theoretic terms, defining the secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. Shortly afterward, Imre Csiszár and Körner showed that secret communication was possible even if Eve had a statistically better channel to Alice than Bob did.[6]The basic idea of the information-theoretic approach to securely transmitting confidential messages (without using an encryption key) to a legitimate receiver is to use the inherent randomness of the physical medium (including noise and channel fluctuations due to fading) and to exploit the difference between the channel to the legitimate receiver and the channel to the eavesdropper to benefit the legitimate receiver.[7]More recent theoretical results concern determining the secrecy capacity and optimal power allocation in broadcast fading channels.[8][9]There are caveats, as many capacities are not computable unless it is assumed that Alice knows the channel to Eve. If that were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work,[10][11]and such results still make the unrealistic assumption that the eavesdropper's channel state information is known.
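For concreteness, in the standard degraded Gaussian wiretap setting (a classical result stated here for illustration), the secrecy capacity takes the closed form Cs=[log2(1+SNRB)−log2(1+SNRE)]+{\displaystyle C_{s}=\left[\log _{2}(1+\mathrm {SNR} _{B})-\log _{2}(1+\mathrm {SNR} _{E})\right]^{+}}, which is positive exactly when Bob's channel is better than Eve's, matching the intuition above.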
Still other work is less theoretical by attempting to compare implementable schemes. One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, which basically jams Eve. One paper by Negi and Goel details its implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known.[12][13]
Parallel to that work in the information theory community is work in the antenna community, which has been termed near-field direct antenna modulation or directional modulation.[14]It has been shown that by using aparasitic array, the transmitted modulation in different directions could be controlled independently.[15]Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using aphased array.[16]Others have demonstrated directional modulation withswitched arraysandphase-conjugating lenses.[17][18][19]
That type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme usingpattern-reconfigurabletransmit antennas for Alice called reconfigurablemultiplicative noise(RMN) complements additive artificial noise.[20]The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers.
The works mentioned above employ, in one way or another, the randomness present in the wireless channel to transmit information-theoretically secure messages. Conversely, one can analyze how much secrecy can be extracted from the randomness itself in the form of a secret key. That is the goal of secret key agreement.
In this line of work, started by Maurer[21]and Ahlswede and Csiszár,[22]the basic system model removes any restriction on the communication schemes and assumes that the legitimate users can communicate over a two-way, public, noiseless, and authenticated channel at no cost. This model has been subsequently extended to account for multiple users[23]and a noisy channel[24]among others.
|
https://en.wikipedia.org/wiki/Information-theoretic_security#Physical_layer_encryption
|
Incryptography, arotor machineis an electro-mechanicalstream cipherdevice used forencryptingand decrypting messages. Rotor machines were the cryptographic state-of-the-art for much of the 20th century; they were in widespread use from the 1920s to the 1970s. The most famous example is the GermanEnigma machine, the output of which was deciphered by the Allies during World War II, producing intelligence code-namedUltra.
The primary component of a rotor machine is a set ofrotors, also termedwheelsordrums, which are rotating disks with an array ofelectrical contactson either side. The wiring between the contacts implements a fixedsubstitutionof letters, replacing them in some complex fashion. On its own, this would offer little security; however, before or after encrypting each letter, the rotors advance positions, changing the substitution. By this means, a rotor machine produces a complexpolyalphabetic substitutioncipher, which changes with every key press.
Inclassical cryptography, one of the earliest encryption methods was the simplesubstitution cipher, where letters in a message were systematically replaced using some secret scheme.Monoalphabeticsubstitution ciphers used only a single replacement scheme — sometimes termed an "alphabet"; this could be easily broken, for example, by usingfrequency analysis. Somewhat more secure were schemes involving multiple alphabets,polyalphabetic ciphers. Because such schemes were implemented by hand, only a handful of different alphabets could be used; anything more complex would be impractical. However, using only a few alphabets left the ciphers vulnerable to attack. The invention of rotor machines mechanised polyalphabetic encryption, providing a practical way to use a much larger number of alphabets.
The earliest cryptanalytic technique wasfrequency analysis, in which letter patterns unique to every language could be used to discover information about the substitution alphabet(s) in use in a mono-alphabeticsubstitution cipher. For instance, in English, the plaintext letters E, T, A, O, I, N and S, are usually easy to identify in ciphertext on the basis that since they are very frequent, their corresponding ciphertext letters will also be as frequent. In addition,bigramcombinations like NG, ST and others are also very frequent, while others are rare indeed (Q followed by anything other than U for instance). The simplest frequency analysis relies on oneciphertextletter always being substituted for aplaintextletter in the cipher: if this is not the case, deciphering the message is more difficult. For many years, cryptographers attempted to hide the telltale frequencies by using several different substitutions for common letters, but this technique was unable to fully hide patterns in the substitutions for plaintext letters. Such schemes were being widely broken by the 16th century.
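A minimal sketch of the frequency count at the heart of this attack (the sample ciphertext is a Caesar shift chosen for illustration):

```python
# Tally ciphertext letters; in a mono-alphabetic substitution, the most
# common ones likely correspond to E, T, A, O, I, N, S in English.
from collections import Counter

ciphertext = "XLMW MW E WIGVIX QIWWEKI"            # Caesar shift of 4
counts = Counter(ch for ch in ciphertext if ch.isalpha())
for letter, freq in counts.most_common(5):
    print(letter, freq)   # here W (from S) and I (from E) dominate
```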
In the mid-15th century, a new technique was invented byAlberti, now known generally aspolyalphabetic ciphers, which recognised the virtue of using more than a single substitution alphabet; he also invented a simple technique for "creating" a multitude of substitution patterns for use in a message. Two parties exchanged a small amount of information (referred to as thekey) and used it to create many substitution alphabets, and so many different substitutions for each plaintext letter over the course of a single plaintext. The idea is simple and effective, but proved more difficult to use than might have been expected. Many ciphers were only partial implementations of Alberti's, and so were easier to break than they might have been (e.g. theVigenère cipher).
Not until the 1840s (Babbage) was any technique known which could reliably break any of the polyalphabetic ciphers. His technique also looked for repeating patterns in theciphertext, which provide clues about the length of the key. Once this is known, the message essentially becomes a series of messages, each as long as the length of the key, to which normal frequency analysis can be applied.Charles Babbage,Friedrich Kasiski, andWilliam F. Friedmanare among those who did most to develop these techniques.
Cipher designers tried to get users to use a different substitution for every letter, but this usually meant a very long key, which was a problem in several ways. A long key takes longer to convey (securely) to the parties who need it, and so mistakes are more likely in key distribution. Also, many users do not have the patience to carry out lengthy, letter-perfect evolutions, and certainly not under time pressure or battlefield stress. The 'ultimate' cipher of this type would be one in which such a 'long' key could be generated from a simple pattern (ideally automatically), producing a cipher in which there are so many substitutionalphabetsthat frequency counting and statistical attacks would be effectively impossible. Enigma, and the rotor machines generally, were just what was needed since they were seriously polyalphabetic, using a different substitution alphabet for each letter of plaintext, and automatic, requiring no extraordinary abilities from their users. Their messages were, generally, much harder to break than any previous ciphers.
It is straightforward to create a machine for performing simple substitution. In an electrical system with 26 switches attached to 26 light bulbs, any one of the switches will illuminate one of the bulbs.
If each switch is operated by a key on a typewriter, and the bulbs are labelled with letters, then such a system can be used for encryption by choosing the wiring between the keys and the bulbs: for example, typing the letter A would make the bulb labelled Q light up. However, the wiring is fixed, providing little security.
Rotor machines change the interconnecting wiring with each key stroke. The wiring is placed inside a rotor, and then rotated with a gear every time a letter is pressed.
So while pressing A the first time might generate a Q, the next time it might generate a J. Every letter pressed on the keyboard increments the rotor position and so yields a new substitution, implementing a polyalphabetic substitution cipher.
Depending on the size of the rotor, this may or may not be more secure than hand ciphers. If the rotor has only 26 positions on it, one for each letter, then all messages will have a (repeating) key 26 letters long. Although the key itself (mostly hidden in the wiring of the rotor) might not be known, the methods for attacking these types of ciphers don't need that information. So while such a single-rotor machine is certainly easy to use, it is no more secure than any other partial polyalphabetic cipher system.
But this is easy to correct. Simply stack more rotors next to each other, and gear them together. After the first rotor spins "all the way", make the rotor beside it spin one position. Now you would have to type 26 × 26 = 676 letters (for the Latin alphabet) before the key repeats, and yet it still only requires you to communicate a key of two letters/numbers to set things up. If a period of 676 letters is not long enough, another rotor can be added, resulting in a period 17,576 letters long.
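The geared stepping described above can be captured in a few lines of code. The following Python sketch is a toy illustration, not a model of any historical machine: the rotor wirings are generated at random from a fixed seed, and the stepping is a plain odometer carry.

```python
import random
import string

ALPHABET = string.ascii_uppercase

class Rotor:
    def __init__(self, wiring):
        self.wiring = wiring                      # a permutation of A-Z
        self.inverse = [0] * 26                   # inverse permutation
        for i, ch in enumerate(wiring):
            self.inverse[ALPHABET.index(ch)] = i
        self.position = 0                         # rotational offset 0..25

    def forward(self, i):
        j = (i + self.position) % 26
        return (ALPHABET.index(self.wiring[j]) - self.position) % 26

    def backward(self, i):
        j = (i + self.position) % 26
        return (self.inverse[j] - self.position) % 26

def step(rotors):
    # Odometer stepping: the fast rotor advances on every key press and
    # carries into its neighbour only after a full revolution.
    for r in rotors:
        r.position = (r.position + 1) % 26
        if r.position != 0:
            break

def crypt(rotors, text, decrypt=False):
    out = []
    for ch in text:
        step(rotors)
        i = ALPHABET.index(ch)
        for r in (reversed(rotors) if decrypt else rotors):
            i = r.backward(i) if decrypt else r.forward(i)
        out.append(ALPHABET[i])
    return "".join(out)

rng = random.Random(1)                            # fixed seed: a toy "key"
wirings = ["".join(rng.sample(ALPHABET, 26)) for _ in range(2)]

ciphertext = crypt([Rotor(w) for w in wirings], "ATTACKATDAWN")
plaintext = crypt([Rotor(w) for w in wirings], ciphertext, decrypt=True)
assert plaintext == "ATTACKATDAWN"
```

With two rotors the substitution repeats only after 26 × 26 = 676 key presses; each additional rotor multiplies the period by 26.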
In order to be as easy to decipher as encipher, some rotor machines, most notably the Enigma machine, embodied an involutory (self-inverse) cipher: encrypting twice with the same settings recovers the original message (see involution).[citation needed]
The concept of a rotor machine occurred to a number of inventors independently at a similar time.
In 2003, it emerged that the first inventors were two Dutch naval officers, Theo A. van Hengel (1875–1939) and R. P. C. Spengler (1875–1955), in 1915 (De Leeuw, 2003). Previously, the invention had been ascribed to four inventors working independently and at much the same time: Edward Hebern, Arvid Damm, Hugo Koch and Arthur Scherbius.
In the United States, Edward Hugh Hebern built a rotor machine using a single rotor in 1917. He became convinced he would get rich selling such a system to the military, the Hebern Rotor Machine, and produced a series of different machines with one to five rotors. His success was limited, however, and he went bankrupt in the 1920s. He sold a small number of machines to the US Navy in 1931.
In Hebern's machines the rotors could be opened up and the wiring changed in a few minutes, so a single mass-produced system could be sold to a number of users who would then produce their own rotor keying. Decryption consisted of taking out the rotor(s) and turning them around to reverse the circuitry. Unknown to Hebern, William F. Friedman of the US Army's SIS promptly demonstrated a flaw in the system that allowed the ciphers from it, and from any machine with similar design features, to be cracked with enough work.
Another early rotor machine inventor was Dutchman Hugo Koch, who filed a patent on a rotor machine in 1919. At about the same time in Sweden, Arvid Gerhard Damm invented and patented another rotor design. However, the rotor machine was ultimately made famous by Arthur Scherbius, who filed a rotor machine patent in 1918. Scherbius later went on to design and market the Enigma machine.
The most widely known rotor cipher device is the German Enigma machine used during World War II, of which there were a number of variants.
The standard Enigma model, Enigma I, used three rotors. At the end of the stack of rotors was an additional, non-rotating disk, the "reflector", wired such that the input was connected electrically back out to another contact on the same side and thus was "reflected" back through the three-rotor stack to produce the ciphertext.
When current was sent into most other rotor cipher machines, it would travel through the rotors and out the other side to the lamps. In the Enigma, however, it was "reflected" back through the disks before going to the lamps. The advantage of this was that there was nothing that had to be done to the setup in order to decipher a message; the machine was "symmetrical".
The Enigma's reflector guaranteed that no letter could be enciphered as itself, so an A could never turn back into an A. This helped Polish and, later, British efforts to break the cipher. (See Cryptanalysis of the Enigma.)
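The reflector's two properties, that it is self-inverse and that it has no fixed point, are easy to check directly. The pairing below is the one commonly published for Enigma's reflector B; any pairing of the 26 letters into 13 pairs behaves the same way.

```python
# A reflector pairs up the 26 contacts, so it is an involution with no
# fixed points: applying it twice returns the input, and no letter maps
# to itself. Because the return path retraces the rotor stack, the whole
# machine inherits both properties.
PAIRS = ["AY", "BR", "CU", "DH", "EQ", "FS", "GL",
         "IP", "JX", "KN", "MO", "TZ", "VW"]
reflect = {}
for a, b in PAIRS:
    reflect[a], reflect[b] = b, a

assert all(reflect[reflect[c]] == c for c in reflect)  # self-inverse
assert all(reflect[c] != c for c in reflect)           # never maps to itself
```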
Scherbius joined forces with a mechanical engineer named Ritter and formed Chiffriermaschinen AG in Berlin before demonstrating Enigma to the public in Bern in 1923, and then in 1924 at the World Postal Congress in Stockholm. In 1927 Scherbius bought Koch's patents, and in 1928 they added a plugboard, essentially a non-rotating manually rewireable fourth rotor, on the front of the machine. After the death of Scherbius in 1929, Willi Korn was in charge of further technical development of Enigma.
As with other early rotor machine efforts, Scherbius had limited commercial success. However, the German armed forces, responding in part to revelations that their codes had been broken during World War I, adopted the Enigma to secure their communications. The Reichsmarine adopted Enigma in 1926, and the German Army began to use a different variant around 1928.
The Enigma (in several variants) was the rotor machine that Scherbius's company and its successor, Heimsoeth & Rinke, supplied to the German military and to such agencies as the Nazi party security organization, the SD.
The Poles broke the German Army Enigma beginning in December 1932, not long after it had been put into service. On July 25, 1939, just five weeks before Hitler's invasion of Poland, the Polish General Staff's Cipher Bureau shared its Enigma-decryption methods and equipment with the French and British as the Poles' contribution to the common defense against Nazi Germany. Dilly Knox had already broken Spanish Nationalist messages on a commercial Enigma machine in 1937 during the Spanish Civil War.
A few months later, using the Polish techniques, the British began reading Enigma ciphers in collaboration with Polish Cipher Bureau cryptologists who had escaped Poland, overrun by the Germans, to reach Paris. The Poles continued breaking German Army Enigma—along with Luftwaffe Enigma traffic—until work at Station PC Bruno in France was shut down by the German invasion of May–June 1940.
The British continued breaking Enigma and, assisted eventually by the United States, extended the work to German Naval Enigma traffic (which the Poles had been reading before the war), most especially to and from U-boats during the Battle of the Atlantic.
During World War II (WWII), both the Germans and Allies developed additional rotor machines. The Germans used the Lorenz SZ 40/42 and Siemens and Halske T52 machines to encipher teleprinter traffic which used the Baudot code; this traffic was known as Fish to the Allies. The Allies developed the Typex (British) and the SIGABA (American). During the war the Swiss began development on an Enigma improvement which became the NEMA machine, which was put into service after World War II. There was even a Japanese-developed variant of the Enigma in which the rotors sat horizontally; it was apparently never put into service. The Japanese PURPLE machine was not a rotor machine, being built around electrical stepping switches, but was conceptually similar.
Rotor machines continued to be used even in the computer age. The KL-7 (ADONIS), an encryption machine with 8 rotors, was widely used by the U.S. and its allies from the 1950s until the 1980s. The last Canadian message encrypted with a KL-7 was sent on June 30, 1983. The Soviet Union and its allies used a 10-rotor machine called Fialka well into the 1970s.
A unique rotor machine called the Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This unusual device is inspired by Enigma, but makes use of 40-point rotors, allowing letters, numbers and some punctuation; each rotor contains 509 parts.
A software implementation of a rotor machine was used in the crypt command that was part of early UNIX operating systems. It was among the first software programs to run afoul of U.S. export regulations which classified cryptographic implementations as munitions.
|
https://en.wikipedia.org/wiki/Rotor_machine
|
Television encryption, often referred to as scrambling, is encryption used to control access to pay television services, usually cable, satellite, or Internet Protocol television (IPTV) services.
Pay television exists to make revenue from subscribers, and some viewers attempt to receive the service without paying. The prevention of piracy on cable and satellite networks has been one of the main factors in the development of Pay TV encryption systems.
The early cable-based Pay TV networks used no security. This led to problems with people connecting to the network without paying. Consequently, some methods were developed to frustrate these self-connectors. The early Pay TV systems for cable television were based on a number of simple measures. The most common of these was a channel-based filter that would effectively stop the channel being received by those who had not subscribed. These filters would be added or removed according to the subscription. As the number of television channels on these cable networks grew, the filter-based approach became increasingly impractical.
Other techniques, such as adding an interfering signal to the video or audio, began to be used as the simple filter solutions were easily bypassed. As the technology evolved, addressable set-top boxes became common, and more complex scrambling techniques such as digital encryption of the audio, or video cut and rotate (where a line of video is cut at a particular point and the two parts are then reordered around this point), were applied to signals.
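Cut and rotate is simple to state precisely: a line of video is split at a secret point and the two halves are swapped. A minimal sketch follows; in practice the cut point would be driven by a keystream from the conditional-access system rather than being fixed.

```python
def cut_and_rotate(line, cut):
    # Cut the line at `cut` and swap the two pieces.
    return line[cut:] + line[:cut]

line = list(range(10))               # stand-in for one line of video samples
scrambled = cut_and_rotate(line, 4)  # 4 stands in for a keystream-derived cut
restored = cut_and_rotate(scrambled, len(line) - 4)
assert restored == line
```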
Encryption was used to protect satellite-distributed feeds for cable television networks. Some of the systems used for cable feed distribution were expensive. As the DTH market grew, less secure systems began to be used. Many of these systems (such as Oak Orion) were variants of cable television scrambling systems that affected the synchronisation part of the video, inverted the video signal, or added an interfering frequency to the video. All of these analogue scrambling techniques were easily defeated.
In France, Canal+ launched a scrambled service in 1984, claiming that the system was unbreakable. Unfortunately for the company, the electronics magazine "Radio Plans" published a design for a pirate decoder within a month of the channel launching.[citation needed]
In the US, HBO was one of the first services to encrypt its signal, using the VideoCipher II system. In Europe, FilmNet scrambled its satellite service in September 1986, thus creating one of the biggest markets for pirate satellite TV decoders in the world, because the system that FilmNet used was easily hacked. One of FilmNet's main attractions was that it would screen hard-core porn films on various nights of the week. The VideoCipher II system proved somewhat more difficult to hack, but it eventually fell prey to the pirates.[citation needed]
Analog and digital pay television have several conditional access systems that are used for pay-per-view (PPV) and other subscriber related services. Originally, analog-only cable television systems relied on set-top boxes to control access to programming, as television sets originally were not "cable-ready". Analog encryption was typically limited to premium channels such as HBO or channels with adult-oriented content, and various proprietary video synchronization suppression methods were used to control access to programming. In some of these systems, the necessary sync signal was carried on a separate subcarrier; in others, the sync polarity was simply inverted. In the latter case, when used in conjunction with PAL, a SECAM L television with a cable tuner could partially descramble the signal, though only in black and white and with inverted luminance, so a multi-standard television supporting PAL L was preferred in order to decode the colour as well. This workaround, however, caused part of the video signal to be received as audio, so a second television, preferably one without auto-mute, was needed for audio decoding. Analog set-top boxes have largely been replaced by digital set-top boxes that can directly control access to programming as well as digitally decrypt signals.
Although several analog encryption types were tested in the early 1980s, VideoCipher II became the de facto analog encryption standard that C-Band satellite pay TV channels used. Early adopters of VCII were HBO and Cinemax, encrypting full time beginning in January 1986; Showtime and The Movie Channel, beginning in May 1986; and CNN and Headline News, in July of that year. VideoCipher II was replaced as a standard by VCII+ in the early 1990s, and it in turn was replaced by VCII+ RS. A VCII-capable satellite receiver is required to decode VCII channels. VCII has largely been replaced by DigiCipher 2 in North America. Originally, VCII-based receivers had a separate modem technology for pay-per-view access known as Videopal. This technology became fully integrated in later-generation analog satellite television receivers.
DigiCipher 2 is General Instrument's proprietary video distribution system, based upon MPEG-2. A 4DTV satellite receiver is required to decode DigiCipher 2 channels. In North America, most digital cable programming is accessed with DigiCipher 2-based set-top boxes. DigiCipher 2 may also be referred to as DCII.
PowerVu, developed by Scientific Atlanta, is another popular digital encryption technology, used for non-residential purposes. Other commercial digital encryption systems are Nagravision (by Kudelski), Viaccess (by France Telecom), and Wegener.
In the US, both the DirecTV and Dish Network direct-broadcast satellite systems use digital encryption standards for controlling access to programming. DirecTV uses VideoGuard, a system designed by NDS. DirecTV has been cracked in the past, which led to an abundance of cracked smartcards being available on the black market. However, a switch to a stronger form of smart card (the P4 card) wiped out DirecTV piracy soon after it was introduced. Since then, no public cracks have become available. Dish Network uses Nagravision (2 and 3) encryption. The now-defunct VOOM and PrimeStar services both used General Instruments/Motorola equipment, and thus used a DigiCipher 2-based system very similar to that of earlier 4DTV large dish satellite systems.[citation needed]
In Canada, both the Bell Satellite TV and Shaw Direct DBS systems use digital encryption standards. Bell TV, like Dish Network, uses Nagravision for encryption. Shaw Direct, meanwhile, uses a DigiCipher 2-based system, due to their equipment also being sourced from General Instruments/Motorola.
Zenith Electronics developed an encryption scheme for their Phonevision system of the 1950s and 1960s.
Oak Orion was originally used for analog satellite television pay channel access in Canada. It was innovative for its time as it used digital audio. It has been completely replaced by digital encryption technologies. Oak Orion was used by Sky Channel in Europe between 1982 and 1987, and by M-Net in South Africa from 1986 to 2018. Oak developed related encryption systems for cable TV and broadcast pay TV services such as ONTV.
Leitch Viewguard is an analog encryption standard used primarily by broadcast TV networks in North America. Its method of scrambling is to re-order the lines of video (line shuffle), while leaving the audio intact. Terrestrial broadcast CATV systems in Northern Canada used this conditional access system for many years. It is only occasionally used today on some satellite circuits because of its similarity to D2-MAC and B-MAC.
There was also a version that encrypted the audio using a digital audio stream in the horizontal blanking interval, like the VCII system. One US network used this for its affiliate feeds and would turn off the analog subcarriers on the satellite feed.
B-MAC has not been used for DTH applications since PrimeStar switched to an all-digital delivery system in the mid-1990s.
VideoCrypt was an analogue cut-and-rotate scrambling system with a smartcard-based conditional access system. It was used in the 1990s by several European satellite broadcasters, mainly British Sky Broadcasting. It was also used by Sky New Zealand (Sky-NZ). One version of VideoCrypt (VideoCrypt-S) had the capability of scrambling sound. A soft encryption option was also available, where the encrypted video could be transmitted with a fixed key and any VideoCrypt decoder could decode it.
RITC Discret 11 is a system based on horizontal video line delay and audio scrambling. The start point of each line of video was pseudorandomly delayed by either 0 ns, 902 ns, or 1804 ns. First used in 1984 by the French channel Canal Plus, it was widely compromised after the December 1984 issue of "Radio Plans" magazine printed decoder plans.[4] The BBC also used a later revision of the encryption system intended for use outside of France, Discret 12, in the late 1980s, as part of testing the use of off-air hours for encrypted specialist programming, with BMTV (British Medical Television) being broadcast on BBC Two.[5] This would ultimately lead to the launch of the scrambled BBC Select service in the early 1990s.[6]
Used by European channel FilmNet, the SATPAC interfered with the horizontal and vertical synchronisation signals and transmitted a signal containing synchronisation and authorisation data on a separate subcarrier. The system was first used in September 1986 and saw many upgrades as it was easily compromised by pirates. By September 1992, FilmNet changed to D2-MAC EuroCrypt.
Another system added an interfering sine wave of a frequency of approximately 93.75 kHz to the video signal, about six times the horizontal refresh frequency. It had optional sound scrambling using spectrum inversion. It was used in the UK by the BBC for its world service broadcasts and by the now-defunct UK movie channel "Premiere".
Used by German/Swiss channel Teleclub in the early 1990s, this system employed various methods such as video inversion, modification of synchronisation signals, and a pseudo line delay effect.
EuroCrypt is a conditional access system using the D2-MAC standard. Developed mainly by France Telecom, the system was smartcard-based. The encryption algorithm in the smartcard was based on DES. It was one of the first smart-card-based systems to be compromised.
An older Nagravision system for scrambling analogue satellite and terrestrial television programs was used in the 1990s, for example by the German pay-TV broadcaster Premiere. In this line-shuffling system, 32 lines of the PAL TV signal are temporarily stored in both the encoder and decoder and read out in permuted order under the control of a pseudorandom number generator. A smartcard security microcontroller (in a key-shaped package) decrypts data that is transmitted during the blanking intervals of the TV signal and extracts the random seed value needed for controlling the random number generation. The system also permitted the audio signal to be scrambled by inverting its spectrum at 12.8 kHz using a frequency mixer.
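The line-shuffling step can be sketched as follows. This illustrates only the principle, not Nagravision's actual permutation generator, and an ordinary PRNG stands in for the smart-card-supplied seed.

```python
import random

def shuffle_lines(lines, seed):
    # Permute the stored block of lines under PRNG control.
    order = list(range(len(lines)))
    random.Random(seed).shuffle(order)
    return [lines[i] for i in order], order

def unshuffle_lines(scrambled, order):
    # The decoder, knowing the seed, applies the inverse permutation.
    out = [None] * len(scrambled)
    for dst, src in enumerate(order):
        out[src] = scrambled[dst]
    return out

block = [f"line {i}" for i in range(32)]          # 32 lines, as in the text
scrambled, order = shuffle_lines(block, seed=12345)
assert unshuffle_lines(scrambled, order) == block
```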
|
https://en.wikipedia.org/wiki/Television_encryption
|
Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no intrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example using tokens created from random numbers.[3] A one-way cryptographic function is used to convert the original data into tokens, making it difficult to recreate the original data without access to the tokenization system's resources.[4] To deliver such services, the system maintains a vault database of tokens that are connected to the corresponding sensitive data. Protecting the system vault is vital to the system, and improved processes must be put in place to offer database integrity and physical security.[5]
The tokenization system must be secured and validated using security best practices[6] applicable to sensitive data protection, secure storage, audit, authentication and authorization. The tokenization system provides data processing applications with the authority and interfaces to request tokens, or detokenize back to sensitive data.
The security and risk-reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize back to redeem sensitive data under strict security controls. The token generation method must be proven to have the property that there is no feasible means through direct attack, cryptanalysis, side-channel analysis, token mapping table exposure, or brute force techniques to reverse tokens back to live data.
Replacing live data with tokens in systems is intended to minimize exposure of sensitive data to those applications, stores, people and processes, reducing risk of compromise or accidental exposure and unauthorized access to sensitive data. Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Tokenization systems may be operated in-house within a secure isolated segment of the data center, or as a service from a secure service provider.
Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (PAN) is replaced with a surrogate value called a token. A PAN may be linked to a reference number through the tokenization process. In this case, the merchant simply has to retain the token and a reliable third party controls the relationship and holds the PAN. The token may be created independently of the PAN, or the PAN can be used as part of the data input to the tokenization technique. The communication between the merchant and the third-party supplier must be secure to prevent an attacker from intercepting to gain the PAN and the token.[7]
De-tokenization[8] is the reverse process of redeeming a token for its associated PAN value. The security of an individual token relies predominantly on the infeasibility of determining the original PAN knowing only the surrogate value".[9] The choice of tokenization as an alternative to other techniques such as encryption will depend on varying regulatory requirements, interpretation, and acceptance by respective auditing or assessment entities. This is in addition to any technical, architectural or operational constraint that tokenization imposes in practical use.
The concept of tokenization, as adopted by the industry today, has existed since the first currency systems emerged centuries ago as a means to reduce risk in handling high-value financial instruments by replacing them with surrogate equivalents.[10][11][12] In the physical world, coin tokens have a long history of use replacing the financial instrument of minted coins and banknotes. In more recent history, subway tokens and casino chips found adoption for their respective systems to replace physical currency and cash handling risks such as theft. Exonumia and scrip are terms synonymous with such tokens.
In the digital world, similar substitution techniques have been used since the 1970s as a means to isolate real data elements from exposure to other data systems. In databases, for example, surrogate key values have been used since 1976 to isolate data associated with the internal mechanisms of databases and their external equivalents for a variety of uses in data processing.[13][14] More recently, these concepts have been extended to consider this isolation tactic to provide a security mechanism for the purposes of data protection.
In the payment card industry, tokenization is one means of protecting sensitive cardholder data in order to comply with industry standards and government regulations.[15]
Tokenization was applied to payment card data by Shift4 Corporation[16] and released to the public during an industry Security Summit in Las Vegas, Nevada in 2005.[17] The technology is meant to prevent the theft of credit card information in storage. Shift4 defines tokenization as: “The concept of using a non-decryptable piece of data to represent, by reference, sensitive or secret data. In payment card industry (PCI) context, tokens are used to reference cardholder data that is managed in a tokenization system, application or off-site secure facility.”[18]
To protect data over its full lifecycle, tokenization is often combined with end-to-end encryption to secure data in transit to the tokenization system or service, with a token replacing the original data on return. For example, to avoid the risks of malware stealing data from low-trust systems such as point of sale (POS) systems, as in the Target breach of 2013, cardholder data encryption must take place prior to card data entering the POS and not after. Encryption takes place within the confines of a security-hardened and validated card reading device and data remains encrypted until received by the processing host, an approach pioneered by Heartland Payment Systems[19] as a means to secure payment data from advanced threats, now widely adopted by industry payment processing companies and technology companies.[20] The PCI Council has also specified end-to-end encryption (certified point-to-point encryption—P2PE) for various service implementations in various PCI Council Point-to-point Encryption documents.
The process of tokenization consists of the following steps: an application sends the sensitive data, together with authentication information, to the tokenization system; the system verifies the request, generates a token, and stores the token-to-data mapping in the vault; the token is returned to the application, which uses it in place of the sensitive data; detokenization reverses the exchange under strict access controls for the small set of applications authorized to recover the original value.
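A vault-based tokenization flow can be reduced to a few lines. The sketch below is illustrative only: the names are invented, and a production system would add authentication, authorization, auditing, and hardened storage around every call.

```python
import secrets

class TokenVault:
    """Toy vault mapping random tokens to sensitive values."""

    def __init__(self):
        self._vault = {}                     # token -> sensitive value

    def tokenize(self, sensitive):
        token = secrets.token_hex(16)        # random, so not reversible
        self._vault[token] = sensitive       # mapping lives only in the vault
        return token

    def detokenize(self, token):
        # In a real system this path is restricted to authorized callers.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert vault.detokenize(token) == "4111 1111 1111 1111"
```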
Tokenization systems share several components according to established standards.
Tokenization and “classic” encryption effectively protect data if implemented properly, and a computer security system may use both. While similar in certain regards, tokenization and classic encryption differ in a few key aspects. Both are cryptographic data security methods and they essentially have the same function; however, they do so with differing processes and have different effects on the data they are protecting.
Tokenization is a non-mathematical approach that replaces sensitive data with non-sensitive substitutes without altering the type or length of data. This is an important distinction from encryption because changes in data length and type can render information unreadable in intermediate systems such as databases. Tokenized data can still be processed by legacy systems, which makes tokenization more flexible than classic encryption.
In many situations, the encryption process is a constant consumer of processing power, hence such a system needs significant expenditure on specialized hardware and software.[4]
Another difference is that tokens require significantly less computational resources to process. With tokenization, specific data is kept fully or partially visible for processing and analytics while sensitive information is kept hidden. This allows tokenized data to be processed more quickly and reduces the strain on system resources. This can be a key advantage in systems that rely on high performance.
In comparison to encryption, tokenization technologies reduce time, expense, and administrative effort while enabling teamwork and communication.[4]
There are many ways that tokens can be classified; however, there is currently no unified classification. Tokens can be single- or multi-use, cryptographic or non-cryptographic, reversible or irreversible, authenticable or non-authenticable, and various combinations thereof.
In the context of payments, the difference between high and low value tokens plays a significant role.
High-value tokens (HVTs) serve as surrogates for actual PANs in payment transactions and are used as an instrument for completing a payment transaction. In order to function, they must look like actual PANs. Multiple HVTs can map back to a single PAN and a single physical credit card without the owner being aware of it. Additionally, HVTs can be limited to certain networks and/or merchants, whereas PANs cannot.
HVTs can also be bound to specific devices so that anomalies between token use, physical devices, and geographic locations can be flagged as potentially fraudulent. Blocking HVTs enhances efficiency by reducing the number of records that must be compared, cutting computational costs while maintaining accuracy and reducing record linkage.[23]
Low-value tokens (LVTs) also act as surrogates for actual PANs in payment transactions; however, they serve a different purpose. LVTs cannot be used by themselves to complete a payment transaction. In order for an LVT to function, it must be possible to match it back to the actual PAN it represents, albeit only in a tightly controlled fashion. Using tokens to protect PANs becomes ineffectual if the tokenization system is breached, therefore securing the tokenization system itself is extremely important.
First-generation tokenization systems use a database to map from live data to surrogate substitute tokens and back. This requires storage, management, and continuous backup for every new transaction added to the token database to avoid data loss. Another problem is ensuring consistency across data centers, requiring continuous synchronization of token databases. Significant consistency, availability and performance trade-offs, per the CAP theorem, are unavoidable with this approach. This overhead adds complexity to real-time transaction processing to avoid data loss and to assure data integrity across data centers, and also limits scale. Storing all sensitive data in one service creates an attractive target for attack and compromise, and introduces privacy and legal risk in the aggregation of data, particularly in the EU.
Another limitation of tokenization technologies is measuring the level of security for a given solution through independent validation. With the lack of standards, the latter is critical to establish the strength of tokenization offered when tokens are used for regulatory compliance. The PCI Council recommends independent vetting and validation of any claims of security and compliance: "Merchants considering the use of tokenization should perform a thorough evaluation and risk analysis to identify and document the unique characteristics of their particular implementation, including all interactions with payment card data and the particular tokenization systems and processes".[24]
The method of generating tokens may also have limitations from a security perspective. With concerns about security and attacks on random number generators, which are a common choice for the generation of tokens and token mapping tables, scrutiny must be applied to ensure proven and validated methods are used versus arbitrary design.[25][26] Random-number generators have limitations in terms of speed, entropy, seeding and bias, and security properties must be carefully analysed and measured to avoid predictability and compromise.
With tokenization's increasing adoption, new tokenization technology approaches have emerged to remove such operational risks and complexities and to enable increased scale suited to emerging big data use cases and high-performance transaction processing, especially in financial services and banking.[27] In addition to conventional tokenization methods, Protegrity provides additional security through its so-called "obfuscation layer". This creates a barrier that prevents not only regular users from accessing information they would not see but also privileged users who have access, such as database administrators.[28]
Stateless tokenization allows live data elements to be mapped to surrogate values randomly, without relying on a database, while maintaining the isolation properties of tokenization.
In November 2014, American Express released its token service, which meets the EMV tokenization standard.[29] Other notable examples of tokenization-based payment systems, according to the EMVCo standard, include Google Wallet, Apple Pay,[30] Samsung Pay, Microsoft Wallet, Fitbit Pay and Garmin Pay. Visa uses tokenization techniques to provide secure online and mobile shopping.[31]
Using blockchain, as opposed to relying on trusted third parties, it is possible to run highly accessible, tamper-resistant databases for transactions.[32][33] With the help of blockchain, tokenization is the process of converting the value of a tangible or intangible asset into a token that can be exchanged on the network.
This enables the tokenization of conventional financial assets, for instance, by transforming rights into a digital token backed by the asset itself using blockchain technology.[34]Besides that, tokenization enables the simple and efficient compartmentalization and management of data across multiple users. Individual tokens created through tokenization can be used to split ownership and partially resell an asset.[35][36]Consequently, only entities with the appropriate token can access the data.[34]
Numerous blockchain companies support asset tokenization. In 2019, eToro acquired Firmo and renamed it eToroX. Through its Token Management Suite, which is backed by USD-pegged stablecoins, eToroX enables asset tokenization.[37][38]
The tokenization of equity is facilitated by STOKR, a platform that links investors with small and medium-sized businesses. Tokens issued through the STOKR platform are legally recognized as transferable securities under European Union capital market regulations.[39]
Breakers enables tokenization of intellectual property, allowing content creators to issue their own digital tokens. Tokens can be distributed to a variety of project participants. Without intermediaries or a governing body, content creators can integrate reward-sharing features into the token.[39]
Building an alternative payments system requires a number of entities working together in order to deliver near-field communication (NFC) or other technology-based payment services to the end users. One of the issues is interoperability between the players, and to resolve this issue the role of trusted service manager (TSM) is proposed to establish a technical link between mobile network operators (MNO) and providers of services, so that these entities can work together. Tokenization can play a role in mediating such services.
The value of tokenization as a security strategy lies in the ability to replace a real card number with a surrogate (target removal) and the subsequent limitations placed on the surrogate card number (risk reduction). If the surrogate value can be used in an unlimited fashion or even in a broadly applicable manner, the token value gains as much value as the real credit card number. In these cases, the token may be secured by a second dynamic token that is unique for each transaction and also associated to a specific payment card. Examples of dynamic, transaction-specific tokens include cryptograms used in the EMV specification.
The Payment Card Industry Data Security Standard, an industry-wide set of guidelines that must be met by any organization that stores, processes, or transmits cardholder data, mandates that credit card data must be protected when stored.[40] Tokenization, as applied to payment card data, is often implemented to meet this mandate, replacing credit card and ACH numbers in some systems with a random value or string of characters.[41] Tokens can be formatted in a variety of ways.[42] Some token service providers or tokenization products generate the surrogate values in such a way as to match the format of the original sensitive data. In the case of payment card data, a token might be the same length as a Primary Account Number (bank card number) and contain elements of the original data such as the last four digits of the card number. When a payment card authorization request is made to verify the legitimacy of a transaction, a token might be returned to the merchant instead of the card number, along with the authorization code for the transaction. The token is stored in the receiving system while the actual cardholder data is mapped to the token in a secure tokenization system. Storage of tokens and payment card data must comply with current PCI standards, including the use of strong cryptography.[43]
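As a sketch of the format-preserving behaviour described above, the following hypothetical generator returns a random surrogate that keeps the PAN's length and last four digits. The token-to-PAN mapping must still be held in a secure vault, since the token itself carries no other information about the original number.

```python
import secrets

def format_preserving_token(pan):
    # Replace all but the last four digits with random digits, so the
    # token still looks like a card number to downstream systems.
    body = "".join(secrets.choice("0123456789") for _ in pan[:-4])
    return body + pan[-4:]

token = format_preserving_token("4111111111111111")
assert len(token) == 16 and token.endswith("1111")
```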
Tokenization is currently in standards definition in ANSI X9 as X9.119 Part 2. X9 is responsible for the industry standards for financial cryptography and data protection, including payment card PIN management, credit and debit card encryption, and related technologies and processes. The PCI Council has also stated support for tokenization in reducing risk in data breaches, when combined with other technologies such as Point-to-Point Encryption (P2PE) and assessments of compliance to PCI DSS guidelines.[44] Visa Inc. released Visa Tokenization Best Practices[45] for tokenization uses in credit and debit card handling applications and services. In March 2014, EMVCo LLC released its first payment tokenization specification for EMV.[46] PCI DSS is the most frequently utilized standard for tokenization systems used by payment industry players.[22]
Tokenization can render it more difficult for attackers to gain access to sensitive data outside of the tokenization system or service. Implementation of tokenization may simplify the requirements of the PCI DSS, as systems that no longer store or process sensitive data may have a reduction of applicable controls required by the PCI DSS guidelines.
As a security best practice,[47] independent assessment and validation of any technologies used for data protection, including tokenization, must be in place to establish the security and strength of the method and implementation before any claims of privacy compliance, regulatory compliance, and data security can be made. This validation is particularly important in tokenization, as the tokens are shared externally in general use and thus exposed in high-risk, low-trust environments. The infeasibility of reversing a token or set of tokens to live sensitive data must be established using industry-accepted measurements and proofs by appropriate experts independent of the service or solution provider.
Not all organizational data can be tokenized, and it needs to be examined and filtered to determine what can be.
When databases are utilized on a large scale, they expand exponentially, causing the search process to take longer, restricting system performance, and increasing backup processes. A database that links sensitive information to tokens is called a vault. With the addition of new data, the vault's maintenance workload increases significantly.
For ensuring database consistency, token databases need to be continuously synchronized.
Apart from that, secure communication channels must be built between sensitive data and the vault so that data is not compromised on the way to or from storage.[4]
|
https://en.wikipedia.org/wiki/Tokenization_(data_security)
|
Dynamic Secrets is a novel key management scheme for secure communications. It was proposed by Sheng Xiao, Weibo Gong, and Don Towsley. The first academic publication was nominated for the INFOCOM 2010 best paper award.[1][2] In 2012 a monograph was published by Springer to extend this scheme to a framework.[3]
Dynamic secrets can be applied to all bi-directional communication systems and some single-directional communication systems to improve their communications security. There are three main benefits:
1. The encryption and authentication keys are rapidly and automatically updated for any pair of communication devices.
2. The key update process binds to the communication process and incurs negligible computing and bandwidth cost (a sketch of this binding follows the list).
3. The use of a cloned key in either authentication or in encrypted communication is guaranteed to be detected. This detection has no false positives and does not cost any computing/networking resources. (Dynamic secrets automatically break the secure communication whenever a cloned key and the legitimate key co-exist. Finding out who the attacker is, however, does take such resources.)
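The binding of key updates to the communication process can be caricatured as follows. This is a deliberately simplified sketch of the idea, not the authors' construction: each side folds the traffic it has exchanged into the next key, so a cloned key that does not observe every subsequent packet diverges from the legitimate one.

```python
import hashlib

def next_key(key, exchanged_traffic):
    # Both endpoints derive the next key from the current key and the
    # packets they just exchanged; missing any packet breaks the chain.
    return hashlib.sha256(key + exchanged_traffic).digest()

key = b"initial shared secret"
for packet in (b"hello", b"ack", b"data-0042"):
    key = next_key(key, packet)              # key evolves with the dialogue
```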
|
https://en.wikipedia.org/wiki/Dynamic_secrets
|
Hardware security is a discipline that originated in cryptographic engineering; it involves hardware design, access control, secure multi-party computation, secure key storage, ensuring code authenticity, and measures to ensure that the supply chain that built the product is secure, among other things.[1][2][3][4]
A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server.
Some providers in this discipline consider that the key difference between hardware security and software security is that hardware security is implemented using "non-Turing-machine" logic (raw combinatorial logic or simple state machines). One approach, referred to as "hardsec", uses FPGAs to implement non-Turing-machine security controls as a way of combining the security of hardware with the flexibility of software.[5]
Hardware backdoors are backdoors in hardware. Conceptually related, a hardware Trojan (HT) is a malicious modification of an electronic system, particularly in the context of an integrated circuit.[1][3]
A physical unclonable function (PUF)[6][7] is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict. Further, an individual PUF device must be easy to make but practically impossible to duplicate, even given the exact manufacturing process that produced it. In this respect it is the hardware analog of a one-way function. The name "physical unclonable function" might be a little misleading, as some PUFs are clonable, and most PUFs are noisy and therefore do not achieve the requirements for a function. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements.
Many attacks on sensitive data and resources reported by organizations occur from within the organization itself.[8]
|
https://en.wikipedia.org/wiki/Hardware_security
|
In cryptography, a key ceremony is a ceremony held to generate or use a cryptographic key.[1]
A public example is the signing of the DNS root zone for DNSSEC.[2]
In public-key cryptography and computer security, a root-key ceremony is a procedure for generating a unique pair of public and private root keys. Depending on the certificate policy of a system, the generation of the root keys may require notarization, legal representation, witnesses, or “key-holders” to be present. A commonly recognized practice is to follow the SAS 70 standard for root key ceremonies.[3]
At the heart of every certificate authority (CA) is at least one root key or root certificate and usually at least one intermediate root certificate. This “root key” is a unique key that must be generated for secure server interaction with a protective network, often called the "root zone". Prompts for information from this zone can be made through a server. The keys and certificates serve as the credentials and safeguards for the system. These digital certificates are made from a public key and a private key.[citation needed]
The following examples are at opposite ends of the security spectrum, and no two environments are the same. Depending on the level of protection required, different levels of security will be used.
Unless the information that is being accessed or transmitted is valued in terms of millions of dollars, it is generally adequate for the root key ceremony to be conducted within the security of the vendor's laboratory. The customer may opt to have the root key stored in a hardware security module, but in most cases, the safe storage of the root key on a CD or hard disk is considered acceptable. The root key is never stored on the CA server.
Machine Readable Travel Documents (MRTDs) require a much higher level of security. When conducting the root key ceremony, the government or organization will require rigorous security checks on all personnel in attendance. Those normally required to attend the key ceremony include a minimum of two administrators from the organization, two signatories from the organization, one lawyer, a notary, and two video camera operators, in addition to the CA software vendor's technical team.
The actual generation of the root key-pair typically occurs in a secure vault, with no external communication except for a single telephone line or intercom. Upon securing the vault, all present personnel must verify their identity using at least two legally recognized forms of identification. The lawyer in charge logs every person, transaction, and event in a root key ceremony log book, with each page notarized. From the moment the vault door closes until its reopening, everything is also video recorded. The lawyer and the organization's two signatories sign the recording, which is also notarized.
As part of the process, the root key is divided into up to twenty-one parts, each secured in a safe with a key and numerical lock. The keys are distributed to up to twenty-one people, and the numerical codes are distributed to another twenty-one people.[citation needed]
The CA vendors and organizations, such as RSA, VeriSign, and Digi-Sign, implement projects of this nature where conducting a root key ceremony would be a central component of their service.[4]
A hardware security module (HSM) key ceremony is a procedure in which the master key is generated and loaded to initialize the use of the HSM. The master key is at the top of the key hierarchy and is the root of trust to encrypt all other keys generated by the HSM. A master key is composed of at least two parts. Each key part is normally owned by a different person to enhance security.
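One common way to combine such key parts is bitwise XOR, so that no single custodian's part reveals anything about the final master key; whether a given HSM uses XOR or another combining function depends on the product. A minimal sketch:

```python
import secrets

def combine_parts(parts):
    # XOR all parts together; any subset short of the full set gives
    # no information about the resulting master key.
    key = bytes(len(parts[0]))
    for part in parts:
        key = bytes(a ^ b for a, b in zip(key, part))
    return key

part_1 = secrets.token_bytes(32)   # held by the first key officer
part_2 = secrets.token_bytes(32)   # held by the second key officer
master_key = combine_parts([part_1, part_2])
```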
The master key is stored within the HSM. IBM HSMs support two types of cryptographic mechanisms: the IBM Common Cryptographic Architecture (CCA) and Enterprise PKCS #11 (EP11).
Depending on the cryptographic mechanisms that the HSM supports and the key objects that are encrypted by the master key, different types of master keys are available.
For IBM Z and LinuxONE systems, the HSMs are used to perform cryptographic operations. The HSM has 85 domains, with each having its own set of master keys.[10] Before using the system, the HSM key ceremony must be conducted to load the master key securely and properly. For EP11 HSMs, the master key parts are stored on smart cards and loaded to the HSM with the Trusted Key Entry (TKE) workstation. For CCA HSMs, the master key parts can be stored either on smart cards or in files on the TKE workstation.
EP11 HSM is currently the only type of HSM that supports a key ceremony in the cloud. Both the cloud command-line interface (CLI) and smart cards are provided to load the master key parts to the cloud HSM. IBM Cloud Hyper Protect Crypto Services is presently the only key management service and cloud HSM in the cloud to provide an HSM key ceremony through both CLI and smart cards.[11]
Depending on the key ceremony types, the master key parts can be stored either on smart cards or in files on the workstation.
Smart cards are protected by a personal identification number (PIN) that must be entered on a smart card reader pad. Each master key part owner has one smart card, and only the owner knows its PIN. This solution ensures that the master key parts never appear in the clear outside the smart cards.
Compared with the smart card solution, the workstation solution does not require the procurement of smart card readers and smart cards. This solution uses workstation files encrypted with keys derived from a file password to store master key parts. When the keys are used, the file content is decrypted and appears temporarily in the clear in workstation memory.[12]
A key ceremony can be used to generate the private key for a cryptocurrency wallet.[13][14] For multiparty computation (MPC), key ceremonies are used to distribute key shares to participants securely.
It is also used in non-interactive zero-knowledge proof (zKP) protocols, specifically first-generation zk-SNARKs, as they need a trusted setup.[15][16]
|
https://en.wikipedia.org/wiki/Key_ceremony
|
In cryptography, a key distribution center (KDC) is part of a cryptosystem intended to reduce the risks inherent in exchanging keys. KDCs often operate in systems within which some users may have permission to use certain services at some times and not at others.
For instance, an administrator may have established a policy that only certain users may back up to tape. Many operating systems can control access to the tape facility via a "system service". If that system service further restricts the tape drive to operate only on behalf of users who can submit a service-granting ticket when they wish to use it, there remains only the task of distributing such tickets to the appropriately permitted users. If the ticket consists of (or includes) a key, one can then term the mechanism which distributes it a KDC. Usually, in such situations, the KDC itself also operates as a system service.
A typical operation with a KDC involves a request from a user to use some service. The KDC will use cryptographic techniques, mostly symmetric encryption, to authenticate requesting users as themselves. It will also check whether an individual user has the right to access the service requested. If the authenticated user meets all prescribed conditions, the KDC can issue a ticket permitting access.
In most (but not all) cases the KDC shares a key with each of the other parties.
The KDC produces a ticket based on a server key.
The client receives the ticket and submits it to the appropriate server.
The server can verify the submitted ticket and grant access to the user submitting it.
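The ticket flow above can be sketched with an off-the-shelf symmetric cipher. The example below uses Fernet from the Python cryptography package purely for illustration; the names and ticket format are invented, and real protocols such as Kerberos add timestamps, lifetimes, and mutual authentication.

```python
import json
from cryptography.fernet import Fernet

server_key = Fernet.generate_key()    # long-term key the KDC shares with the server
session_key = Fernet.generate_key()   # fresh key for this client/server session

# KDC: after authenticating the client, seal the session key into a
# ticket under the server's key; the client can carry but not forge it.
ticket = Fernet(server_key).encrypt(json.dumps(
    {"client": "alice", "session_key": session_key.decode()}).encode())

# Server: decrypting with its own key both verifies the ticket's origin
# and recovers the session key for the granted service.
claims = json.loads(Fernet(server_key).decrypt(ticket))
assert claims["client"] == "alice"
```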
Security systems using KDCs include Kerberos. (Actually, Kerberos partitions KDC functionality between two different agents: the AS (Authentication Server) and the TGS (Ticket Granting Service).)
|
https://en.wikipedia.org/wiki/Key_distribution_center
|
In cryptography, a key encapsulation mechanism (KEM) is a public-key cryptosystem that allows a sender to generate a short secret key and transmit it to a receiver securely, in spite of eavesdropping and intercepting adversaries.[1][2][3] Modern standards for public-key encryption of arbitrary messages are usually based on KEMs.[4][5]
A KEM allows a sender who knows a public key to simultaneously generate a short random secret key and an encapsulation or ciphertext of the secret key by the KEM's encapsulation algorithm.
The receiver who knows the private key corresponding to the public key can recover the same random secret key from the encapsulation by the KEM's decapsulation algorithm.[1][2][3]
The security goal of a KEM is to prevent anyone who does not know the private key from recovering any information about the encapsulated secret keys, even after eavesdropping or submitting other encapsulations to the receiver to study how the receiver reacts.[1][2][3]
The difference between a public-key encryption scheme and a KEM is that a public-key encryption scheme allows a sender to choose an arbitrary message from some space of possible messages, while a KEM chooses a short secret key at random for the sender.[1][2][3]
The sender may take the random secret key produced by a KEM and use it as a symmetric key for an authenticated cipher whose ciphertext is sent alongside the encapsulation to the receiver.
This serves to compose a public-key encryption scheme out of a KEM and a symmetric-key authenticated cipher in a hybrid cryptosystem.[1][2][3][5]
Most public-key encryption schemes such as RSAES-PKCS1-v1_5, RSAES-OAEP, and Elgamal encryption are limited to small messages[6][7] and are almost always used to encrypt a short random secret key in a hybrid cryptosystem anyway.[8][9][5] And although a public-key encryption scheme can conversely be converted to a KEM by choosing a random secret key and encrypting it as a message, it is easier to design and analyze a secure KEM than to design a secure public-key encryption scheme as a basis.
So most modern public-key encryption schemes are based on KEMs rather than the other way around.[10][5]
A KEM consists of three algorithms:[1][2][3][11][12] a key generation algorithm Gen, which takes no input and returns a pair (pk, sk) of a public key and a private key; an encapsulation algorithm Encap, which takes a public key pk and randomly generates a secret key k together with an encapsulation c of it; and a decapsulation algorithm Decap, which takes a private key sk and an encapsulation c, and returns either a secret key k or a failure indication.
A KEM is correct if, for any key pair (pk, sk) generated by Gen, decapsulating an encapsulation c returned by (k, c) := Encap(pk) with high probability yields the same key k, that is, Decap(sk, c) = k.[2][3][11][12]
Security of a KEM is quantified by its indistinguishability against chosen-ciphertext attack, IND-CCA, which is loosely how much better an adversary can do than a coin toss to tell whether, given a random key and an encapsulation, the key is encapsulated by that encapsulation or is an independent random key.[2][3][11][12]
Specifically, in the IND-CCA game: the challenger runs Gen to obtain a key pair (pk, sk) and gives pk to the adversary; the challenger then computes (k0, c) := Encap(pk), samples a second key k1 independently and uniformly at random, flips a fair coin b, and hands (kb, c) to the adversary; the adversary may query a decapsulation oracle for Decap(sk, c′) on any encapsulation c′ ≠ c of its choice; finally, the adversary outputs a guess b′.
The IND-CCA advantage of the adversary is |Pr[b′ = b] − 1/2|, that is, the probability beyond a fair coin toss at correctly distinguishing an encapsulated key from an independently randomly chosen key.
Traditional RSA encryption, with t-bit moduli and exponent e, is defined as follows:[13][14][15] key generation picks two large random primes p and q whose product n := pq is t bits long, publishes (n, e) as the public key, and keeps the private exponent d := e^−1 mod lcm(p − 1, q − 1); encryption of a message m, encoded as an integer in Z/nZ, is c := m^e mod n; decryption is m := c^d mod n.
This naive approach is totally insecure.
For example, since it is nonrandomized, it cannot be secure against even known-plaintext attack—an adversary can tell whether the sender is sending the message ATTACK AT DAWN versus the message ATTACK AT DUSK simply by encrypting those messages and comparing the ciphertext.
Even if m is always a random secret key, such as a 256-bit AES key, when e is chosen to optimize efficiency as e = 3, the message m can be computed from the ciphertext c simply by taking real-number cube roots, and there are many other attacks against plain RSA.[13][14] Various randomized padding schemes have been devised in attempts—sometimes failed, like RSAES-PKCS1-v1_5[13][17][18]—to make it secure for arbitrary short messages m.[13][14]
Since the message $m$ is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach called RSA-KEM is to choose an element of $\mathbb{Z}/n\mathbb{Z}$ at random and use that to derive a secret key using a key derivation function $H$, roughly as follows:[19][8]
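To encapsulate, the sender picks $t \in \mathbb{Z}/n\mathbb{Z}$ uniformly at random, sends the encapsulation $c := t^e \bmod n$, and uses the secret key $k := H(t)$; to decapsulate, the receiver recovers $t := c^d \bmod n$ and computes the same $k := H(t)$.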
This approach is simpler to implement, and provides a tighter reduction to the RSA problem, than padding schemes like RSAES-OAEP.[19]
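A minimal sketch of RSA-KEM, assuming an RSA key pair $(n, e, d)$ is already available and instantiating the key derivation function $H$ (hypothetically) with SHA-256:

```python
# Minimal RSA-KEM sketch. Assumes (n, e) is the RSA public key and
# (n, d) the private key; H is instantiated with SHA-256 here.
import hashlib, secrets

def H(t: int, n: int) -> bytes:
    byte_len = (n.bit_length() + 7) // 8
    return hashlib.sha256(t.to_bytes(byte_len, "big")).digest()

def rsa_kem_encap(n: int, e: int):
    t = secrets.randbelow(n)         # random element of Z/nZ
    c = pow(t, e, n)                 # encapsulation: textbook RSA applied to t
    return H(t, n), c                # (32-byte secret key, encapsulation)

def rsa_kem_decap(n: int, d: int, c: int) -> bytes:
    t = pow(c, d, n)                 # recover t with the private exponent
    return H(t, n)                   # same derived key
```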
Traditional Elgamal encryption is defined over a multiplicative subgroup of the finite field $\mathbb{Z}/p\mathbb{Z}$ with generator $g$ of order $q$ as follows:[20][21]
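The key generator picks a secret exponent $x$ at random and publishes the public key $y := g^x \bmod p$. To encrypt a message $m$, the sender picks $r$ at random, computes the shared secret $t := y^r \bmod p$, and sends the ciphertext $(c_1, c_2) := (g^r \bmod p, \; t m \bmod p)$. The receiver recovers $t := c_1^x \bmod p$ and returns $m := c_2 t^{-1} \bmod p$, rejecting any ciphertext whose elements lie outside the group generated by $g$.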
This meets the syntax of a public-key encryption scheme, restricted to messages in the space $\mathbb{Z}/p\mathbb{Z}$ (which limits it to messages of a few hundred bytes for typical values of $p$).
By validating ciphertexts in decryption, it avoids leaking bits of the private key $x$ through maliciously chosen ciphertexts outside the group generated by $g$.
However, this fails to achieve indistinguishability against chosen-ciphertext attack.
For example, an adversary having a ciphertext $c = (c_1, c_2)$ for an unknown message $m$ can trivially decrypt it by querying the decryption oracle for the distinct ciphertext $c' := (c_1, c_2 g)$, yielding the related plaintext $m' := mg \bmod p$, from which $m$ can be recovered as $m = m' g^{-1} \bmod p$.[20]
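The attack is easy to reproduce with a toy implementation; a minimal sketch with tiny, utterly insecure parameters ($p = 23$, $g = 5$, chosen purely for illustration; negative-exponent pow needs Python 3.8+):

```python
# Toy Elgamal over Z/23Z (insecure parameters, for illustration only),
# demonstrating the malleability-based chosen-ciphertext attack above.
import secrets

p, g = 23, 5                                    # toy group parameters

def keygen():
    x = 2 + secrets.randbelow(p - 3)            # private exponent x
    return pow(g, x, p), x                      # public y = g^x mod p

def encrypt(y, m):
    r = 2 + secrets.randbelow(p - 3)
    return pow(g, r, p), (m * pow(y, r, p)) % p # (c1, c2)

def decrypt(x, c1, c2):
    return (c2 * pow(c1, -x, p)) % p            # m = c2 * c1^(-x) mod p

y, x = keygen()
c1, c2 = encrypt(y, 7)                          # victim's ciphertext for m = 7
m_prime = decrypt(x, c1, (c2 * g) % p)          # oracle decrypts c' = (c1, c2*g)
recovered = (m_prime * pow(g, -1, p)) % p       # m = m' * g^(-1) mod p
assert recovered == 7
```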
Traditional Elgamal encryption can be adapted to the elliptic-curve setting, but it requires some way to reversibly encode messages as points on the curve, which is less trivial than encoding messages as integers mod $p$.[22]
Since the message $m$ is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach is to derive the secret key from the shared secret $t$ and dispense with $m$ and $c_2$ altogether, as a KEM, using a key derivation function $H$:[1]
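To encapsulate, the sender picks $r$ at random, sends the encapsulation $c := g^r \bmod p$, and uses the secret key $k := H(y^r \bmod p)$; to decapsulate, the receiver computes the same key as $k := H(c^x \bmod p)$, since $c^x \equiv g^{rx} \equiv y^r \pmod p$.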
When combined with an authenticated cipher to encrypt arbitrary bit string messages, the combination is essentially the Integrated Encryption Scheme.
Since this KEM only requires a one-way key derivation function to hash random elements of the group it is defined over, $\mathbb{Z}/p\mathbb{Z}$ in this case, and not a reversible encoding of messages, it is easy to extend to more compact and efficient elliptic curve groups for the same security, as in ECIES, the Elliptic Curve Integrated Encryption Scheme.
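A matching sketch of this KEM, reusing the toy parameters from the previous sketch and (hypothetically) instantiating $H$ with SHA-256:

```python
# Toy Elgamal-style KEM over Z/23Z (illustration only). H is a
# key-derivation function, instantiated here with SHA-256.
import hashlib, secrets

p, g = 23, 5                                   # toy group parameters

def H(t: int) -> bytes:
    return hashlib.sha256(t.to_bytes(16, "big")).digest()

def kem_encap(y):
    r = 2 + secrets.randbelow(p - 3)
    c = pow(g, r, p)                           # encapsulation c = g^r mod p
    return H(pow(y, r, p)), c                  # key k = H(y^r mod p)

def kem_decap(x, c):
    return H(pow(c, x, p))                     # same k, since c^x = y^r mod p

x = 2 + secrets.randbelow(p - 3)               # receiver's private key
y = pow(g, x, p)                               # receiver's public key
k, c = kem_encap(y)
assert kem_decap(x, c) == k                    # both sides share the same key
```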
|
https://en.wikipedia.org/wiki/Key_encapsulation
|
A lock is a mechanical or electronic fastening device that is released by a physical object (such as a key, keycard, fingerprint, RFID card, security token or coin), by supplying secret information (such as a number or letter permutation or password), by a combination thereof, or it may only be able to be opened from one side, such as a door chain.
A key is a device that is used to operate a lock (to lock or unlock it). A typical key is a small piece of metal consisting of two parts: the bit or blade, which slides into the keyway of the lock and distinguishes between different keys, and the bow, which is left protruding so that torque can be applied by the user. In its simplest implementation, a key operates one lock or set of locks that are keyed alike, a lock/key system where each similarly keyed lock requires the same, unique key.
The key serves as a security token for access to the locked area; locks are meant to allow only persons with the correct key to open them and gain access. In more complex mechanical lock/key systems, two different keys, one of which is known as the master key, serve to open the lock. Common metals include brass, plated brass, nickel silver, and steel. The act of opening a lock without a key is called lock picking.
Locks have been in use for over 6000 years, with one early example discovered in the ruins of Nineveh, the capital of ancient Assyria.[1] Locks such as this were developed into the Egyptian wooden pin lock, which consisted of a bolt, door fixture or attachment, and key. When the key was inserted, pins within the fixture were lifted out of drilled holes within the bolt, allowing it to move. When the key was removed, the pins fell part-way into the bolt, preventing movement.[2]
The warded lock was also present from antiquity and remains the most recognizable lock and key design in the Western world. The first all-metal locks appeared between the years 870 and 900, and are attributed to English craftsmen.[3] It is also said that the key was invented by Theodorus of Samos in the 6th century BC.[1]
The Romans invented metal locks and keys and the system of security provided by wards.[4]
Affluent Romans often kept their valuables in secure locked boxes within their households, and wore the keys as rings on their fingers. The practice had two benefits: It kept the key handy at all times, while signaling that the wearer was wealthy and important enough to have money and jewellery worth securing.[5]
A special type of lock, dating back to the 17th–18th century, although potentially older as similar locks date back to the 14th century, can be found in the Beguinage of the Belgian city Lier.[6][7] These locks are most likely Gothic locks, which were decorated with foliage, often in a V-shape surrounding the keyhole.[8] They are often called drunk man's locks, as these locks were, according to certain sources, designed in such a way that a person could still find the keyhole in the dark, although this might not be the case as the ornaments might have been purely aesthetic.[6][7] In more recent times similar locks have been designed.[9][10]
With the onset of theIndustrial Revolutionin the late 18th century and the concomitant development of precision engineering and component standardization, locks and keys were manufactured with increasing complexity and sophistication.[11]
Thelever tumbler lock, which uses a set of levers to prevent the bolt from moving in the lock, was invented byRobert Barronin 1778.[12]His double acting lever lock required the lever to be lifted to a certain height by having a slot cut in the lever, so lifting the lever too far was as bad as not lifting the lever far enough. This type of lock is still used today.[13]
The lever tumbler lock was greatly improved byJeremiah Chubbin 1818.[12]A burglary inPortsmouth Dockyardprompted theBritish Governmentto announce a competition to produce a lock that could be opened only with its own key.[5]Chubb developed theChubb detector lock, which incorporated anintegral security featurethat could frustrate unauthorized access attempts and would indicate to the lock's owner if it had been interfered with. Chubb was awarded £100 after a trainedlock-pickerfailed to break the lock after 3 months.[14]
In 1820, Jeremiah joined his brotherCharlesin starting their own lock company,Chubb. Chubb made various improvements to his lock: his 1824 improved design did not require a special regulator key to reset the lock; by 1847 his keys used six levers rather than four; and he later introduced a disc that allowed the key to pass but narrowed the field of view, hiding the levers from anybody attempting to pick the lock.[15]The Chubb brothers also received a patent for the first burglar-resistingsafeand began production in 1835.
The designs of Barron and Chubb were based on the use of movable levers, butJoseph Bramah, a prolific inventor, developed an alternative method in 1784. His lock used a cylindrical key with precise notches along the surface; these moved the metal slides that impeded the turning of the bolt into an exact alignment, allowing the lock to open. The lock was at the limits of the precision manufacturing capabilities of the time and was said by its inventor to be unpickable. In the same year Bramah started the Bramah Locks company at 124 Piccadilly, and displayed the "Challenge Lock" in the window of his shop from 1790, challenging "...the artist who can make an instrument that will pick or open this lock" for the reward of £200. The challenge stood for over 67 years until, at theGreat Exhibitionof 1851, the American locksmithAlfred Charles Hobbswas able to open the lock and, following some argument about the circumstances under which he had opened it, was awarded the prize. Hobbs' attempt required some 51 hours, spread over 16 days.
The earliest patent for a double-actingpin tumbler lockwas granted to American physician Abraham O. Stansbury in England in 1805,[16]but the modern version, still in use today, was invented by AmericanLinus Yale Sr.in 1848.[17]This lock design usedpinsof varying lengths to prevent the lock from opening without the correct key. In 1861,Linus Yale Jr.was inspired by the original 1840s pin-tumbler lock designed by his father, thus inventing and patenting a smaller flat key with serrated edges as well as pins of varying lengths within the lock itself, the same design of the pin-tumbler lock which still remains in use today.[18]The modern Yale lock is essentially a more developed version of the Egyptian lock.
Despite some improvement in key design since, the majority of locks today are still variants of the designs invented by Bramah, Chubb and Yale.
A warded lock uses a set of obstructions, or wards, to prevent the lock from opening unless the correct key is inserted. The key has notches or slots that correspond to the obstructions in the lock, allowing it to rotate freely inside the lock. Warded locks are typically reserved for low-security applications, as a well-designed skeleton key can successfully open a wide variety of warded locks.
The pin tumbler lock uses a set of pins to prevent the lock from opening unless the correct key is inserted. The key has a series of grooves on either side of the key's blade that limit the type of lock the key can slide into. As the key slides into the lock, the horizontal grooves on the blade align with the wards in the keyway, allowing or denying entry to the cylinder. A series of pointed teeth and notches on the blade, called bittings, then allow pins to move up and down until they are in line with the shear line of the inner and outer cylinder, allowing the cylinder or cam to rotate freely and the lock to open. An additional pin called the master pin is present between the key and driver pins in locks that accept master keys, to allow the plug to rotate at multiple pin elevations.
A wafer tumbler lock is similar to the pin tumbler lock and works on a similar principle. However, unlike the pin lock (where each pin consists of two or more pieces), each wafer is a single piece. The wafer tumbler lock is often incorrectly referred to as a disc tumbler lock, which uses an entirely different mechanism. The wafer lock is relatively inexpensive to produce and is often used in automobiles and cabinetry.
The disc tumbler lock or Abloy lock is composed of slotted rotating detainer discs.
The lever tumbler lock uses a set of levers to prevent the bolt from moving in the lock. In its simplest form, lifting the tumbler above a certain height will allow the bolt to slide past. Lever locks are commonly recessed inside wooden doors or used on some older forms of padlocks, including fire brigade padlocks.
A magnetic keyed lock is a locking mechanism whereby the key uses magnets as part of the locking and unlocking mechanism. A magnetic key uses one or more small magnets oriented so that their north and south poles combine to push or pull the lock's internal tumblers, releasing the lock.
An electronic lock works by means of an electric current and is usually connected to an access control system. In addition to the pin and tumbler used in standard locks, electronic locks connect the bolt or cylinder to a motor within the door using a part called an actuator. Types of electronic locks include the following:
A keycard lock operates with a flat card of similar dimensions as a credit card. In order to open the door, one needs to successfully match the signature within the keycard.
The lock in a typical remote keyless system operates with a smart key radio transmitter. The lock typically accepts a particular valid code only once, and the smart key transmits a different rolling code every time the button is pressed (a minimal sketch of the idea follows below).
Generally the car door can be opened with either a valid code by radio transmission, or with a (non-electronic) pin tumbler key.
The ignition switch may require a transponder car key to both open a pin tumbler lock and also transmit a valid code by radio transmission.
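A minimal sketch of the rolling-code idea, under simplified assumptions (a pre-shared secret, a monotonic counter, and a fixed look-ahead window; real automotive systems differ in detail):

```python
# Sketch of a rolling code: transmitter and receiver share a secret
# and a counter; each button press sends a fresh code, and the
# receiver accepts each code at most once within a look-ahead window.
import hmac, hashlib

SECRET = b"shared-device-secret"       # hypothetical pre-shared secret

def code_for(counter: int) -> bytes:
    msg = counter.to_bytes(8, "big")
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:8]

class Receiver:
    def __init__(self, window: int = 16):
        self.counter = 0               # highest counter accepted so far
        self.window = window

    def try_unlock(self, code: bytes) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c       # advance; replayed codes now fail
                return True
        return False

rx = Receiver()
assert rx.try_unlock(code_for(1))      # fresh code accepted
assert not rx.try_unlock(code_for(1))  # replayed code rejected
```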
A smart lock is an electromechanical lock that receives instructions to lock and unlock the door from an authorized device, using a cryptographic key and a wireless protocol. Smart locks have begun to be used more commonly in residential areas, often controlled with smartphones.[19][20] Smart locks are used in coworking spaces and offices to enable keyless office entry.[21] In addition, electronic locks cannot be picked with conventional tools.
Locksmithingis a traditional trade, and in most countries requires completion of anapprenticeship. The level of formal education required varies from country to country, from no qualifications required at all in the UK,[22]to a simple training certificate awarded by an employer, to a fulldiplomafrom anengineeringcollege. Locksmiths may be commercial (working out of a storefront), mobile (working out of a vehicle), institutional, or investigational (forensic locksmiths). They may specialize in one aspect of the skill, such as an automotive lock specialist, a master key system specialist or a safe technician. Many also act as security consultants, but not all security consultants have the skills and knowledge of a locksmith.[citation needed]
Historically, locksmiths constructed or repaired an entire lock, including its constituent parts. The rise of cheap mass production has made this less common; the vast majority of locks are repaired through like-for-like replacements, high-security safes and strongboxes being the most common exception. Many locksmiths also work on any existing door hardware, including door closers, hinges, electric strikes, and frame repairs, or serviceelectronic locksby making keys for transponder-equipped vehicles and implementing access control systems.
Although the fitting and replacement of keys remains an important part of locksmithing, modern locksmiths are primarily involved in the installation of high quality lock-sets and the design, implementation, and management of keying and key control systems. Locksmiths are frequently required to determine the level of risk to an individual or institution and then recommend and implement appropriate combinations of equipment and policies to create a "security layer" that exceeds the reasonable gain of an intruder.[citation needed]
Traditional key cutting is the primary method of key duplication. It is a subtractive process named after the metalworking process of cutting, in which a flat blank key is ground down to form the same shape as the template (original) key. The process roughly follows these stages:
Modern key cutting replaces the mechanical key-following aspect with a process in which the original key is scanned electronically, processed by software, stored, then used to guide a cutting wheel when a key is produced. The capability to store electronic copies of the key's shape allows key shapes to be stored for cutting by any party that has access to the key image.
Different key cutting machines are more or less automated, using different milling or grinding equipment, and follow the design of early 20th century key duplicators.
Key duplication is available in many retailhardware storesand as a service of the specialized locksmith, though the correct key blank may not be available. More recently, online services for duplicating keys have become available.
A keyhole (or keyway) is a hole or aperture (as in a door or lock) for receiving a key.[23] Lock keyway shapes vary widely with lock manufacturer, and many manufacturers have a number of unique profiles requiring a specifically milled key blank to engage the lock's tumblers.
Keys appear in various symbols and coats of arms, the best-known being that of theHoly See:[24]derived from the phrase inMatthew 16:19which promisesSaint Peter, in Roman Catholic tradition the firstpope, theKeys of Heaven. But this is by no means the only case.
Some works of art associate keys with the Greek goddess ofwitchcraft, known asHecate.[25]
ThePalestinian keyis the Palestinian collective symbol of their homes lost in theNakba, when more than half of the population ofMandatory Palestinewasexpelled or fled violence in 1948and were subsequently refused theright to return.[26][27][28]Since 2016, a Palestinian restaurant inDoha,Qatar, holds theGuinness World Recordfor the world's largest key – 2.7 tonnes and 7.8 × 3 meters.[29][30]
|
https://en.wikipedia.org/wiki/Key_management_(access_control)
|
A Java KeyStore (JKS) is a repository of security certificates – either authorization certificates or public key certificates – plus corresponding private keys, used for instance in TLS encryption.
In IBM WebSphere Application Server and Oracle WebLogic Server, a file with extension jks serves as a keystore.
The Java Development Kit maintains a CA keystore file named cacerts in the folder jre/lib/security. JDKs provide a tool named keytool[1] to manipulate the keystore. keytool has no functionality to extract the private key out of the keystore, but this is possible with third-party tools like jksExportKey, CERTivity,[2] Portecle[3] and KeyStore Explorer.[4]
|
https://en.wikipedia.org/wiki/Keystore
|
The Key Management Interoperability Protocol (KMIP) is an extensible communication protocol that defines message formats for the manipulation of cryptographic keys on a key management server. This facilitates data encryption by simplifying encryption key management. Keys may be created on a server and then retrieved, possibly wrapped by other keys. Both symmetric and asymmetric keys are supported, including the ability to sign certificates. KMIP also allows for clients to ask a server to encrypt or decrypt data, without needing direct access to the key.
The KMIP standard was first released in 2010. Clients and servers are commercially available from multiple vendors. The KMIP standard effort is governed by the OASIS standards body. Technical details can also be found on the official KMIP page[1] and KMIP wiki.[2]
A KMIP server stores and controls Managed Objects like symmetric and asymmetric keys, certificates, and user-defined objects. Clients then use the protocol for accessing these objects subject to a security model that is implemented by the servers. Operations are provided to create, locate, use, retrieve and update managed objects.
Each managed object comprises an immutable Value like a key-block containing a cryptographic key. These objects also have mutable Attributes which can be used for storing metadata about their keys. Some attributes are derived directly from the Value, like the cryptographic algorithm and key length. Other attributes are defined in the specification for the management of objects, like the Application-Specific Identifier, which is usually derived from tape-identification data. Additional identifiers can be defined by the server or client per application need.
Each object is identified by a unique and immutable object identifier that is generated by the server and is used for getting object values. Managed objects may also be given a number of mutable yet globally unique Name attributes which can be used for locating objects.
The types of managed-objects being managed by KMIP include:
The operations provided by KMIP include:
Each key has a cryptographic state defined by the National Institute of Standards and Technology (NIST). Keys are created in an Initial state, and must be Activated before they can be used. Keys may then be Deactivated and eventually Destroyed. A key may also be marked as Compromised.
Operations are provided for manipulating key state in conformance with the NIST life-cycle guidelines. A key's state may be interrogated using the State attribute or the attributes that record the dates of each transition, such as the Activation Date. Dates can be specified in the future, so that keys automatically become unavailable for specified operations when they expire.
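As an illustration, the life-cycle can be modelled as a small state machine; a minimal sketch with simplified state names and transitions (the exact NIST state set is richer than this):

```python
# Minimal sketch of a NIST-style key life-cycle as a state machine;
# state names and allowed transitions are simplified for illustration.
from enum import Enum, auto

class KeyState(Enum):
    PRE_ACTIVE = auto()
    ACTIVE = auto()
    DEACTIVATED = auto()
    COMPROMISED = auto()
    DESTROYED = auto()

ALLOWED = {
    KeyState.PRE_ACTIVE:  {KeyState.ACTIVE, KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.ACTIVE:      {KeyState.DEACTIVATED, KeyState.COMPROMISED},
    KeyState.DEACTIVATED: {KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.COMPROMISED: {KeyState.DESTROYED},
    KeyState.DESTROYED:   set(),
}

def transition(state: KeyState, new: KeyState) -> KeyState:
    if new not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state.name} -> {new.name}")
    return new

state = transition(KeyState.PRE_ACTIVE, KeyState.ACTIVE)   # activate a key
state = transition(state, KeyState.DEACTIVATED)            # later, deactivate it
```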
KMIP is a stateless protocol in which messages are sent from a client to a server and the client then awaits a reply. Each request may contain many operations, which enables the protocol to handle large numbers of keys efficiently. There are also advanced features for processing requests asynchronously.
The KMIP protocol specifies several different types of encodings. The main one is a type–length–value encoding of messages, called TTLV (Tag, Type, Length, Value). Nested TTLV structures allow for encoding of complex, multi-operation messages in a single binary message.
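A simplified sketch of a TTLV encoder in Python: the field widths follow the published TTLV layout (3-byte tag, 1-byte type, 4-byte big-endian length, value padded to an 8-byte boundary), but the tag numbers below are illustrative rather than taken from the specification:

```python
# Simplified TTLV (Tag, Type, Length, Value) encoder: 3-byte tag,
# 1-byte type, 4-byte big-endian length, value padded to 8 bytes.
# Illustrative only, not a conformant KMIP implementation.
import struct

STRUCTURE, INTEGER, TEXT_STRING = 0x01, 0x02, 0x07   # a few TTLV item types

def ttlv(tag: int, item_type: int, value: bytes) -> bytes:
    header = tag.to_bytes(3, "big") + bytes([item_type])
    header += struct.pack(">I", len(value))           # length before padding
    padding = b"\x00" * (-len(value) % 8)             # pad to 8-byte boundary
    return header + value + padding

# Structures nest by concatenating encoded items as their value.
name = ttlv(0x420008, TEXT_STRING, b"MyKeyName")      # tag numbers illustrative
request = ttlv(0x420078, STRUCTURE, name)             # hypothetical enclosing structure
```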
There are also well defined XML and JSON encodings of the protocol for environments where binary is not appropriate.
All of these encodings are expected to be transmitted using the TLS protocol, in order to ensure integrity and security. However, it is also possible to register and retrieve keys that are wrapped (encrypted) using another key on the server, which provides an additional level of security.
KMIP provides standardized mechanisms to manage a KMIP server by suitably authorized administrative clients using System Objects.
User objects can be created and authorized to perform specific operations on specific managed objects. Both Managed Objects and Users can be assigned to groups, and those groups can form a hierarchy which facilitates efficient management of complex operating environments.
KMIP also provides a provisioning system that facilitates providing end points with credentials using simple one-time passwords.
Default values of attributes can be provided, so that simple clients need not specify cryptographic and other parameters. For example, an administrative user might specify that all "SecretAgent" keys should be 192-bit AES keys with CBC block chaining. A client then only needs to specify that they wish to create a "SecretAgent" key to have those defaults provided. It is also possible to enforce constraints on key parameters that implement security policy.
KMIP also defines a set of profiles, which are subsets of the KMIP specification showing common usage for a particular context. A particular KMIP implementation is said to be conformant to a profile when it fulfills all the requirements set forth in a profile specification document. OASIS has put forth various profiles describing the requirements for compliance towards storage arrays[3] and tape libraries,[4] but any organization can create a profile.
PKCS#11 is a C API used to control a hardware security module. PKCS#11 provides cryptographic operations to encrypt and decrypt, as well as operations for simple key management. There is a considerable amount of overlap between the PKCS#11 API and the KMIP protocol.
The two standards were originally developed independently. PKCS#11 was created by RSA Security, but the standard is now also governed by an OASIS technical committee. It is the stated objective of both the PKCS#11 and KMIP committees to align the standards where practical. For example, the PKCS#11 Sensitive and Extractable attributes have been added to KMIP version 1.4. Many individuals are on the technical committees of both KMIP and PKCS#11.
KMIP 2.0 also provides a standardized mechanism to transport PKCS#11 messages from clients to servers. This can be used to target different PKCS#11 implementations without the need to recompile the programs that use it.
The OASIS KMIP Technical Committee maintains a list of known KMIP implementations, which can be found atOASIS Known Implementations. As of December 2024, there are 35 implementations and 91 KMIP products in this list.
The KMIP standard is defined using a formal specification document, test cases, and profiles put forth by theOASISKMIP technical committee. These documents are publicly available on the OASIS website.
Vendors demonstrate interoperability during a process organized by the OASIS KMIP technical committee in the months before each RSA security conference. These demonstrations are informally known asinterops. KMIP interops have been held every year since 2010. The following chart shows the Normalised number of general test cases and profile tests of all interop participants that have participated in two or more interops since 2014.[5]
The 2025 interop tested Post-Quantum Cryptography (PQC) algorithms that will be required as quantum computers become more powerful.[6]
As an example, the XML encoding of a request might Locate a key named "MyKeyName" and return its value wrapped in a different key with ID "c6d14516-4d38-0644-b810-1913b9aef4da". (TTLV is the more common wire protocol, but XML is more human readable.)
Documentation is freely available from the OASIS website.[7]This includes the formal technical specification and a usage guide to assist people that are unfamiliar with the specification. A substantial library of test cases is also provided. These are used to test the interoperability of clients and servers, but they also provide concrete examples of the usage of each standard KMIP feature.
|
https://en.wikipedia.org/wiki/KMIP
|
The KSD-64[A] Crypto Ignition Key (CIK) is an NSA-developed EEPROM chip packed in a plastic case that looks like a toy key. The model number is due to its storage capacity — 64 kibibits (65,536 bits, or 8 KiB), enough to store multiple encryption keys. Most frequently it was used in key-splitting applications: either the encryption device or the KSD-64 alone is worthless, but together they can be used to make encrypted connections. It was also used alone as a fill device for transfer of key material, as for the initial seed key loading of an STU-III secure phone.
Newer systems, such as the Secure Terminal Equipment, use the Fortezza PC card as a security token instead of the KSD-64. The KSD-64 was withdrawn from the market in 2014. Over one million were produced in its 30-year life.[1]
The CIK is a small device that can be loaded with a 128-bit sequence that is different for each user. When the device is removed from the machine, that sequence is automatically added (mod 2) to the unique key in the machine, leaving the key stored in encrypted form. When it is reattached, the unique key in the machine is decrypted, and it is ready to operate in the normal way. The analogy with an automobile ignition key is close, hence the name. If the key is lost, the user is still safe unless the finder or thief can match it with the user's machine. In case of loss, the user gets a new CIK, effectively changing the lock in the cipher machine, and gets back in business.
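A minimal sketch of this split-key arrangement: addition mod 2 on bit strings is XOR, so either share alone is statistically independent of the operational key.

```python
# Sketch of the KSD-64 split-key idea: the operational key is stored
# XORed (added mod 2) with the CIK sequence, so neither the machine's
# stored value nor the CIK alone reveals anything about the key.
import secrets

def split(machine_key: bytes):
    cik = secrets.token_bytes(len(machine_key))              # sequence on the CIK
    stored = bytes(a ^ b for a, b in zip(machine_key, cik))  # left in the machine
    return stored, cik

def recombine(stored: bytes, cik: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(stored, cik))         # XOR again recovers it

key = secrets.token_bytes(16)     # a 128-bit key, as in the CIK described above
stored, cik = split(key)
assert recombine(stored, cik) == key
```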
The ignition key sequence can be provided in several ways. In the first crypto-equipment to use the idea (the KY-70), the CIK is loaded with its sequence at NSA and supplied to each user like any other item of keying material. Follow-on applications (as in the STU-II) use an even more clever scheme. The CIK device is simply an empty register which can be supplied with its unique sequence from the randomizer function of the parent machine itself. Not only that, each time the device is removed and re-inserted, it gets a brand new sequence. The effect of this procedure is to provide high protection against the covert compromise of the CIK, wherein a thief acquires the device, copies it, and replaces it unknown to its owner. The next morning (say), when the user inserts the device, it will receive a new sequence and the old copied one will be useless thereafter. If the thief has gotten to the machine during the night, he may be able to operate on the net; but when the user attempts to start up in the morning, the user's device will no longer work, thus flagging the fact that penetration has occurred.
This concept appears particularly attractive in office environments where physical structures and guarding arrangements will not be sufficiently rigorous to assure that crypto-equipment cannot be accessed by unauthorized people.[2]
|
https://en.wikipedia.org/wiki/KSD-64
|
This glossary lists types of keys as the term is used in cryptography, as opposed to door locks. Terms that are primarily used by the U.S. National Security Agency are marked (NSA). For classification of keys according to their usage see cryptographic key types.
|
https://en.wikipedia.org/wiki/List_of_cryptographic_key_types
|
The National Security Agency (NSA) is an intelligence agency of the United States Department of Defense, under the authority of the director of national intelligence (DNI). The NSA is responsible for global monitoring, collection, and processing of information and data for global intelligence and counterintelligence purposes, specializing in a discipline known as signals intelligence (SIGINT). The NSA is also tasked with the protection of U.S. communications networks and information systems.[9][10] The NSA relies on a variety of measures to accomplish its mission, the majority of which are clandestine.[11] The NSA has roughly 32,000 employees.[12]
Originating as a unit to decipher coded communications inWorld War II, it was officially formed as the NSA by PresidentHarry S. Trumanin 1952. Between then and the end of the Cold War, it became the largest of theU.S. intelligence organizationsin terms of personnel and budget. Still, information available as of 2013 indicates that theCentral Intelligence Agency(CIA) pulled ahead in this regard, with a budget of $14.7 billion.[6][13]The NSA currently conductsworldwide mass data collectionand has been known to physicallybugelectronic systems as one method to this end.[14]The NSA is also alleged to have been behind such attack software asStuxnet, which severely damagedIran's nuclear program.[15][16]The NSA, alongside the CIA, maintains a physical presence in many countries across the globe; the CIA/NSA jointSpecial Collection Service(a highly classified intelligence team) inserts eavesdropping devices in high-value targets (such as presidential palaces or embassies). SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, [and] breaking".[17]
Unlike the CIA and the Defense Intelligence Agency (DIA), both of which specialize primarily in foreign human espionage, the NSA does not publicly conduct human intelligence gathering. The NSA is entrusted with assisting with, and coordinating, SIGINT elements for other government organizations—which an Executive Order prevents from engaging in such activities on their own.[18] As part of these responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which facilitates cooperation between the NSA and other U.S. defense cryptanalysis components. To further ensure streamlined communication between the signals intelligence community divisions, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.
The NSA's actions have been a matter of political controversy on several occasions, including its role in providing intelligence during the Gulf of Tonkin incident, which contributed to the escalation of U.S. involvement in the Vietnam War.[19] Declassified documents later revealed that the NSA misinterpreted or overstated signals intelligence, leading to reports of a second North Vietnamese attack that likely never occurred.[20] The agency has also received scrutiny for spying on anti–Vietnam War leaders and the agency's participation in economic espionage. In 2013, the NSA had many of its secret surveillance programs revealed to the public by Edward Snowden, a former NSA contractor. According to the leaked documents, the NSA intercepts and stores the communications of over a billion people worldwide, including United States citizens. The documents also revealed that the NSA tracks hundreds of millions of people's movements using cell phone metadata. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".[21]
The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after theU.S. Congressdeclared war on Germany inWorld War I. Acodeandcipherdecryption unit was established as the Cable and Telegraph Section, which was also known as the Cipher Bureau.[22]It was headquartered in Washington, D.C., and was part of the war effort under the executive branch without direct congressional authorization. During the war, it was relocated in the army's organizational chart several times. On July 5, 1917,Herbert O. Yardleywas assigned to head the unit. At that point, the unit consisted of Yardley and twocivilianclerks. It absorbed the Navy'scryptanalysisfunctions in July 1918. World War I ended onNovember 11, 1918, and the army cryptographic section of Military Intelligence (MI-8) moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.[23][24]
After the disbandment of theU.S. Armycryptographic section of military intelligence known as MI-8, the U.S. government created the Cipher Bureau, also known asBlack Chamber, in 1919. The Black Chamber was the United States' first peacetimecryptanalyticorganization.[25]Jointly funded by the Army and the State Department, the Cipher Bureau was disguised as aNew York Citycommercial codecompany; it produced and sold such codes for business use. Its true mission, however, was to break the communications (chiefly diplomatic) of other nations. At theWashington Naval Conference, it aided American negotiators by providing them with the decrypted traffic of many of the conference delegations, including theJapanese. The Black Chamber successfully persuadedWestern Union, the largest U.S.telegramcompany at the time, as well as several other communications companies, to illegally give the Black Chamber access to cable traffic of foreign embassies and consulates.[26]Soon, these companies publicly discontinued their collaboration. Despite the Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of StateHenry L. Stimson, who defended his decision by stating, "Gentlemen do not read each other's mail."[27]
DuringWorld War II, theSignal Intelligence Service(SIS) was created to intercept and decipher the communications of theAxis powers.[28]When the war ended, the SIS was reorganized as theArmy Security Agency(ASA), and it was placed under the leadership of the Director of Military Intelligence.[28]
On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA).[28]This organization was originally established within theU.S. Department of Defenseunder the command of theJoint Chiefs of Staff.[29]The AFSA was tasked with directing the Department of Defense communications and electronic intelligence activities, except those of U.S.military intelligenceunits.[29]However, the AFSA was unable to centralizecommunications intelligenceand failed to coordinate with civilian agencies that shared its interests, such as theDepartment of State, theCentral Intelligence Agency(CIA) and theFederal Bureau of Investigation(FBI).[29]In December 1951, PresidentHarry S. Trumanordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency.[30]
TheNational Security Councilissued a memorandum of October 24, 1952, that revisedNational Security Council Intelligence Directive (NSCID) 9. On the same day, Truman issued a second memorandum that called for the establishment of the NSA.[31]The actual establishment of the NSA was done by a November 4 memo byRobert A. Lovett, theSecretary of Defense, changing the name of the AFSA to the NSA, and making the new agency responsible for all communications intelligence.[32]Since President Truman's memo was aclassifieddocument,[31]the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy, the U.S. intelligence community referred to the NSA as "No Such Agency".[33]
In the 1960s, the NSA played a key role in expanding American commitment to theVietnam Warby providing evidence of aNorth Vietnameseattack on the American Naval destroyerUSSMaddoxduring theGulf of Tonkin incident.[34]A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of SenatorsFrank ChurchandHoward Baker, as well as key leaders of thecivil rights movement, includingMartin Luther King Jr., and prominent U.S. journalists and athletes who criticized theVietnam War.[35]However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal".[35]
The NSA mounted a major effort to secure tactical communications among U.S. armed forces during the war, with mixed success. The NESTOR family of compatible secure voice systems it developed was widely deployed during the Vietnam War, with about 30,000 NESTOR sets produced. However, a variety of technical and operational problems limited their use, allowing the North Vietnamese to exploit and intercept U.S. communications.[36]: Vol I, p.79
In the aftermath of theWatergate scandal, a congressional hearing in 1975 led by SenatorFrank Church[37]revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency,Government Communications Headquarters(GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam war leaders such asJane Fondaand Dr.Benjamin Spock.[38]The NSA tracked these individuals in a secret filing system that was destroyed in 1974.[39]Following the resignation of PresidentRichard Nixon, there were several investigations into suspected misuse of FBI, CIA and NSA facilities.[40]SenatorFrank Churchuncovered previously unknown activity,[40]such as a CIA plot (ordered by the administration of PresidentJohn F. Kennedy) to assassinateFidel Castro.[41]The investigation also uncovered NSA's wiretaps on targeted U.S. citizens.[42]After the Church Committee hearings, theForeign Intelligence Surveillance Actof 1978 was passed. This was designed to limit the practice ofmass surveillance in the United States.[40]
In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of theBerlin discotheque bombing. TheWhite Houseasserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. PresidentRonald Reagancited as a justification for the1986 United States bombing of Libya.[43][44]
In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'.[45]That year, the NSA founded theNSA Hall of Honor, a memorial at theNational Cryptologic Museumin Fort Meade, Maryland.[46]The memorial is a, "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology".[46]NSA employees must be retired for more than fifteen years to qualify for the memorial.[46]
NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs for $3 million to get the system running again (some incoming traffic was also directed instead to Britain'sGCHQfor the time being). DirectorMichael Haydencalled the outage a "wake-up call" for the need to invest in the agency's infrastructure.[47]
In the 1990s the defensive arm of the NSA—the Information Assurance Directorate (IAD)—started working more openly; the first public technical talk by an NSA scientist at a major cryptography conference was J. Solinas' presentation on efficientElliptic Curve Cryptographyalgorithms at Crypto 1997.[48]The IAD's cooperative approach to academia and industry culminated in its support for atransparent processfor replacing the outdatedData Encryption Standard(DES) by anAdvanced Encryption Standard(AES). Cybersecurity policy expertSusan Landauattributes the NSA's harmonious collaboration with industry and academia in the selection of the AES in 2000—and the Agency's support for the choice of a strong encryption algorithm designed by Europeans rather than by Americans—toBrian Snow, who was the Technical Director of IAD and represented the NSA as cochairman of the Technical Working Group for the AES competition, and Michael Jacobs, who headed IAD at the time.[49]: 75
After theterrorist attacks of September 11, 2001, the NSA believed that it had public support for a dramatic expansion of its surveillance activities.[50]According toNeal KoblitzandAlfred Menezes, the period when the NSA was a trusted partner with academia and industry in the development of cryptographic standards started to come to an end when, as part of the change in the NSA in the post-September 11 era, Snow was replaced as Technical Director, Jacobs retired, and IAD could no longer effectively oppose proposed actions by the offensive arm of the NSA.[51]
In the aftermath of theSeptember 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cell phones.ThinThreadcontained advanceddata miningcapabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was canceled when Michael Hayden choseTrailblazer, which did not include ThinThread's privacy system.[52]
Trailblazer Projectramped up in 2002 and was worked on byScience Applications International Corporation(SAIC),Boeing,Computer Sciences Corporation,IBM, andLitton Industries. Some NSAwhistleblowerscomplained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoDInspectors General. The project was canceled in early 2004.Turbulencestarted in 2005. It was developed in small, inexpensive "test" pieces, rather than one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, like injectingmalwareinto remote computers. Congress criticized Turbulence in 2007 for having similar bureaucratic problems as Trailblazer.[53]It was to be a realization of information processing at higher speeds in cyberspace.[54]
The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractorEdward Snowden. On 4 September 2020, the NSA's surveillance program was ruled unlawful by theUS Court of Appeals. The court also added that the US intelligence leaders, who publicly defended it, were not telling the truth.[55]
NSA'seavesdroppingmission includes radio broadcasting, both from various organizations and individuals, the Internet, telephone calls, and other intercepted forms of communication. Its secure communications mission includes military, diplomatic, and all other sensitive, confidential, or secret government communications.[56]
According to a 2010 article inThe Washington Post, "every day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases."[57]
Because of its listening task, NSA/CSS has been heavily involved incryptanalyticresearch, continuing the work of predecessor agencies which had broken many World War IIcodesandciphers(see, for instance,Purple,Venona project, andJN-25). In 2004, NSACentral Security Serviceand theNational Cyber Security Divisionof theDepartment of Homeland Security(DHS) agreed to expand the NSA Centers of Academic Excellence in Information Assurance Education Program.[58]
As part of theNational Security Presidential Directive54/Homeland Security Presidential Directive 23 (NSPD 54), signed on January 8, 2008, by President Bush, the NSA became the lead agency to monitor and protect all of the federal government's computer networks fromcyber-terrorism.[10]A part of the NSA's mission is to serve as acombat support agencyfor the Department of Defense.[59]
Operations by the National Security Agency can be divided into three types:
"Echelon" was created in the incubator of theCold War.[60]Today it is alegacy system, and several NSA stations are closing.[61]NSA/CSS, in combination with the equivalent agencies in the United Kingdom (Government Communications Headquarters), Canada (Communications Security Establishment), Australia (Australian Signals Directorate), and New Zealand (Government Communications Security Bureau), otherwise known as theUKUSAgroup,[62]was reported to be in command of the operation of the so-calledECHELONsystem. Its capabilities were suspected to include the ability to monitor a large proportion of the world's transmitted civilian telephone, fax, and data traffic.[63]
During the early 1970s, the first of what became more than eight large satellite communications dishes were installed at Menwith Hill.[64]Investigative journalistDuncan Campbellreported in 1988 on the "ECHELON" surveillance program, an extension of theUKUSA Agreementon global signals intelligenceSIGINT, and detailed how the eavesdropping operations worked.[65]On November 3, 1999, the BBC reported that they had confirmation from the Australian Government of the existence of a powerful "global spying network" code-named Echelon, that could "eavesdrop on every single phone call, fax or e-mail, anywhere on the planet" with Britain and the United States as the chief protagonists. They confirmed that Menwith Hill was "linked directly to the headquarters of the US National Security Agency (NSA) at Fort Meade in Maryland".[66]NSA's United States Signals Intelligence Directive 18 (USSID 18) strictly prohibited the interception or collection of information about "...U.S. persons, entities, corporations or organizations...." without explicit written legal permission from theUnited States Attorney Generalwhen the subject is located abroad, or theForeign Intelligence Surveillance Courtwhen within U.S. borders. Alleged Echelon-related activities, including its use for motives other than national security, including political andindustrial espionage, received criticism from countries outside the UKUSA alliance.[67]
The NSA was also involved in planning to blackmail people with "SEXINT", intelligence gained about a potential target's sexual activity and preferences. Those targeted had not committed any apparent crime nor were they charged with one.[68]To support itsfacial recognitionprogram, the NSA is intercepting "millions of images per day".[69]TheReal Time Regional Gatewayis a data collection program introduced in 2005 in Iraq by the NSA during theIraq Warthat consisted of gathering all electronic communication, storing it, then searching and otherwise analyzing it. It was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques.[70]This "collect it all" strategy introduced by NSA director,Keith B. Alexander, is believed byGlenn GreenwaldofThe Guardianto be the model for the comprehensive worldwide mass archiving of communications which NSA is engaged in as of 2013.[71]
A dedicated unit of the NSA locates targets for theCIAfor extrajudicial assassination in the Middle East.[72]The NSA has also spied extensively on the European Union, the United Nations, and numerous governments including allies and trading partners in Europe, South America, and Asia.[73][74]In June 2015,WikiLeakspublished documents showing that NSA spied onFrenchcompanies.[75]WikiLeaks also published documents showing that NSA spied on federal German ministries since the 1990s.[76][77]Even Germany's ChancellorAngela Merkel's cellphones and phones of her predecessors had been intercepted.[78]
In June 2013, Edward Snowden revealed that between February 8 and March 8, 2013, the NSA collected about 124.8 billion telephone data items and 97.1 billion computer data items throughout the world, as was displayed in charts from an internal NSA tool codenamed Boundless Informant. Initially, it was reported that some of these data reflected eavesdropping on citizens in countries like Germany, Spain, and France,[79] but it later became clear that those data were collected by European agencies during military missions abroad and were subsequently shared with the NSA.
In 2013, reporters uncovered a secret memo that claims the NSA created and pushed for the adoption of theDual EC DRBGencryption standard that contained built-in vulnerabilities in 2006 to the United StatesNational Institute of Standards and Technology(NIST), and theInternational Organization for Standardization(aka ISO).[80][81]This memo appears to give credence to previous speculation by cryptographers atMicrosoft Research.[82]Edward Snowdenclaims that the NSA often bypasses the encryption process altogether by lifting information before encryption or after decryption.[81]
XKeyscorerules (as specified in a file xkeyscorerules100.txt, sourced by German TV stationsNDRandWDR, who claim to have excerpts from its source code) reveal that the NSA tracks users of privacy-enhancing software tools, includingTor; an anonymous email service provided by theMIT Computer Science and Artificial Intelligence Laboratory(CSAIL) in Cambridge, Massachusetts; and readers of theLinux Journal.[83][84]
Linus Torvalds, the creator of the Linux kernel, joked during a LinuxCon keynote on September 18, 2013, that the NSA, which originated SELinux, wanted a backdoor in the kernel.[85] However, Linus's father, a Member of the European Parliament (MEP), later revealed that the NSA had in fact approached his son:[86]
When my oldest son was asked the same question: "Has he been approached by the NSA about backdoors?" he said "No", but at the same time he nodded. Then he was sort of in the legal free. He had given the right answer, everybody understood that the NSA had approached him.
IBM Noteswas the first widely adopted software product to usepublic key cryptographyfor client-server and server–server authentication and encryption of data. Until US laws regulating encryption were changed in 2000, IBM andLotuswere prohibited from exporting versions of Notes that supportedsymmetric encryptionkeys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed the export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sectorindustrial espionage, but not against spying by the US government.[88][89]
While it is assumed that foreign transmissions terminating in the U.S. (such as a non-U.S. citizen accessing a U.S. website) subject non-U.S. citizens to NSA surveillance, recent research into boomerang routing has raised new concerns about the NSA's ability to surveil the domestic Internet traffic of foreign countries.[21]Boomerang routing occurs when an Internet transmission that originates and terminates in a single country transits another. Research at theUniversity of Torontohas suggested that approximately 25% of Canadian domestic traffic may be subject to NSA surveillance activities as a result of the boomerang routing of CanadianInternet service providers.[21]
A document included in the NSA files released withGlenn Greenwald's bookNo Place to Hidedetails how the agency'sTailored Access Operations(TAO) and other NSA units gained access to hardware equipment. They interceptedrouters,servers, and othernetwork hardwareequipment being shipped to organizations targeted for surveillance and installing covert implant firmware onto them before they are delivered. This was described by an NSA manager as "some of the most productive operations in TAO because they preposition access points into hard target networks around the world."[90]
Computers that were seized by the NSA due tointerdictionare often modified with a physical device known as Cottonmouth.[91]It is a device that can be inserted at the USB port of a computer to establish remote access to the targeted machine. According to the NSA's Tailored Access Operations (TAO) group implant catalog, after implanting Cottonmouth, the NSA can establish anetwork bridge"that allows the NSA to load exploit software onto modified computers as well as allowing the NSA to relay commands and data between hardware and software implants."[92]
NSA's mission, as outlined inExecutive Order 12333in 1981, is to collect information that constitutes "foreign intelligence or counterintelligence" whilenot"acquiring information concerning the domestic activities ofUnited States persons". NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the United States while confining its activities within the United States to the embassies and missions of foreign nations.[93]
The appearance of a 'Domestic Surveillance Directorate' of the NSA was soon exposed as a hoax in 2013.[94][95]NSA's domestic surveillance activities are limited by the requirements imposed by theFourth Amendment to the U.S. Constitution. TheForeign Intelligence Surveillance Courtfor example held in October 2011, citing multiple Supreme Court precedents, that the Fourth Amendment prohibitions against unreasonable searches and seizures apply to the contents of all communications, whatever the means, because "a person's private communications are akin to personal papers."[96]However, these protections do not apply to non-U.S. persons located outside of U.S. borders, so the NSA's foreign surveillance efforts are subject to far fewer limitations under U.S. law.[97]The specific requirements for domestic surveillance operations are contained in theForeign Intelligence Surveillance Actof 1978 (FISA), which does not extend protection to non-U.S. citizens located outside ofU.S. territory.[97]
George W. Bush, president during the9/11 terrorist attacks, approved thePatriot Actshortly after the attacks to take anti-terrorist security measures.Titles 1,2, and9specifically authorized measures that would be taken by the NSA. These titles granted enhanced domestic security against terrorism, surveillance procedures, and improved intelligence, respectively. On March 10, 2004, there was a debate between President Bush and White House CounselAlberto Gonzales, Attorney GeneralJohn Ashcroft, and Acting Attorney GeneralJames Comey. The Attorneys General were unsure if the NSA's programs could be considered constitutional. They threatened to resign over the matter, but ultimately the NSA's programs continued.[98]On March 11, 2004, President Bush signed a new authorization for mass surveillance of Internet records, in addition to the surveillance of phone records. This allowed the president to be able to override laws such as theForeign Intelligence Surveillance Act, which protected civilians from mass surveillance. In addition to this, President Bush also signed that the measures of mass surveillance were also retroactively in place.[99][100]
One such surveillance program, authorized by the U.S. Signals Intelligence Directive 18 of President George Bush, was the Highlander Project undertaken for the National Security Agency by the U.S. Army513th Military Intelligence Brigade. NSA relayed telephone (including cell phone) conversations obtained from ground, airborne, and satellite monitoring stations to various U.S. Army Signal Intelligence Officers, including the201st Military Intelligence Battalion. Conversations of citizens of the U.S. were intercepted, along with those of other nations.[101]Proponents of the surveillance program claim that the President hasexecutive authorityto order such action[citation needed], arguing that laws such as FISA are overridden by the President's Constitutional powers. In addition, some argued that FISA was implicitly overridden by a subsequent statute, theAuthorization for Use of Military Force, although the Supreme Court's ruling inHamdan v. Rumsfelddeprecates this view.[102]
Under thePRISMprogram, which started in 2007,[103][104]NSA gathers Internet communications from foreign targets from nine major U.S. Internet-based communication service providers:Microsoft,[105]Yahoo,Google,Facebook,PalTalk,AOL,Skype,YouTubeandApple. Data gathered include email, videos, photos,VoIPchats such asSkype, and file transfers.
Former NSA director General Keith Alexander claimed that in September 2009 the NSA preventedNajibullah Zaziand his friends from carrying out a terrorist attack.[106]However, no evidence has been presented demonstrating that the NSA has ever been instrumental in preventing a terrorist attack.[107][108][109][110]
FASCIAis adatabasecreated and used by the U.S. National Security Agency that contains trillions of device-location records that are collected from a variety of sources.[111]Its existence was revealed during the 2013global surveillance disclosurebyEdward Snowden.[112]
The FASCIA database stores various types of information, includingLocation Area Codes(LACs),Cell Tower IDs(CeLLIDs),Visitor Location Registers(VLRs),International Mobile Station Equipment Identity(IMEIs) andMSISDNs(Mobile Subscriber Integrated Services Digital Network-Numbers).[111][112]Over about seven months, more than 27terabytesof location data were collected and stored in the database.[113]
Besides the more traditional ways of eavesdropping to collect signals intelligence, the NSA is also engaged inhackingcomputers, smartphones, and their networks. A division that conducts such operations is theTailored Access Operations(TAO) division, which has been active since at least circa 1998.[114]
According to theForeign Policymagazine, "... the Office of Tailored Access Operations, or TAO, has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China."[115][116]In an interview withWiredmagazine, Edward Snowden said the Tailored Access Operations division accidentally causedSyria's internet blackout in 2012.[117]
The NSA is led by theDirector of the National Security Agency(DIRNSA), who also serves as Chief of theCentral Security Service(CHCSS) and Commander of theUnited States Cyber Command(USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by aDeputy Director, who is the highest-ranking civilian within the NSA/CSS. NSA also has anInspector General, head of the Office of the Inspector General (OIG);[118]aGeneral Counsel, head of the Office of the General Counsel (OGC); and a Director of Compliance, who is head of the Office of the Director of Compliance (ODOC).[119]TheNational Security Agency Office of Inspector Generalhas worked on cases in collaboration with theUnited States Department of Justiceand theCentral Intelligence Agency Office of Inspector General.[120]Unlike other intelligence organizations such as theCIAorDIA, the NSA has always been particularly reticent concerning its internal organizational structure.[citation needed]
As of the mid-1990s, the National Security Agency was organized into five directorates, each consisting of several groups or elements designated by a letter. There were, for example, the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and the G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, such as unit A5 for breaking Soviet codes and G6, the office for the Middle East, North Africa, Cuba, and Central and South America.[122][123]
As of 2013, NSA had about a dozen directorates, which are designated by a letter, although not all of them are publicly known.[124]
In the year 2000, a leadership team was formed consisting of the director, the deputy director, and the directors of the Signals Intelligence (SID), the Information Assurance (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team.[125]After President George W. Bush initiated thePresident's Surveillance Program(PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate.[126]
In 2016, a reorganization combined the Signals Intelligence Directorate with the Information Assurance Directorate into a Directorate of Operations.[127]
NSANet stands for National Security Agency Network and is the official NSAintranet.[128]It is a classified network,[129]for information up to the level ofTS/SCI[130]to support the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of theFive Eyespartnership. The management of NSANet has been delegated to theCentral Security ServiceTexas (CSSTEXAS).[131]
NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels that are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random, and downloading and printing of documents from NSANet are recorded.[132]In 1998, NSANet, along withNIPRNetandSIPRNet, had "significant problems with poor search capabilities, unorganized data, and old information".[133]In 2004, the network was reported to have used over twentycommercial off-the-shelfoperating systems.[134]Some universities that do highly sensitive research are allowed to connect to it.[135]The thousands of Top Secret internal NSA documents that were taken byEdward Snowdenin 2013 were stored in "a file-sharing location on the NSA's intranet site"; so, they could easily be read online by NSA personnel. Everyone with a TS/SCI clearance had access to these documents. As a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to safer storage locations.[136]
The NSA maintains at least two watch centers: the National Security Operations Center (NSOC), the agency's round-the-clock operations hub, and the NSA/CSS Threat Operations Center (NTOC), which focuses on cybersecurity threat monitoring.
The NSA has its own law enforcement team, known as the NSA Police (and formerly as the NSA Security Protective Force), which provides law enforcement services, emergency response, and physical security to its officials and properties.[138]
NSA Police are armed federal officers. NSA Police has a K9 division, which generally conducts explosive detection screening of mail, vehicles, and cargo entering NSA grounds.[139]They use marked vehicles to carry out patrols.[140]
The number of NSA employees is officially classified[4]but there are several sources providing estimates.
In 1961, the NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s, NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one-third, which led to a substantial loss of experience.[141]
In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities.[2]In 2012,John C. Inglis, the deputy director, said that the total number of NSA employees is "somewhere between 37,000 and one billion" as a joke,[4]and stated that the agency is "probably the biggest employer ofintroverts."[4]In 2013Der Spiegelstated that the NSA had 40,000 employees.[5]More widely, it has been described as the world's largest single employer ofmathematicians.[142]Some NSA employees form part of the workforce of theNational Reconnaissance Office(NRO), the agency that provides the NSA with satellitesignals intelligence. As of 2013 about 1,000system administratorswork for the NSA.[143]
The NSA received criticism early on in 1960 aftertwo agentshad defected to theSoviet Union. Investigations by theHouse Un-American Activities Committeeand a special subcommittee of theUnited States House Committee on Armed Servicesrevealed severe cases of ignorance of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices.[144]Nonetheless, security breaches reoccurred only a year later when in an issue ofIzvestiaof July 23, 1963, a former NSA employee published several cryptologic secrets. The very same day, an NSA clerk-messenger committedsuicideas ongoing investigations disclosed that he had sold secret information to the Soviets regularly. The reluctance of congressional houses to look into these affairs prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired."David Kahncriticized the NSA's tactics of concealing its doings as smug and the Congress' blind faith in the agency's right-doing as shortsighted and pointed out the necessity of surveillance by the Congress to prevent abuse of power.[144]
Edward Snowden's leaking of the existence ofPRISMin 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information.[143]Snowden claims he suggested such a rule in 2009.[145]
The NSA conductspolygraphtests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant pliant to coercion.[146]As part of the latter, historicallyEPQsor "embarrassing personal questions" about sexual behavior had been included in the NSA polygraph.[146]The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition, the NSA conducts periodic polygraph investigations to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA.[147]
There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior.[147]NSA's brochure states that the average test length is between two and four hours.[148]A 1983 report of theOffice of Technology Assessmentstated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions."[149]Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected.[146]
In 2010 the NSA produced a video explaining its polygraph process.[150]The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the Web site of theDefense Security Service. Jeff Stein ofThe Washington Postsaid that the video portrays "various applicants, or actors playing them—it's not clear—describing everything bad they had heard about the test, the implication being that none of it is true."[151]AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video.[150][152]George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian".[151]
In 2013, an article reported that after Edward Snowden revealed his identity that year, the NSA began polygraphing employees once per quarter.[153]
The number of exemptions from legal requirements has been criticized. When in 1964 Congress was hearing a bill giving the director of the NSA the power to fire at will any employee,The Washington Postwrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced based on anonymous allegations without the slightest opportunity to defend himself." Yet, the bill was accepted by an overwhelming majority.[144]Also, every person hired to a job in the US after 2007, at any private organization, state or federal government agency,mustbe reported to theNew Hire Registry, ostensibly to look forchild supportevaders,exceptthat employees of an intelligence agency may be excluded from reporting if the director deems it necessary for national security reasons.[154]
When the agency was first established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located inArlington HallinNorthern Virginia, which served as the headquarters of theU.S. Army's cryptographic operations.[155]Because theSoviet Unionhad detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee consideredFort Knox, butFort Meade,Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families.[156]
Construction of additional buildings began after the agency occupied buildings at Fort Meade in the late 1950s, which they soon outgrew.[156]In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders.[157]COMSEC remained in Washington, D.C., until its new building was completed in 1968.[156]In September 1986, the Operations 2A and 2B buildings, both copper-shielded to preventeavesdropping, opened with a dedication by PresidentRonald Reagan.[158]The four NSA buildings became known as the "Big Four."[158]The NSA director moved to 2B when it opened.[158]
Headquarters for the National Security Agency is located at 39°6′32″N 76°46′17″W in Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Fort Meade is about 20 mi (32 km) southwest of Baltimore,[159] and 25 mi (40 km) northeast of Washington, D.C.[160] The NSA has two dedicated exits off Baltimore–Washington Parkway. The eastbound exit from the Parkway (heading toward Baltimore) is open to the public and provides employee access to its main campus and public access to the National Cryptology Museum. The westbound exit (heading toward Washington) is labeled "NSA Employees Only".[161][162] The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.[163]
NSA is the largest employer in the state of Maryland, and two-thirds of its personnel work at Fort Meade.[164]Built on 350 acres (140 ha; 0.55 sq mi)[165]of Fort Meade's 5,000 acres (2,000 ha; 7.8 sq mi),[166]the site has 1,300 buildings and an estimated 18,000 parking spaces.[160][167]
The main NSA headquarters and operations building is whatJames Bamford, author ofBody of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building."[168]The building is covered with one-way dark glass, which is lined with copper shielding to prevent espionage by trapping in signals and sounds.[168]It contains 3,000,000 square feet (280,000 m2), or more than 68 acres (28 ha), of floor space; Bamford said that theU.S. Capitol"could easily fit inside it four times over."[168]
The facility has over 100 watchposts,[169]one of them being the visitor control center, a two-story area that serves as the entrance.[168]At the entrance, a white pentagonal structure,[170]visitor badges are issued to visitors and security clearances of employees are checked.[171]The visitor center includes a painting of the NSA seal.[170]
The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube".[172] The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank.[170] NSA headquarters has its own post office, fire department, and police force.[173][174][175]
The employees at the NSA headquarters reside in various places in theBaltimore-Washington area, includingAnnapolis, Baltimore, andColumbiain Maryland and the District of Columbia, including theGeorgetowncommunity.[176]The NSA maintains a shuttle service from theOdenton stationofMARCto its Visitor Control Center and has done so since 2005.[177]
Following a major power outage in 2000, The Baltimore Sun reported in 2003, and in follow-ups through 2007, that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened."[178]
On August 6, 2006,The Baltimore Sunreported that the NSA had completely maxed out the grid and that Baltimore Gas & Electric (BGE, nowConstellation Energy) was unable to sell them any more power.[179]NSA decided to move some of its operations to a new satellite facility. BGE provided NSA with 65 to 75megawattsat Fort Meade in 2007 and expected that an increase of 10 to 15 megawatts would be needed later that year.[180]In 2011, the NSA was Maryland's largest consumer of power.[164]In 2007, as BGE's largest customer, NSA bought as much electricity asAnnapolis, the capital city of Maryland.[178]One estimate put the potential for power consumption by the newUtah Data CenteratUS$40 million per year.[181]
In 1995,The Baltimore Sunreported that the NSA is the owner of the single largest group ofsupercomputers.[182]NSA held a groundbreaking ceremony at Fort Meade in May 2013 for its High-Performance Computing Center 2, expected to open in 2016.[183]Called Site M, the center has a 150-megawatt power substation, 14 administrative buildings and 10 parking garages.[173]It cost $3.2 billion and covers 227 acres (92 ha; 0.355 sq mi).[173]The center is 1,800,000 square feet (17 ha; 0.065 sq mi)[173]and initially uses 60 megawatts of electricity.[184]Increments II and III are expected to be completed by 2030 and would quadruple the space, covering 5,800,000 square feet (54 ha; 0.21 sq mi) with 60 buildings and 40 parking garages.[173]Defense contractorsare also establishing or expandingcybersecurityfacilities near the NSA and around theWashington metropolitan area.[173]
The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government.[185] NCSC was part of NSA,[186] and during the late 1980s and the 1990s, NSA and NCSC published Trusted Computer System Evaluation Criteria in a six-foot high Rainbow Series of books that detailed trusted computing and network platform specifications.[187] However, the Rainbow books were replaced by the Common Criteria in the early 2000s.[187]
NSA had facilities atFriendship Annex(FANX) inLinthicum, Maryland, which is a 20 to 25-minute drive from Fort Meade;[188]theAerospace Data FacilityatBuckley Space Force BaseinAurora, Colorado; NSA Texas in theTexas Cryptology CenteratLackland Air Force BaseinSan Antonio, Texas; NSA Georgia,Georgia Cryptologic Center, Fort Gordon (nowFort Eisenhower),Augusta, Georgia; NSA Hawaii,Hawaii Cryptologic CenterinHonolulu; theMultiprogram Research FacilityinOak Ridge, Tennessee, and elsewhere.[176][181]
In 2009, to protect its assets and access more electricity, NSA sought to decentralize and expand its existing facilities in Fort Meade and Menwith Hill,[189]the latter expansion expected to be completed by 2015.[190]
On January 6, 2011, a groundbreaking ceremony was held to begin construction on the NSA's first Comprehensive National Cyber-security Initiative (CNCI) data center, known as the "Utah Data Center" for short. The $1.5B data center was built at Camp Williams, Utah, located 25 miles (40 km) south of Salt Lake City, to help support the agency's National Cyber-security Initiative.[191] It was expected to be operational by September 2013;[181] construction finished in May 2019.[192]
In 2012, NSA collected intelligence from fourgeostationary satellites.[181]Satellite receivers were atRoaring Creek StationinCatawissa, PennsylvaniaandSalt Creek StationinArbuckle, California.[181]It operated ten to twentytapson U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia.[181]TheYakima Herald-Republiccited Bamford, saying that many of NSA's bases for its Echelon program were alegacy system, using outdated, 1990s technology.[61]In 2004, NSA closed its operations atBad Aibling Station(Field Station 81) inBad Aibling, Germany.[193]In 2012, NSA began to move some of its operations at Yakima Research Station,Yakima Training Center, in Washington state to Colorado, planning to leave Yakima closed.[194]During 2013, NSA also intended to close operations atSugar Grove, West Virginia.[61]
Following the UKUSA Agreement[195] between the Five Eyes that cooperated on signals intelligence and ECHELON,[196] NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Ontario, Canada; Misawa, Japan; and Waihopai and Tangimoana,[197] New Zealand.[198]
NSA operatesRAF Menwith Hillin North Yorkshire, United Kingdom, which was, according toBBC Newsin 2007, the largest electronic monitoring station in the world.[199]Planned in 1954, and opened in 1960, the base covered 562 acres (227 ha; 0.878 sq mi) in 1999.[200]The agency'sEuropean Cryptologic Center(ECC), with 240 employees in 2011, is headquartered at a US military compound inGriesheim, nearFrankfurtin Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East, and counterterrorism operations.[201]
In 2013, a new Consolidated Intelligence Center, also to be used by NSA, was being built at the headquarters of the United States Army Europe in Wiesbaden, Germany.[202] NSA's partnership with Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.[202]
Thailandis a "3rd party partner" of the NSA along with nine other nations.[203]These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports. Thailand is the site of at least two US SIGINT collection stations. One is at theUS EmbassyinBangkok, an NSA-CIAJoint Special Collection Service (JSCS) unit. It presumably eavesdrops on foreign consulates, embassies, governmental communications, and other targets of opportunity.[204]
The second installation is a FORNSAT (foreign satellite interception) station in the Thai city of Khon Kaen. It is codenamed INDRA, but has also been referred to as LEMONWOOD.[204] The station is approximately 40 hectares (99 acres) in size and consists of a large 3,700–4,600 m² (40,000–50,000 sq ft) operations building on the west side of the ops compound and four radome-enclosed parabolic antennas. Possibly two of the radome-enclosed antennas are used for SATCOM intercept and two antennas are used for relaying the intercepted material back to the NSA. There is also a PUSHER-type circularly-disposed antenna array (CDAA) just north of the ops compound.[205][206]

NSA activated Khon Kaen in October 1979. Its mission was to eavesdrop on the radio traffic of Chinese army and air force units in southern China, especially in and around the city of Kunming in Yunnan Province. In the late 1970s, the base consisted only of a small CDAA antenna array that was remote-controlled via satellite from the NSA listening post at Kunia, Hawaii, and a small force of civilian contractors from Bendix Field Engineering Corp. whose job it was to keep the antenna array and satellite relay facilities up and running 24/7.[205] According to the papers of the late General William Odom, the INDRA facility was upgraded in 1986 with a new British-made PUSHER CDAA antenna as part of an overall upgrade of NSA and Thai SIGINT facilities whose objective was to spy on the neighboring communist nations of Vietnam, Laos, and Cambodia.[205]

The base fell into disrepair in the 1990s as China and Vietnam became more friendly towards the US, and by 2002 archived satellite imagery showed that the PUSHER CDAA antenna had been torn down, perhaps indicating that the base had been closed. At some point in the period since 9/11, the Khon Kaen base was reactivated and expanded to include a sizeable SATCOM intercept mission. It is likely that the NSA presence at Khon Kaen is relatively small, and that most of the work is done by civilian contractors.[205]
NSA has been involved in debates about public policy, both indirectly as a behind-the-scenes adviser to other departments, and directly during and after Vice Admiral Bobby Ray Inman's directorship. NSA was a major player in the debates of the 1990s regarding the export of cryptography in the United States; restrictions on export were reduced but not eliminated in 1996. Its secure government communications work has involved the NSA in numerous technology areas, including the design of specialized communications hardware and software, production of dedicated semiconductors at the Ft. Meade chip fabrication plant, and advanced cryptography research. For 50 years, the NSA designed and built most of its in-house computer equipment, but from the 1990s until about 2003 (when the U.S. Congress curtailed the practice), the agency contracted with the private sector in the fields of research and equipment.[207]
NSA was embroiled in some controversy concerning its involvement in the creation of the Data Encryption Standard (DES), a standard and publicblock cipheralgorithmused by theU.S. governmentand banking community.[208]During the development of DES byIBMin the 1970s, NSA recommended changes to some details of the design. There was suspicion that these changes had weakened the algorithm sufficiently to enable the agency to eavesdrop if required, including speculation that a critical component—the so-calledS-boxes—had been altered to insert a "backdoor" and that the reduction in key length might have made it feasible for NSA to discover DES keys using massive computing power. It has since been observed that the S-boxes in DES are particularly resilient againstdifferential cryptanalysis, a technique that was not publicly discovered until the late 1980s but known to the IBM DES team.
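The key-length concern is easy to quantify. The sketch below works through the arithmetic of exhausting a 56-bit keyspace; the search rate used is a purely hypothetical assumption for illustration, not a documented capability of any agency:

```python
# Back-of-envelope estimate of a DES exhaustive key search.
# The search rate below is a hypothetical assumption, chosen only
# to illustrate why a 56-bit key is considered small.
KEYSPACE = 2 ** 56          # DES uses a 56-bit effective key
RATE = 10 ** 12             # assumed: 10^12 key trials per second

seconds_worst_case = KEYSPACE / RATE
print(f"keys to try: {KEYSPACE:.2e}")
print(f"worst case:  {seconds_worst_case / 86_400:.2f} days")
# On average only half the keyspace must be searched.
print(f"on average:  {seconds_worst_case / (2 * 86_400):.2f} days")
```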
The involvement of the NSA in selecting a successor to the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), was limited to hardware performance testing (seeAES competition).[209]NSA has subsequently certified AES for protection of classified information when used in NSA-approved systems.[210]
The NSA is responsible for the encryption-related components in a number of legacy cryptographic systems, and it oversees the encryption used in several systems that remain in use today.
The NSA has specifiedSuite AandSuite Bcryptographic algorithm suites to be used in U.S. government systems; the Suite B algorithms are a subset of those previously specified byNISTand are expected to serve for most information protection purposes, while the Suite A algorithms are secret and are intended for especially high levels of protection.[210]
The widely used SHA-1 and SHA-2 hash functions were designed by NSA. SHA-1 is a slight modification of the weaker SHA-0 algorithm, also designed by NSA in 1993. The small modification was suggested by NSA two years later, with no justification given other than that it provides additional security. An attack on SHA-0 that does not apply to the revised algorithm was indeed found between 1998 and 2005 by academic cryptographers. Because of weaknesses and key length restrictions in SHA-1, NIST deprecates its use for digital signatures and approves only the newer SHA-2 algorithms for such applications from 2013 on.[220]
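Both families are available in ordinary standard libraries; a minimal Python illustration of the two digests discussed above:

```python
import hashlib

message = b"attack at dawn"

# SHA-1 (160-bit digest): deprecated by NIST for digital signatures.
print("SHA-1  :", hashlib.sha1(message).hexdigest())

# SHA-256, a member of the SHA-2 family (256-bit digest): approved
# by NIST for digital signature applications.
print("SHA-256:", hashlib.sha256(message).hexdigest())
```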
A new hash standard, SHA-3, was selected through the competition concluded on October 2, 2012, with the selection of Keccak as the algorithm. The process of selecting SHA-3 was similar to the one held in choosing the AES, but some doubts have been cast over it,[221][222] since fundamental modifications were made to Keccak in order to turn it into a standard.[223] These changes potentially undermine the cryptanalysis performed during the competition and reduce the security levels of the algorithm.[221]
Because of concerns that widespread use of strong cryptography would hamper government use ofwiretaps, the NSA proposed the concept ofkey escrowin 1993 and introduced the Clipper chip that would offer stronger protection than DES but would allow access to encrypted data by authorized law enforcement officials.[224]The proposal was strongly opposed and key escrow requirements ultimately went nowhere.[225]However, NSA'sFortezzahardware-based encryption cards, created for the Clipper project, are still used within government, and NSA ultimately declassified and published the design of theSkipjack cipherused on the cards.[226][227]
NSA promoted the inclusion of a random number generator calledDual EC DRBGin the U.S.National Institute of Standards and Technology's 2007 guidelines. This led to speculation of abackdoorwhich would allow NSA access to data encrypted by systems using thatpseudorandom number generator(PRNG).[228]
This is now deemed plausible because it has been shown that the output of subsequent iterations of the PRNG can provably be determined if the relation between two internal elliptic-curve points is known.[229][230] Both NIST and RSA now officially recommend against the use of this PRNG.[231][232]
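A simplified sketch of the suspected mechanism (ignoring the truncation of output bits): the generator uses two fixed curve points $P$ and $Q$; with internal state $s_i$ and $x(\cdot)$ denoting the $x$-coordinate of a point, each iteration computes

$$s_{i+1} = x(s_i P), \qquad r_i = x(s_i Q),$$

where $r_i$ is the output. If the constants were chosen so that $P = dQ$ for a scalar $d$ known only to the designer, then recovering the point $R_i = s_i Q$ from the output $r_i$ gives $dR_i = s_i(dQ) = s_i P$, whose $x$-coordinate is the next internal state $s_{i+1}$, after which all subsequent output can be predicted.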
Perfect Citizen is a program to performvulnerability assessmentby the NSA in the Americancritical infrastructure.[233][234]It was originally reported to be a program to develop a system of sensors to detect cyber attacks on critical infrastructure computer networks in both the private and public sector through anetwork monitoringsystem namedEinstein.[235][236]It is funded by theComprehensive National Cybersecurity Initiativeand thus farRaytheonhas received a contract for up to $100 million for the initial stage.
The NSA has invested many millions of dollars in academic research under grant code prefix MDA904, resulting in over 3,000 papers as of October 11, 2007. The NSA publishes its documents through various publications.
Despite this, the NSA/CSS has, at times, attempted to restrict the publication of academic research into cryptography; for example, theKhufu and Khafreblock ciphers were voluntarily withheld in response to an NSA request to do so. In response to aFOIAlawsuit, in 2013 the NSA released the 643-page research paper titled, "Untangling the Web: A Guide to Internet Research",[242]written and compiled by NSA employees to assist other NSA workers in searching for information of interest to the agency on the public Internet.[243]
NSA can file for a patent from theU.S. Patent and Trademark Officeundergag order. Unlike normal patents, these are not revealed to the public and do not expire. However, if the Patent Office receives an application for an identical patent from a third party, they will reveal the NSA's patent and officially grant it to the NSA for the full term on that date.[244]
One of NSA's published patents describes a method ofgeographically locatingan individual computer site in an Internet-like network, based on thelatencyof multiple network connections.[245]Although no public patent exists, NSA is reported to have used a similar locating technology called trilateralization that allows real-time tracking of an individual's location, including altitude from ground level, using data obtained from cellphone towers.[246]
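The patent's broad idea, estimating position from propagation delays to multiple known vantage points, is a form of multilateration. The sketch below is a generic illustration of that technique: the coordinates, latencies, and effective propagation speed are invented for the example (latencies here were generated from a point near (40, 60)) and are not taken from the patent:

```python
import numpy as np

# Hypothetical vantage points (planar km coordinates) and one-way latencies (ms).
sites = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])
latency_ms = np.array([0.36, 0.42, 0.28, 0.25])

# Assumed effective propagation speed: ~200 km/ms (roughly 2/3 c, as in fiber).
dist = latency_ms * 200.0

# Linearize the range equations (x - xi)^2 + (y - yi)^2 = di^2 by
# subtracting the first equation from the others, then solve least-squares.
A = 2.0 * (sites[1:] - sites[0])
b = (dist[0] ** 2 - dist[1:] ** 2
     + np.sum(sites[1:] ** 2, axis=1) - np.sum(sites[0] ** 2))

x, y = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"estimated position: ({x:.1f}, {y:.1f}) km")  # near the generating point
```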
Theheraldicinsignia of NSA consists of aneagleinside a circle, grasping akeyin its talons.[247]The eagle represents the agency's national mission.[247]Its breast features a shield with bands of red and white, taken from theGreat Seal of the United Statesand representing Congress.[247]The key is taken from the emblem ofSaint Peterand represents security.[247]
When the NSA was created, the agency had no emblem and used that of the Department of Defense.[248]The agency adopted its first of two emblems in 1963.[248]The current NSA insignia has been in use since 1965, when then-Director, LTGMarshall S. Carter(USA) ordered the creation of a device to represent the agency.[249]The NSA's flag consists of the agency's seal on a light blue background.
Crews associated with NSA missions have been involved in several dangerous and deadly situations.[250] The USS Liberty incident in 1967 and the USS Pueblo incident in 1968 are examples of the losses endured during the Cold War.[250] The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions.[251] It is made of black granite and, as of 2013, has 171 names carved into it.[251] It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001.[251]
In the United States, at least since 2001,[252]there has been legal controversy over what signal intelligence can be used for and how much freedom the National Security Agency has to use signal intelligence.[253]In 2015, the government made slight changes in how it uses and collects certain types of data,[254]specifically phone records. The government was not analyzing the phone records as of early 2019.[255]The surveillance programs were deemed unlawful in September 2020 in a court of appeals case.[55]
On December 16, 2005,The New York Timesreported that underWhite Housepressure and with anexecutive orderfrom PresidentGeorge W. Bush, the National Security Agency, in an attempt to thwart terrorism, had been tapping phone calls made to persons outside the country, without obtainingwarrantsfrom theUnited States Foreign Intelligence Surveillance Court, a secret court created for that purpose under theForeign Intelligence Surveillance Act(FISA).[100]
Edward Snowden is a former American intelligence contractor who revealed in 2013 the existence of secret wide-ranging information-gathering programs conducted by the National Security Agency (NSA).[256]More specifically, Snowden released information that demonstrated how the United States government was gathering immense amounts of personal communications, emails, phone locations, web histories and more of American citizens without their knowledge.[257]One of Snowden's primary motivators for releasing this information was fear of a surveillance state developing as a result of the infrastructure being created by the NSA. As Snowden recounts, "I believe that, at this point in history, the greatest danger to our freedom and way of life comes from the reasonable fear of omniscient State powers kept in check by nothing more than policy documents... It is not that I do not value intelligence, but that I oppose . . . omniscient, automatic, mass surveillance. . . . That seems to me a greater threat to the institutions of free society than missed intelligence reports, and unworthy of the costs."[258]
In March 2014, Army GeneralMartin Dempsey,Chairman of the Joint Chiefs of Staff, told theHouse Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ... had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures."[259]When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but Snowden may have downloaded more than a million documents.[260]
On January 17, 2006, theCenter for Constitutional Rightsfiled a lawsuit,CCR v. Bush, against theGeorge W. Bushpresidency. The lawsuit challenged the National Security Agency's (NSA's) surveillance of people within the U.S., including the interception of CCR emails without securing a warrant first.[261][262]
In the August 2006 caseACLU v. NSA,U.S. District CourtJudgeAnna Diggs Taylorconcluded that NSA's warrantless surveillance program was both illegal and unconstitutional. On July 6, 2007, the6th Circuit Court of Appealsvacated the decision because the ACLU lacked standing to bring the suit.[263]
In September 2008, theElectronic Frontier Foundation(EFF) filed aclass actionlawsuit against the NSA and several high-ranking officials of theBush administration,[264]charging an "illegal and unconstitutional program of dragnet communications surveillance,"[265]based on documentation provided by formerAT&TtechnicianMark Klein.[266]
As a result of theUSA Freedom Actpassed byCongressin June 2015, the NSA had to shut down its bulk phone surveillance program on November 29 of the same year. The USA Freedom Act forbids the NSA to collect metadata and content of phone calls unless it has a warrant for terrorism investigation. In that case, the agency must ask thetelecom companiesfor the record, which will only be kept for six months. The NSA's use of large telecom companies to assist it with its surveillance efforts has caused several privacy concerns.[267]: 1568–69
In May 2008,Mark Klein, a formerAT&Temployee, alleged that his company had cooperated with NSA in installingNarushardware to replace the FBICarnivoreprogram, to monitor network communications including traffic between U.S. citizens.[268]
NSA was reported in 2008 to use its computing capability to analyze "transactional" data that it regularly acquires from other government agencies, which gather it under their jurisdictional authorities.[269]
A 2013 advisory group for the Obama administration, seeking to reform NSA spying programs following the revelations of documents released by Edward J. Snowden,[270]mentioned in 'Recommendation 30' on page 37, "...that the National Security Council staff should manage an interagency process to review regularly the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application." Retired cybersecurity expertRichard A. Clarkewas a group member and stated on April 11, 2014, that NSA had no advance knowledge ofHeartbleed.[271]
In August 2013 it was revealed that a 2005 IRS training document showed that NSA intelligence intercepts and wiretaps, both foreign and domestic, were being supplied to theDrug Enforcement Administration(DEA) andInternal Revenue Service(IRS) and were illegally used to launch criminal investigations of US citizens. Law enforcement agents were directed to conceal how the investigations began and recreate a legal investigative trail by re-obtaining the same evidence by other means.[272][273]
In the months leading to April 2009, the NSA intercepted the communications of U.S. citizens, including a congressman, although theJustice Departmentbelieved that the interception was unintentional. The Justice Department then took action to correct the issues and bring the program into compliance with existing laws.[274]United States Attorney GeneralEric Holderresumed the program according to his understanding of theForeign Intelligence Surveillance Actamendment of 2008, without explaining what had occurred.[275]
Polls conducted in June 2013 found divided results among Americans regarding NSA's secret data collection.[276]Rasmussen Reportsfound that 59% of Americans disapprove,[277]Gallupfound that 53% disapprove,[278]andPewfound that 56% are in favor of NSA data collection.[279]
On April 25, 2013, the NSA obtained a court order requiringVerizon's Business Network Services to providemetadataon all calls in its system to the NSA "on an ongoing daily basis" for three months, as reported byThe Guardianon June 6, 2013. This information includes "the numbers of both parties on a call ... location data, call duration, unique identifiers, and the time and duration of all calls" but not "[t]he contents of the conversation itself". The order relies on the so-called "business records" provision of the Patriot Act.[280][281]
In August 2013, following the Snowden leaks, new details about the NSA's data mining activity were revealed. Reportedly, the majority of emails into or out of the United States are captured at "selected communications links" and automatically analyzed for keywords or other "selectors"; emails that do not match are deleted.[282] The utility of such a massive metadata collection in preventing terrorist attacks is disputed, and many studies have found the dragnet-like system to be ineffective. One such report, released by the New America Foundation, concluded from an analysis of 225 terrorism cases that the NSA "had no discernible impact on preventing acts of terrorism."[283]
Defenders of the program said that while metadata alone cannot provide all the information necessary to prevent an attack, it assures the ability to "connect the dots"[284] between suspect foreign numbers and domestic numbers with a speed only the NSA's software is capable of. One benefit of this is quickly being able to determine the difference between suspicious activity and real threats.[285] As an example, NSA director General Keith B. Alexander mentioned at the annual Cybersecurity Summit in 2013 that metadata analysis of domestic phone call records after the Boston Marathon bombing helped determine that rumors of a follow-up attack in New York were baseless.[284] In addition to doubts about its effectiveness, many people argue that the collection of metadata is an unconstitutional invasion of privacy. As of 2015, the collection process remained legal and grounded in the ruling from Smith v. Maryland (1979). A prominent opponent of the data collection and its legality is U.S. District Judge Richard J. Leon, who issued a report in 2013[286] in which he stated: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval...Surely, such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment".
On May 7, 2015, the United States Court of Appeals for the Second Circuit ruled that the interpretation of Section 215 of the Patriot Act had been wrong and that the NSA program that had been collecting Americans' phone records in bulk was illegal.[287] It stated that Section 215 could not be interpreted to allow the government to collect national phone data; the provision consequently expired on June 1, 2015. This ruling "is the first time a higher-level court in the regular judicial system has reviewed the NSA phone records program."[288] Under the replacement law, the USA Freedom Act, the NSA continues to have bulk access to citizens' metadata, but with the stipulation that the data are now stored by the companies themselves.[288] This change does not affect other Agency procedures, outside of metadata collection, which have purportedly challenged Americans' Fourth Amendment rights,[289] including Upstream collection, a mass of techniques used by the Agency to collect and store Americans' data/communications directly from the Internet backbone.[290]
Under the Upstream collection program, the NSA paid telecommunications companies hundreds of millions of dollars in order to collect data from them.[291] While companies such as Google and Yahoo! claim that they do not provide "direct access" from their servers to the NSA unless under a court order,[292] the NSA had access to users' emails, phone calls, and cellular data.[293] Under this new ruling, telecommunications companies maintain bulk user metadata on their servers for at least 18 months, to be provided upon request to the NSA.[288] The ruling made the mass storage of specific phone records at NSA datacenters illegal, but it did not rule on Section 215's constitutionality.[288]
In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring.[294][295][296] Eleven percent of these monitored phone lines (roughly 1,960 of them) met the agency's legal standard for "reasonably articulable suspicion" (RAS).[294][297]
The NSA tracks the locations of hundreds of millions of cell phones per day, allowing it to map people's movements and relationships in detail.[298]The NSA has been reported to have access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube,AOL, Skype, Apple and Paltalk,[299]and collects hundreds of millions of contact lists from personal email andinstant messagingaccounts each year.[300]It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing, or otherwise infiltrating numerous technology companies to leave "backdoors" into their systems) so that the majority of encryption is inadvertently vulnerable to different forms of attack.[301][302]
Domestically, the NSA has been proven to collect and store metadata records of phone calls,[303]including over 120 million USVerizon subscribers,[304]as well as intercept vast amounts of communications via the internet (Upstream).[299]The government's legal standing had been to rely on a secret interpretation of thePatriot Actwhereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism.[305]The NSA also supplies foreign intercepts to theDEA,IRSand other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail viaparallel construction.[306]
The NSA also spies on influential Muslim societies to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA.[307]According to a report inThe Washington Postin July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim.[308]
The Intelligence Committees of the US House and Senate exercise primary oversight over the NSA; other members of Congress have been denied access to materials and information regarding the agency and its activities.[309] The United States Foreign Intelligence Surveillance Court, the secret court charged with regulating the NSA's activities, is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules.[310] It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions.[311] NSA officers have even used data intercepts to spy on love interests;[312] "most of the NSA violations were self-reported, and each instance resulted in administrative action of termination."[313][attribution needed]
The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies.[314]A March 2009 FISA Court opinion, which the court released, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively."[315][316]In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe".[314]Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.[300]
Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, and wrote, "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution,James Madison, who cautioned us to beware 'the abridgment of the freedom of the people by gradual and silent encroachments by those in power,' would be aghast."[317]
Later that month, U.S. District JudgeWilliam Pauleyruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote, "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data before 9/11 might have prevented the attack.[318]
At a March 2013Senate Intelligence Committeehearing, SenatorRon Wydenasked the Director of National IntelligenceJames Clapper, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" Clapper replied "No, sir. ... Not wittingly. There are cases where they could inadvertently perhaps collect, but not wittingly."[319]This statement came under scrutiny months later, in June 2013, when details of thePRISMsurveillance program were published, showing that "the NSA apparently can gain access to the servers of nine Internet companies for a wide range of digital data."[319]Wyden said that Clapper had failed to give a "straight answer" in his testimony. Clapper, in response to criticism, said, "I responded in what I thought was the most truthful, or least untruthful manner." Clapper added, "There are honest differences on the semantics of what—when someone says 'collection' to me, that has a specific meaning, which may have a different meaning to him."[319]
NSA whistle-blower Edward Snowden additionally revealed the existence of XKeyscore, a top-secret surveillance program that allows the NSA to search vast databases of "the metadata as well as the content of emails and other internet activity, such as browser history," with the capability to search by "name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used."[320] XKeyscore "provides the technological capability, if not the legal authority, to target even US persons for extensive electronic surveillance without a warrant provided that some identifying information, such as their email or IP address, is known to the analyst."[320]
Regarding the necessity of these NSA programs, Alexander stated on June 27, 2013, that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and in all but one of these cases had provided the initial tip to "unravel the threat stream".[321]On July 31 NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but were "close" to vital in identifying and convicting four San Diego men for sending US$8,930 toAl-Shabaab, a militia that conducts terrorism in Somalia.[322][323][324]The U.S. government has aggressively sought to dismiss and challengeFourth Amendmentcases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.[325][326]
The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country,[327][328] and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.[329] In October 2014, a United Nations report condemned mass surveillance programs carried out by the U.S. intelligence community and other nations as violating multiple global treaties and conventions that guarantee core privacy rights.[330]
An exploit dubbedEternalBlue, created by the NSA, was used in theWannaCry ransomware attackin May 2017.[331]The exploit had been leaked online by a hacking group, The Shadow Brokers, nearly a month before the attack. Several experts have pointed the finger at the NSA's non-disclosure of the underlying vulnerability, and their loss of control over the EternalBlue attack tool that exploited it. Edward Snowden said that if the NSA had "privately disclosedthe flaw used to attack hospitals when they found it, not when they lost it, [the attack] might not have happened".[332]Wikipedia co-founder,Jimmy Wales, stated that he joined "with Microsoft and the other leaders of the industry in saying this is a huge screw-up by the government ... the moment the NSA found it, they should have notified Microsoft so they could quietly issue apatchand really chivvy people along, long before it became a huge problem."[333]
Former employee David Evenden, who had left the NSA to work for US defense contractor Cyperpoint at a position in the United Arab Emirates, was tasked with hacking UAE neighbor Qatar in 2015 to determine if it was funding the terrorist group Muslim Brotherhood. He quit the company after learning his team had hacked Qatari Sheikha Moza bint Nasser's email exchanges with Michelle Obama, just before she visited Doha.[334] Upon Evenden's return to the US, he reported his experiences to the FBI. The incident highlights a growing trend of former NSA employees and contractors leaving the agency to start their own firms, which then hire out to countries like Turkey, Sudan, and even Russia, a country involved in numerous cyberattacks against the US.[334]
In May 2021, it was reported that the Danish Defence Intelligence Service had collaborated with the NSA to wiretap fellow EU members and leaders,[335][336] leading to wide backlash among EU countries and demands for explanations from the Danish and American governments.[337]
NSA directorPaul Nakasonedisclosed in a letter to RepresentativeRon Wydenthat the NSA buys data without a warrant.[338][339]
|
https://en.wikipedia.org/wiki/NSA
|
TheElectronic Key Management System(EKMS) is a United StatesNational Security Agencyled program responsible for Communications Security (COMSEC)key management, accounting, and distribution. Specifically, EKMS generates and distributes electronickeymaterial for allNSA encryption systemswhose keys are loaded using standard fill devices, and directs the distribution ofNSAproduced key material. Additionally, EKMS performs account registration, privilege management, ordering, distribution, and accounting to direct the management and distribution of physical COMSEC material for the services. The common EKMS components and standards facilitate interoperability and commonality among the armed services and civilian agencies.[1][2][3]
Key Management Infrastructure (KMI)replaces EKMS.[4]
The primary reason for the development of EKMS centers on the security and logistics problems that plagued the COMSEC Material Control System (CMCS),[5]which replaced the Registered Publications System (RPS) in the 1970s. The CMCS was a very labor-intensive operation that had been stretched to capacity. The most serious, immediate concern was the human threat associated with access to and exploitation of paper key throughout its life cycle. The disclosure of theWalker spy ringwas clear justification of this concern. Although eliminating the majority of paper keys will greatly reduce this human threat, the long-term goal of EKMS to minimize human access to keys will not be realized until benign fill key is fully implemented.Benign fillpermits the encrypted distribution of electronic keying material directly to the COMSEC device without human access to the key itself.
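The essence of benign fill is that key material travels only under encryption by a key-encryption key (KEK) already held by the target device, so no human ever handles plaintext key. A minimal sketch of that idea using the commodity AES key-wrap construction from the third-party Python cryptography package; the choice of algorithm and the variable names are illustrative assumptions, since EKMS's actual mechanisms are not public:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# Key-encryption key shared with the end cryptographic unit
# (illustrative; real systems would use device-unique KEKs).
kek = os.urandom(32)

# Operational key to be distributed ("filled") into the device.
traffic_key = os.urandom(32)

# The key travels only in wrapped (encrypted) form.
wrapped = aes_key_wrap(kek, traffic_key)

# Only the target device, holding the KEK, can unwrap it.
assert aes_key_unwrap(kek, wrapped) == traffic_key
```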
The need for joint interoperability led to the Defense Reorganization Act of 1986, under which the Joint Chiefs of Staff (JCS) tasked NSA, the Defense Information Systems Agency (DISA), and the Joint Tactical Command, Control and Communications Agency (JTC3A) to develop a Key Management Goal Architecture (KMGA). Subsequent difficulties in coordinating COMSEC distribution and support during joint military operations, e.g., Desert Storm, Urgent Fury, and Operation Just Cause, have further emphasized the need for a system capable of interoperability between the Services.
EKMS starts with the Central Facility (CF), run by NSA, which provides a broad range of capabilities to the Services and other government agencies. The CF, also referred to as Tier 0, is the foundation of EKMS. Traditional paper-based keys, and keys for Secure Telephone Unit – Third Generation (STU-III), STE, FNBDT, Iridium, Secure Data Network System (SDNS), and other electronic key are managed from an underground building in Finksburg, Maryland, which is capable of the following:
The CF talks to other EKMS elements through a variety of media, communication devices, and networks, either through direct distance dialing using STU-III (data mode) or dedicated link access using KG-84 devices. During the transition to full electronic key, the 3.5-inch floppy disk and 9-track magnetic tape are also supported. A common user interface, the TCP/IP-based message service, is the primary method of communication with the CF. The message service permits EKMS elements to store EKMS messages that include electronic key for later retrieval by another EKMS element.
Under CMCS, each service maintained a central office of record (COR) that performed basic key and COMSEC management functions, such as key ordering, distribution, inventory control, etc. Under EKMS, each service operates its own key management system using EKMS Tier 1 software that supports physical and electronic key distribution, traditional electronic key generation, management of material distribution, ordering, and other related accounting and COR functions. Common Tier 1 is based on the U.S. Navy's key distribution system (NKDS) software developed by the Naval Research Laboratory and further developed by SAIC in San Diego.
EKMS Tier 2, the Local Management Device (LMD), is composed of a commercial off-the-shelf (COTS) personal computer (PC) running the Santa Cruz Operation's SCO UNIX operating system, and an NSA KOK-22A Key Processor (KP). The KP is a trusted component of EKMS. It performs cryptographic functions, including encryption and decryption functions for the account, as well as key generation and electronic signature operations. The KP is capable of secure field generation of traditional keys. Locally generated keys can be employed in crypto-net communications, transmission security (TRANSEC) applications, point-to-point circuits, and virtually anywhere that paper-based keys were used. Electronic keys can be downloaded directly to a fill device, such as the KYK-13, KYX-15, or the more modern AN/CYZ-10 Data Transfer Device (DTD), for further transfer (or fill) into the end cryptographic unit.
Tier 3, the lowest tier or layer of the EKMS architecture, includes the AN/CYZ-10 (Data Transfer Device (DTD)), the SKL (Simple Key Loader) AN/PYQ-10, and all other means used to fill keys to End Cryptographic Units (ECUs); hard-copy material holdings only; and STU-III/STE material only, using Key Management Entities (KMEs) (i.e., Local Elements (LEs)). Unlike LMD/KP Tier 2 accounts, Tier 3 entities never receive electronic key directly from a COR or Tier 0.
|
https://en.wikipedia.org/wiki/Electronic_Key_Management_System
|
Over-the-air rekeying (OTAR) refers to transmitting or updating encryption keys (rekeying) in secure information systems by conveying the keys via encrypted electronic communication channels ("over the air").[1] It is also referred to as over-the-air transfer (OTAT) or over-the-air distribution (OTAD),[2] depending on the specific type, use, and transmission means of the key being changed. Although the acronym refers specifically to radio transmission, the technology is also employed via wire, cable, or optical fiber.
As a "paperless encryption key system" OTAR was originally adopted specifically in support of high speed data communications because previously known "paperless key" systems such as supported by Diffie-Hellman key exchange,[3]or Firefly key exchange technology[4](as used in the now obsolete STU-III "scrambled" telephone)[5]were not capable of handling the high speed transmission volumes required by normal governmental/military communications traffic.[6]Now also adopted for civilian and commercial secure voice use, especially by emergency first responders, OTAR has become not only a security technology, but a preferred basis of communications security doctrine world-wide. The term "OTAR" is now basic to the lexicon of communications security.
OTAR technology, created by NSA inventor, innovator, and author Mahlon Doyle,[7] was operationally introduced to the US Department of Defense in 1988. Lieutenant Commander David Winters, an American naval officer in London and code master during the final years of the Cold War,[8] was the first to recognize the necessity and security potential of OTAR. In order to exploit the advantages of this technology, he conceived and initiated its first large-scale practical application and deployment.[9]
Due to the efficiency and vast cost savings inherent to OTAR, Commander Winters' methods were quickly adopted and spread Navy-wide, following which Vice Admiral J.O. Tuttle, Commander of the Navy Telecommunications Command,[10] the Navy "J6", influenced the Joint Chiefs of Staff to bring all the other military services into compliance.[11] In due course, OTAR became the NATO standard.
This coincided with the introduction of newer NSA cryptographic systems that use a 128-bit electronic key, such as the ANDVT, KY-58, KG-84A/C, and KY-75, capable of obtaining new or updated keys via the circuit they protect or other secure communications circuits. Adoption of OTAR reduces requirements both for the distribution of physical keying material and for the physical process of loading cryptographic devices with key tapes.
Accordingly, OTAR eliminates the need for individual stations to be involved with physical key changeovers. Instead, electronically transmitted keys would normally come from a network control station (NCS). The OTAT feature permits a key to be extracted from an OTAT-capable cryptographic system using a fill device, such as the KYK-13 or KYX-15/KYX-15A, and then loaded ("squirted") into another cryptographic system as needed. Alternatively, encryption systems may also be configured to automatically receive and update code keys with virtually no manual intervention, as is the case for GPS (Global Positioning System) navigation satellite signals.
Now that OTAR applications have been adapted for civilian emergency service providers and other users requiring enhanced communications security, extensive parallel technology conversion and development have produced commercially viable systems that include end-to-end key generation, distribution, management, and control.[12][13][14][15][16][17][18] Network controllers can remotely, dependably, and securely change encryption keys for an entire network at their discretion. This simplifies and streamlines operations while virtually eliminating risk of compromise. In practical terms, this means users need not bring or return their units for manual updates, nor must technicians visit each user, station, or node to service their units in the field. Further, in the unlikely event that a unit, station, or node is stolen, mimicked, or otherwise compromised, a network controller may:
Telecommunications protected by encryption require proprietary or classified keys to lock and unlock them. The security of such telecommunications is no greater than the security of their keys. Therefore, key protection is paramount. So long as use of encryption remains reasonably limited, key security is realistically manageable. However, in the mid-twentieth century, military and diplomatic telecommunications loads grew by orders of magnitude. Encryption systems became automated and key quantities ballooned.
These encryption keys usually comprised printed sheets, punched paper strips or cards, or electromagnetic tapes. The security of their production, transport, storage, distribution, accounting, employment, and finally destruction required thousands of trusted agents, worldwide. The vulnerability of so many physical keys to theft or loss became a statistical reality that was exploited for two decades by the infamous "Johnny Walker" spy ring. Elimination of this vulnerability through adoption of over-the-air rekeying (OTAR), although little appreciated at the time, was an innovation of inestimable impact. Placing this technology in perspective, OTAR comprised a transformation at the most basic foundations of communications security, such that through the decades since the introduction of OTAR, not a single new breach of US code systems has occurred. The introduction of OTAR technology into practical application precipitated NSA's creation of the Electronic Key Management System (EKMS), which permanently altered the power balance in communications security and espionage.
With the recent declassification of the details relating to its introduction, OTAR may be expected to become the subject of more scholarly work.[19]
Vulnerabilities due to accidental, unencrypted "in the clear" transmissions have been demonstrated with systems incorporating OTAR, as implemented in the Project 25 Digital Mobile Radio Communications Standards.
|
https://en.wikipedia.org/wiki/Over-the-air_rekeying
|
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish (with significant advantage) between a function chosen randomly from the PRF family and a random oracle (a function whose outputs are fixed completely at random). Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes.
Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a single output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that all its outputs appear random, regardless of how the corresponding inputs were chosen, as long as the function was drawn at random from the PRF family.
A pseudorandom function family can be constructed from any pseudorandom generator, using, for example, the "GGM" construction given by Goldreich, Goldwasser, and Micali.[1] While in practice, block ciphers are used in most instances where a pseudorandom function is needed, they do not, in general, constitute a pseudorandom function family, as block ciphers such as AES are defined for only limited numbers of input and key sizes.[2]
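As an illustration of the tree-walking idea behind the GGM construction, here is a minimal sketch in Python, assuming SHA-256 may stand in for a length-doubling PRG (the actual construction requires a provably secure PRG, so this is heuristic, and all names are illustrative):

import hashlib

def prg(seed):
    """A stand-in length-doubling PRG: 32-byte seed -> two 32-byte halves."""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_prf(key, x):
    """Evaluate F_key(x) for a bit string x by walking the GGM tree:
    descend into the left PRG half on a 0 bit and the right half on a 1 bit."""
    seed = key
    for bit in x:
        left, right = prg(seed)
        seed = left if bit == "0" else right
    return seed

key = hashlib.sha256(b"example seed").digest()
print(ggm_prf(key, "0110").hex())   # deterministic for a fixed key and input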
A PRF is an efficient (i.e., computable in polynomial time), deterministic function that maps a domain to a range and looks like a truly random function.
Essentially, a truly random function would just be composed of a lookup table filled with uniformly distributed random entries. However, in practice, a PRF is given an input string from the domain and a hidden random seed, and it runs multiple times with the same input string and seed, always returning the same value. Nonetheless, given an arbitrary input string, the output looks random if the seed is taken from a uniform distribution.
A PRF is considered to be good if its behavior is indistinguishable from a truly random function. Therefore, given an output from either the truly random function or a PRF, there should be no efficient method to correctly determine whether the output was produced by the truly random function or the PRF.
Pseudorandom functions take inputs x ∈ {0, 1}*, where * is the Kleene star. Both the input size I = |x| and output size λ depend only on the index size n := |s|.
A family of functions,
f_s : {0, 1}^I(n) → {0, 1}^λ(n),
is pseudorandom if the following conditions are satisfied: first, given the seed s and an input x, the value f_s(x) can be computed in polynomial time; and second, no probabilistic polynomial-time algorithm with oracle access can distinguish f_s (for a uniformly chosen seed s) from a truly random function, except with negligible advantage.
In an oblivious pseudorandom function, abbreviated OPRF, information is concealed from two parties that are involved in a PRF.[4] That is, if Alice cryptographically hashes her secret value, cryptographically blinds the hash to produce the message she sends to Bob, and Bob mixes in his secret value and gives the result back to Alice, who unblinds it to get the final output, Bob is not able to see either Alice's secret value or the final output, and Alice is not able to see Bob's secret input, but Alice sees the final output, which is a PRF of the two inputs: Alice's secret and Bob's secret.[5] This enables transactions of sensitive cryptographic information to be secure even between untrusted parties.
An OPRF is used in some implementations of password-authenticated key agreement.[5]
An OPRF is used in the Password Monitor functionality in Microsoft Edge.[6]
PRFs can be used for:[7]
|
https://en.wikipedia.org/wiki/Pseudorandom_function_family
|
An oblivious pseudorandom function (OPRF) is a cryptographic function, similar to a keyed-hash function, but with the distinction that in an OPRF two parties cooperate to securely compute a pseudorandom function (PRF).[1]
Specifically, an OPRF is a pseudorandom function with the following properties: the two parties compute the output jointly, with the first party supplying the input and the second party holding the secret key; the first party learns the output but nothing about the key; and the second party learns neither the input nor the output.
The function is called an oblivious pseudorandom function, because the second party is oblivious to the function's output. This party learns no new information from participating in the calculation of the result.
However, because it is only the second party that holds the secret, the first party must involve the second party to calculate the output of the pseudorandom function (PRF). This requirement enables the second party to implement access controls, throttling, audit logging, and other security measures.
While conventional pseudorandom functions computed by a single party were first formalized in 1986,[2] it was not until 1997 that the first two-party oblivious pseudorandom function was described in the literature;[3] the term "oblivious pseudorandom function" was not coined until 2005, by some of the same authors.[4]
OPRFs have many useful applications in cryptography and information security.
These include password-based key derivation, password-based key agreement, password hardening, untraceable CAPTCHAs, password management, homomorphic key management, and private set intersection.[1][5]
An OPRF can be viewed as a special case of homomorphic encryption, as it enables another party to compute a function over an encrypted input and produce a result (which remains encrypted), and therefore it learns nothing about what it computed.
Most forms of password-based key derivation suffer from the fact that passwords usually contain a small amount of randomness (or entropy) compared to full-length 128- or 256-bit encryption keys. This makes keys derived from passwords vulnerable to brute-force attacks.
However, this threat can be mitigated by using the output of an OPRF that takes the password as input.
If the secret key used in the OPRF is high-entropy, then the output of the OPRF will also be high-entropy. This thereby solves the problem of the password being low-entropy, and therefore vulnerable to cracking via brute force.
This technique is called password hardening.[6] It fills a similar purpose as key stretching, but password hardening adds significantly more entropy.
Further, since each attempt at guessing a password that is hardened in this way requires interaction with a server, it prevents an offline attack, and thus enables the user or system administrator to be alerted to any password-cracking attempt.
The recovered key may then be used for authentication (e.g., performing a PKI-based authentication using a digital certificate and private key), or may be used to decrypt sensitive content, such as an encrypted file or crypto wallet.
A password can be used as the basis of a key agreement protocol, to establish temporary session keys and mutually authenticate the client and server. This is known as a password-authenticated key exchange, or PAKE.
In basic authentication, the server learns the user's password during the course of the authentication. If the server is compromised, this exposes the user's password, which compromises the security of the user.
With PAKE, however, the user's password is not sent to the server, preventing it from falling into an eavesdropper's hands. It can be seen as authentication via a zero-knowledge password proof.
Various 'augmented forms' of PAKE incorporate an oblivious pseudorandom function so that the server never sees the user's password during the authentication, but nevertheless is able to verify that the client is in possession of the correct password. This is done by assuming that only the client that knows the correct password can use the OPRF to derive the correct key.
An example of an augmented PAKE that uses an OPRF in this way is OPAQUE.[7][8][9][10]
Recently, OPRFs have been applied to password-based key exchange to back up encrypted chat histories in WhatsApp[11] and Facebook Messenger.[12] A similar use case is planned to be added in Signal Messenger.[13]
A CAPTCHA, or "Completely Automated Public Turing test to tell Computers and Humans Apart",[14] is a mechanism to prevent automated robots (bots) from accessing websites. Lately, mechanisms for running CAPTCHA tests have been centralized into services such as Google and CloudFlare, but this can come at the expense of user privacy.
Recently, CloudFlare developed a privacy-preserving technology called "Privacy Pass".[15] This technology is based on OPRFs, and enables the client's browser to obtain passes from CloudFlare and then present them to bypass CAPTCHA tests. Because the CloudFlare service is oblivious to which passes were provided to which users, there is no way it can correlate users with the websites they visit. This prevents tracking of the user, and thereby preserves the user's privacy.
A password manager is software or a service that holds potentially many different account credentials on behalf of the user. Access to the password manager is thus highly sensitive: an attack could expose many credentials to the attacker.
The first proposal for a password manager based on OPRFs was SPHINX.[16] It uses two devices (such as the user's laptop and phone) which collaborate to compute a password for a given account (as identified by the username and website's domain name). Because the user's two devices exchange values according to an OPRF protocol, intercepting the connection between them does not reveal anything about the password or the internal values each device used to compute it. Requiring two devices to compute any password also ensures that a compromise of either device does not allow the attacker to compute any of the passwords. A downside of this approach is that the user always needs access to both devices whenever they want to log in to any of their accounts.
An OPRF is used by the Password Monitor in Microsoft Edge to allow querying a server for whether a credential (which the user saved in the browser) is known to be compromised, without needing to reveal this credential to the server.[17]
Similarly to securing passwords managed by a password manager, an OPRF can be used to enhance the security of a key-management system.
For example, an OPRF enables a key-management system to issue cryptographic keys to authenticated and authorized users, without ever seeing, learning, or being in a position to learn, any of the keys it provides to users.[18]
Private set intersection is a cryptographic technique that enables two or more parties to compare their private sets to determine which entries they share in common, but without disclosing any entries which they do not hold in common.
For example, private set intersection could be used by two users of a social network to determine which friends they have in common, without revealing the identities of friends they do not have in common. To do this, they could share the outputs of an OPRF applied to each friend's identity (e.g., the friend's phone number or e-mail address).
The output of the OPRF cannot be inverted to determine the identity of the user, and since the OPRF may be rate-limited, it will prevent a brute-force attack (e.g., iterating over all possible phone numbers).[19]
There are various mathematical functions that can serve as the basis to implement an OPRF.
For example, methods from asymmetric cryptography, including elliptic curve point multiplication, Diffie–Hellman modular exponentiation over a prime, or an RSA signature calculation.
Elliptic curves and prime-order fields can be used to implement an OPRF. The essential idea is that the first party (the client) must cryptographically blind the input prior to sending it to the second party.
This blinding can be viewed as a form of encryption that survives the computation performed by the second party. Therefore, the first party can decrypt what it receives from the second party to "unblind" it, and thereby receive the same result it would have received had the input not been blinded.
When the second party receives the blinded input, it performs a computation on it using a secret. The result of this computation must not reveal the secret.
For example, the second party may perform a point multiplication of a point on an elliptic curve. Or it may perform a modular exponentiation modulo a large prime.
The first party, upon receipt of the result, and with knowledge of the blinding factor, computes a function that removes the blinding factor's influence on the result returned by the second party. This 'unblinds' the result, revealing the output of the OPRF (or an intermediate result which is then used by the client to compute the output of the OPRF, for example, by hashing this intermediate result).
The following describes the calculations performed by the client and server in an elliptic-curve–based OPRF.
The calculations performed by the client, or the first party, are as follows.
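A minimal runnable sketch of the client's side is given below. It works in the multiplicative group modulo a toy safe prime rather than on an elliptic curve (modular exponentiation is noted above as an equivalent option), and every name and parameter in it is illustrative rather than taken from a real library.

import hashlib
import secrets

P = 1019          # toy safe prime, P = 2*Q + 1; real systems use >= 2048-bit groups or curves
Q = (P - 1) // 2  # prime order of the subgroup of quadratic residues mod P

def hash_to_group(data):
    """Hash the input into the order-Q subgroup by reducing mod P and squaring.
    (A real hash-to-group must also handle degenerate outputs; omitted here.)"""
    return pow(int.from_bytes(hashlib.sha256(data).digest(), "big") % P, 2, P)

def blind(data):
    """Blind the hashed input with a random exponent r; r is kept to unblind later."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(hash_to_group(data), r, P)

def unblind_and_finalize(data, r, response):
    """Remove the blinding (raise to r^-1 mod Q) and hash to produce the OPRF output."""
    unblinded = pow(response, pow(r, -1, Q), P)
    return hashlib.sha256(data + unblinded.to_bytes(2, "big")).hexdigest()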
Notes:
The client computes the multiplicative inverse of the blinding factor. This enables it to reverse the effect of the blinding factor on the result, and obtain the result the server would have returned had the client not blinded the input.
As a final step, to complete the OPRF, the client performs a one-way hash on the result to ensure the OPRF output is uniform, completely pseudorandom, and non-invertible.
The calculations performed by the server, or the second party, are as follows.
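A matching sketch of the server's side, in the same toy group as the client sketch above (names are illustrative): running blind, evaluate, and unblind_and_finalize in sequence yields the same output as hashing pow(hash_to_group(x), SECRET_KEY, P) directly, which is exactly the PRF the protocol computes obliviously.

import secrets

P = 1019                                   # same toy safe prime as in the client sketch
Q = (P - 1) // 2
SECRET_KEY = secrets.randbelow(Q - 1) + 1  # the server's long-term secret exponent

def evaluate(blinded_input):
    """Raise the blinded group element to the secret exponent; the blinding hides
    the client's input, and the exponentiation reveals nothing about the key."""
    return pow(blinded_input, SECRET_KEY, P)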
The server receives the blinded input value from the client, and may perform authentication, access control, request throttling, or other security measures before processing the request. It then uses its own secret to compute the blinded output (in the sketch above, pow(blinded_input, SECRET_KEY, P)).
It then returns the response, which is the blinded output, to the client.
Notes:
Because the elliptic curve point multiplication is computationally difficult to invert (like the discrete logarithm problem), the client cannot feasibly learn the server's secret from the response it produces.
Note, however, that this function is vulnerable to attacks by quantum computers. A client or third party in possession of a quantum computer could solve for the server's secret knowing the result it produced for a given input.
When the output of a blind signature scheme is deterministic, it can be used as the basis of building an OPRF, e.g., simply by hashing the resulting signature.
This is because, due to the blinding, the party computing the blind signature learns neither the input (what is being signed) nor the output (the resulting digital signature).
The OPRF construction can be extended in various ways. These include: verifiable, partially oblivious, threshold-secure, and post-quantum–secure versions.
Many applications require the ability of the first party to verify that the OPRF output was computed correctly: for example, when the output is used as a key to encrypt data, since if the wrong value is computed, the encrypted data may be lost forever.
Fortunately, most OPRFs support verifiability. For example, when using RSA blind signatures as the underlying construction, the client can, with the public key, verify the correctness of the resulting digital signature.
When using OPRFs based on elliptic curves or Diffie–Hellman, knowing the public key y = g^x, it is possible to use a second request to the OPRF server to create a zero-knowledge proof of correctness for the previous result.[20][21]
One modification to an OPRF is called a partially oblivious PRF, or P-OPRF.
Specifically, a P-OPRF is a pseudorandom function over two inputs, one hidden and one exposed, computed so that the party holding the secret learns the exposed input but neither the hidden input nor the output, while the requesting party learns the output but not the secret.
The use case for this is when the server needs to implement specific throttling or access controls on the exposed input (E); for example, (E) could be a file path or user name for which the server enforces access controls, and only services requests when the requesting user is authorized.
A P-OPRF based on bilinear pairings was used by the "Pythia PRF Service".[22]
Recently, versions of P-OPRFs not based on pairings have appeared, such as the version standardized in IETF RFC 9497,[21] as well as in its more recent improvement.[23]
For even greater security, it is possible to "thresholdize" the server, such that the secret (S) is not held by any individual server, and so the compromise of any single server, or of a set of servers numbering below some defined threshold, will not expose the secret.
This can be done by having each server be a shareholder in a secret-sharing scheme. Instead of using its secret to compute the result, each server uses its share of the secret to perform the computation.
The client then takes some subset of the servers' computed results and combines them, for example by a technique known as interpolation in the exponent, sketched below. This recovers the same result as if the client had interacted with a single server which has the full secret.
This algorithm is used in various distributed cryptographic protocols.[24]
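A minimal sketch of interpolation in the exponent, reusing the toy prime-order subgroup from the earlier sketches: the secret exponent k is Shamir-shared as k_i = f(i) for a random polynomial f with f(0) = k, and the client raises each server's response to its Lagrange coefficient at zero (computed mod Q) before multiplying. All names and parameters are illustrative.

import secrets

P = 1019          # toy safe prime, P = 2*Q + 1
Q = (P - 1) // 2  # prime order of the subgroup

def share_secret(k, t, n):
    """Shamir-share k among n servers; any t+1 shares (i, f(i)) determine f(0) = k."""
    coeffs = [k] + [secrets.randbelow(Q) for _ in range(t)]
    def f_at(x):
        return sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q
    return [(i, f_at(i)) for i in range(1, n + 1)]

def combine(responses):
    """Recover blinded^k from per-server values blinded^(k_i): raise each response
    to its Lagrange coefficient at zero (mod Q) and multiply the results (mod P)."""
    result = 1
    for i, part in responses:
        lam = 1
        for j, _ in responses:
            if j != i:
                lam = lam * j % Q * pow(j - i, -1, Q) % Q
        result = result * pow(part, lam, P) % P
    return result

k = secrets.randbelow(Q - 1) + 1
shares = share_secret(k, t=1, n=3)
blinded = pow(4, secrets.randbelow(Q - 1) + 1, P)         # some blinded group element
responses = [(i, pow(blinded, s_i, P)) for i, s_i in shares[:2]]
assert combine(responses) == pow(blinded, k, P)           # matches the single-server result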
Finding efficientpost-quantum–secure implementations of OPRFs is an area of active research.[25]
"With the exception of OPRFs based on symmetric primitives, all known efficient OPRF
constructions rely on discrete-log- or factoring-type hardness assumptions. These assumptions are known to fall with the rise of quantum computers."[1]
Two possible exceptions arelattice-basedOPRFs[26]andisogeny-basedOPRFs,[27]but more research is required to improve their efficiency and establish their security. Recent attacks on isogenies raise doubts on the security of the algorithm.[28]
A more secure, but less efficient, approach to realize a post-quantum–secure OPRF is to use a secure two-party computation protocol to compute a PRF using a symmetric-key construction, such as AES or HMAC.
|
https://en.wikipedia.org/wiki/Oblivious_Pseudorandom_Function
|
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,
Let p be an odd prime and a be an integer coprime to p. Then[1][2][3]
a^((p−1)/2) ≡ 1 (mod p) if there is an integer x such that a ≡ x^2 (mod p), and a^((p−1)/2) ≡ −1 (mod p) if there is no such integer.
Euler's criterion can be concisely reformulated using the Legendre symbol:[4]
(a/p) ≡ a^((p−1)/2) (mod p).
The criterion dates from a 1748 paper by Leonhard Euler.[5][6]
The proof uses the fact that the residue classes modulo a prime number are a field. See the article prime field for more details.
Because the modulus is prime, Lagrange's theorem applies: a polynomial of degree k can only have at most k roots. In particular, x^2 ≡ a (mod p) has at most 2 solutions for each a. This immediately implies that besides 0 there are at least (p − 1)/2 distinct quadratic residues modulo p: each of the p − 1 possible values of x can only be accompanied by one other to give the same residue.
In fact, (p − x)^2 ≡ x^2 (mod p). This is because (p − x)^2 ≡ p^2 − 2xp + x^2 ≡ x^2 (mod p). So, the (p − 1)/2 distinct quadratic residues are 1^2, 2^2, ..., ((p − 1)/2)^2 (mod p).
As a is coprime to p, Fermat's little theorem says that
a^(p−1) ≡ 1 (mod p),
which can be written as
(a^((p−1)/2) − 1)(a^((p−1)/2) + 1) ≡ 0 (mod p).
Since the integers mod p form a field, for each a, one or the other of these factors must be zero. Therefore,
a^((p−1)/2) ≡ 1 (mod p) or a^((p−1)/2) ≡ −1 (mod p).
Now if a is a quadratic residue, a ≡ x^2 (mod p), then
a^((p−1)/2) ≡ (x^2)^((p−1)/2) = x^(p−1) ≡ 1 (mod p).
So every quadratic residue (mod p) makes the first factor zero.
Applying Lagrange's theorem again, we note that there can be no more than (p − 1)/2 values of a that make the first factor zero. But as we noted at the beginning, there are at least (p − 1)/2 distinct quadratic residues (mod p) (besides 0). Therefore, they are precisely the residue classes that make the first factor zero. The other (p − 1)/2 residue classes, the nonresidues, must make the second factor zero, or they would not satisfy Fermat's little theorem. This is Euler's criterion.
This proof only uses the fact that any congruence kx ≡ l (mod p) has a unique solution x (modulo p) provided p does not divide k. (This is true because as x runs through all nonzero remainders modulo p without repetitions, so does kx: if we have kx_1 ≡ kx_2 (mod p), then p ∣ k(x_1 − x_2), hence p ∣ (x_1 − x_2), but x_1 and x_2 aren't congruent modulo p.) It follows from this fact that all nonzero remainders modulo p whose square isn't congruent to a can be grouped into unordered pairs (x, y) according to the rule that the product of the members of each pair is congruent to a modulo p (since by this fact for every y we can find such an x, uniquely, and vice versa, and they will differ from each other if y^2 is not congruent to a). If a is not a quadratic residue, this is simply a regrouping of all p − 1 nonzero residues into (p − 1)/2 pairs, hence we conclude that 1 · 2 · ... · (p − 1) ≡ a^((p−1)/2) (mod p). If a is a quadratic residue, exactly two remainders were not among those paired, r and −r, such that r^2 ≡ a (mod p). If we pair those two absent remainders together, their product will be −a rather than a, whence in this case 1 · 2 · ... · (p − 1) ≡ −a^((p−1)/2) (mod p). In summary, considering these two cases we have demonstrated that for a ≢ 0 (mod p) we have 1 · 2 · ... · (p − 1) ≡ −(a/p) a^((p−1)/2) (mod p). It remains to substitute a = 1 (which is obviously a square) into this formula to obtain at once Wilson's theorem, Euler's criterion, and (by squaring both sides of Euler's criterion) Fermat's little theorem.
Example 1: Finding primes for which a is a residue
Let a = 17. For which primes p is 17 a quadratic residue?
We can test primes p manually using the formula above.
In one case, testing p = 3, we have 17^((3 − 1)/2) = 17^1 ≡ 2 ≡ −1 (mod 3), therefore 17 is not a quadratic residue modulo 3.
In another case, testing p = 13, we have 17^((13 − 1)/2) = 17^6 ≡ 1 (mod 13), therefore 17 is a quadratic residue modulo 13. As confirmation, note that 17 ≡ 4 (mod 13), and 2^2 = 4.
We can do these calculations faster by using various modular arithmetic and Legendre symbol properties.
If we keep calculating the values, we find that 17 is a quadratic residue modulo 13, 19, 43, 47, 59, 67, 83, 89, ... and a nonresidue modulo 3, 5, 7, 11, 23, 29, 31, 37, 41, ...
Example 2: Finding residues given a prime modulus p
Which numbers are squares modulo 17 (quadratic residues modulo 17)?
We can manually calculate the squares of 1 through 8: 1^2 = 1, 2^2 = 4, 3^2 = 9, 4^2 = 16, 5^2 = 25 ≡ 8, 6^2 = 36 ≡ 2, 7^2 = 49 ≡ 15, 8^2 = 64 ≡ 13 (mod 17).
So the set of the quadratic residues modulo 17 is {1, 2, 4, 8, 9, 13, 15, 16}. Note that we did not need to calculate squares for the values 9 through 16, as they are all negatives of the previously squared values (e.g., 9 ≡ −8 (mod 17), so 9^2 ≡ (−8)^2 = 64 ≡ 13 (mod 17)).
We can find quadratic residues or verify them using the above formula. To test if 2 is a quadratic residue modulo 17, we calculate 2^((17 − 1)/2) = 2^8 ≡ 1 (mod 17), so it is a quadratic residue. To test if 3 is a quadratic residue modulo 17, we calculate 3^((17 − 1)/2) = 3^8 ≡ 16 ≡ −1 (mod 17), so it is not a quadratic residue.
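These checks can be reproduced mechanically with Python's three-argument pow, which performs fast modular exponentiation (the function name here is illustrative):

def is_quadratic_residue(a, p):
    """Euler's criterion: a is a quadratic residue mod the odd prime p
    exactly when a^((p-1)/2) ≡ 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1

print(is_quadratic_residue(2, 17))   # True:  2^8 = 256 ≡ 1 (mod 17)
print(is_quadratic_residue(3, 17))   # False: 3^8 ≡ 16 ≡ −1 (mod 17)
print(sorted(a for a in range(1, 17) if is_quadratic_residue(a, 17)))
# [1, 2, 4, 8, 9, 13, 15, 16], matching the set computed above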
Euler's criterion is related to the law of quadratic reciprocity.
In practice, it is more efficient to use an extended variant of Euclid's algorithm to calculate the Jacobi symbol (a/n). If n is an odd prime, this is equal to the Legendre symbol, and decides whether a is a quadratic residue modulo n.
On the other hand, since the equivalence of a^((n−1)/2) to the Jacobi symbol holds for all odd primes, but not necessarily for composite numbers, calculating both and comparing them can be used as a primality test, specifically the Solovay–Strassen primality test. Composite numbers for which the congruence holds for a given a are called Euler–Jacobi pseudoprimes to base a.
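A sketch of that test, pairing a standard Euclid-style Jacobi-symbol computation with the comparison above (function names are illustrative; the test is probabilistic):

import secrets

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:        # factor out 2s: (2/n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a              # reciprocity: sign flips iff both ≡ 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Probable-prime test: n passes a round if a^((n-1)/2) ≡ (a/n) ≠ 0 (mod n)."""
    if n < 3 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = secrets.randbelow(n - 2) + 2
        j = jacobi(a, n)
        if j == 0 or j % n != pow(a, (n - 1) // 2, n):
            return False
    return True

print(solovay_strassen(561))    # almost surely False: 561 = 3 * 11 * 17 is a Carmichael number
print(solovay_strassen(1019))   # True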
|
https://en.wikipedia.org/wiki/Euler%27s_criterion
|
In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial (n − 1)! = 1 × 2 × 3 × ⋯ × (n − 1) satisfies
(n − 1)! ≡ −1 (mod n)
exactly when n is a prime number. In other words, any integer n > 1 is a prime number if, and only if, (n − 1)! + 1 is divisible by n.[1]
The theorem was first stated by Ibn al-Haytham c. 1000 AD.[2] Edward Waring announced the theorem in 1770 without proving it, crediting his student John Wilson for the discovery.[3] Lagrange gave the first proof in 1771.[4] There is evidence that Leibniz was also aware of the result a century earlier, but never published it.[5]
For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.)
The background color is blue for prime values of n, gold for composite values.
As a biconditional (if and only if) statement, the proof has two halves: to show that equality does not hold when n is composite, and to show that it does hold when n is prime.
Suppose that n is composite. Therefore, it is divisible by some prime number q where 2 ≤ q < n. Because q divides n, there is an integer k such that n = qk. Suppose for the sake of contradiction that (n − 1)! were congruent to −1 modulo n. Then (n − 1)! would also be congruent to −1 modulo q: indeed, if (n − 1)! ≡ −1 (mod n) then (n − 1)! = nm − 1 = (qk)m − 1 = q(km) − 1 for some integer m, and consequently (n − 1)! is one less than a multiple of q. On the other hand, since 2 ≤ q ≤ n − 1, one of the factors in the expanded product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 is q. Therefore (n − 1)! ≡ 0 (mod q). This is a contradiction; therefore it is not possible that (n − 1)! ≡ −1 (mod n) when n is composite.
In fact, more is true. With the sole exception of the case n = 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 modulo n. The proof can be divided into two cases: First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b < n, then both a and b will appear as factors in the product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 and so (n − 1)! is divisible by ab = n. If n has no such factorization, then it must be the square of some prime q larger than 2. But then 2q < q^2 = n, so both q and 2q will be factors of (n − 1)!, and so n divides (n − 1)! in this case, as well.
The first two proofs below use the fact that the residue classes modulo a prime number form a finite field (specifically, a prime field).[6]
The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes modulo p form a field, every non-zero residue a has a unique multiplicative inverse a^−1. Euclid's lemma implies[a] that the only values of a for which a ≡ a^−1 (mod p) are a ≡ ±1 (mod p). Therefore, with the exception of ±1, the factors in the expanded form of (p − 1)! can be arranged in disjoint pairs such that the product of each pair is congruent to 1 modulo p. This proves Wilson's theorem.
For example, for p = 11, one has 10! = [(1·10)]·[(2·6)(3·4)(5·9)(7·8)] ≡ [−1]·[1·1·1·1] ≡ −1 (mod 11).
Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial
g(x) = (x − 1)(x − 2) ⋯ (x − (p − 1)).
g has degree p − 1, leading term x^(p−1), and constant term (p − 1)!. Its p − 1 roots are 1, 2, ..., p − 1.
Now consider
h(x) = x^(p−1) − 1.
h also has degree p − 1 and leading term x^(p−1). Modulo p, Fermat's little theorem says it also has the same p − 1 roots, 1, 2, ..., p − 1.
Finally, consider
f(x) = g(x) − h(x).
f has degree at most p − 2 (since the leading terms cancel), and modulo p also has the p − 1 roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots. Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem.
It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime. It is immediate to deduce that the symmetric group S_p has exactly (p − 1)! elements of order p, namely the p-cycles C_p. On the other hand, each Sylow p-subgroup in S_p is a copy of C_p. Hence it follows that the number of Sylow p-subgroups is n_p = (p − 2)!. The third Sylow theorem implies
(p − 2)! ≡ 1 (mod p).
Multiplying both sides by (p − 1) gives
(p − 1)! ≡ p − 1 ≡ −1 (mod p),
that is, the result.
In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally expensive: it takes on the order of n multiplications, which is exponential in the number of digits of n.[7]
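A small sketch makes the point: the test is correct but performs Θ(n) modular multiplications, so it is hopeless for numbers of cryptographic size.

def wilson_is_prime(n):
    """Primality by Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n)."""
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = fact * k % n      # keep the running factorial reduced mod n
    return fact == n - 1         # n - 1 is the residue of -1 mod n

print([n for n in range(2, 30) if wilson_is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]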
Using Wilson's theorem, for any odd prime p = 2m + 1, we can rearrange the left-hand side of 1 · 2 ⋯ (p − 1) ≡ −1 (mod p) to obtain the equality 1 · (p − 1) · 2 · (p − 2) ⋯ m · (p − m) ≡ 1 · (−1) · 2 · (−2) ⋯ m · (−m) ≡ −1 (mod p). This becomes ∏_{j=1}^{m} j^2 ≡ (−1)^(m+1) (mod p), or (m!)^2 ≡ (−1)^(m+1) (mod p). We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number (−1) is a square (quadratic residue) mod p. For this, suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that (m!)^2 is congruent to (−1) (mod p).
Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value.
Wilson's theorem allows one to define the p-adic gamma function.
Gauss proved[8][9] that the product of the positive integers less than m and relatively prime to m satisfies
∏ k ≡ −1 (mod m) if m = 4, p^α, or 2p^α, and ∏ k ≡ 1 (mod m) otherwise (the product taken over 1 ≤ k ≤ m with gcd(k, m) = 1),
where p represents an odd prime and α a positive integer. That is, the product of the positive integers less than m and relatively prime to m is one less than a multiple of m when m is equal to 4, or a power of an odd prime, or twice a power of an odd prime; otherwise, the product is one more than a multiple of m. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m.
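This generalization is easy to check numerically; a small sketch (the function name is illustrative):

from math import gcd

def unit_product(m):
    """Product, reduced mod m, of the positive integers below m coprime to m."""
    result = 1
    for k in range(1, m):
        if gcd(k, m) == 1:
            result = result * k % m
    return result

for m in [4, 9, 10, 12, 15, 18, 25]:
    print(m, unit_product(m))
# m = 4, 9, 10, 18, 25 (of the forms 4, p^a, 2p^a) give m - 1, i.e. -1 mod m;
# m = 12 and 15 give 1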
Original: Inoltre egli intravide anche il teorema di Wilson, come risulta dall'enunciato seguente:"Productus continuorum usque ad numerum qui antepraecedit datum divisus per datum relinquit 1 (vel complementum ad unum?) si datus sit primitivus. Si datus sit derivativus relinquet numerum qui cum dato habeat communem mensuram unitate majorem."Egli non giunse pero a dimostrarlo.
Translation: In addition, he [Leibniz] also glimpsed Wilson's theorem, as shown in the following statement:"The product of all integers preceding the given integer, when divided by the given integer, leaves 1 (or the complement of 1?) if the given integer be prime. If the given integer be composite, it leaves a number which has a common factor with the given integer [which is] greater than one."However, he didn't succeed in proving it.
|
https://en.wikipedia.org/wiki/Wilson%27s_theorem
|
In mathematics, specifically in group theory, an elementary abelian group is an abelian group in which all elements other than the identity have the same order. This common order must be a prime number, and the elementary abelian groups in which the common order is p are a particular kind of p-group.[1][2] A group for which p = 2 (that is, an elementary abelian 2-group) is sometimes called a Boolean group.[3]
Every elementary abelian p-group is a vector space over the prime field with p elements, and conversely every such vector space is an elementary abelian group.
By the classification of finitely generated abelian groups, or by the fact that every vector space has a basis, every finite elementary abelian group must be of the form (Z/pZ)^n for n a non-negative integer (sometimes called the group's rank). Here, Z/pZ denotes the cyclic group of order p (or equivalently the integers mod p), and the superscript notation means the n-fold direct product of groups.[2]
In general, a (possibly infinite) elementary abelian p-group is a direct sum of cyclic groups of order p.[4] (Note that in the finite case the direct product and direct sum coincide, but this is not so in the infinite case.)
Suppose V ≅ (Z/pZ)^n is a finite elementary abelian group. Since Z/pZ ≅ F_p, the finite field of p elements, we have V = (Z/pZ)^n ≅ (F_p)^n, hence V can be considered as an n-dimensional vector space over the field F_p. Note that an elementary abelian group does not in general have a distinguished basis: a choice of isomorphism V ≅ (Z/pZ)^n corresponds to a choice of basis.
To the observant reader, it may appear that (F_p)^n has more structure than the group V, in particular that it has scalar multiplication in addition to (vector/group) addition. However, V as an abelian group has a unique Z-module structure where the action of Z corresponds to repeated addition, and this Z-module structure is consistent with the F_p scalar multiplication. That is, c·g = g + g + ... + g (c times), where c in F_p (considered as an integer with 0 ≤ c < p) gives V a natural F_p-module structure.
As a finite-dimensional vector space, V has a basis {e_1, ..., e_n} as described in the examples. If we take {v_1, ..., v_n} to be any n elements of V, then by linear algebra we have that the mapping T(e_i) = v_i extends uniquely to a linear transformation of V. Each such T can be considered as a group homomorphism from V to V (an endomorphism), and likewise any endomorphism of V can be considered as a linear transformation of V as a vector space.
If we restrict our attention to automorphisms of V, we have Aut(V) = {T : V → V | ker T = 0} = GL_n(F_p), the general linear group of n × n invertible matrices on F_p.
The automorphism group GL(V) = GL_n(F_p) acts transitively on V \ {0} (as is true for any vector space). This in fact characterizes elementary abelian groups among all finite groups: if G is a finite group with identity e such that Aut(G) acts transitively on G \ {e}, then G is elementary abelian. (Proof: if Aut(G) acts transitively on G \ {e}, then all nonidentity elements of G have the same (necessarily prime) order. Then G is a p-group. It follows that G has a nontrivial center, which is necessarily invariant under all automorphisms, and thus equals all of G.)
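The order of this automorphism group is easy to compute: an invertible matrix is built row by row, with row i chosen outside the p^i-element span of the previous rows. A small sketch (the function name is illustrative):

from math import prod

def gl_order(n, p):
    """|GL_n(F_p)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1))."""
    return prod(p**n - p**i for i in range(n))

print(gl_order(1, 5))   # 4: Aut(Z/5Z) is cyclic of order 4
print(gl_order(2, 2))   # 6: Aut((Z/2Z)^2) permutes the three nonzero vectors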
It can also be of interest to go beyond prime-order components to prime-power order. Consider an elementary abelian group G to be of type (p, p, ..., p) for some prime p. A homocyclic group[5] (of rank n) is an abelian group of type (m, m, ..., m), i.e., the direct product of n isomorphic cyclic groups of order m, of which groups of type (p^k, p^k, ..., p^k) are a special case.
The extra special groups are extensions of elementary abelian groups by a cyclic group of order p, and are analogous to the Heisenberg group.
|
https://en.wikipedia.org/wiki/Elementary_abelian_group
|
In mathematics, the field with one element is a suggestive name for an object that should behave similarly to a finite field with a single element, if such a field could exist. This object is denoted F1, or, in a French–English pun, Fun.[1] The name "field with one element" and the notation F1 are only suggestive, as there is no field with one element in classical abstract algebra. Instead, F1 refers to the idea that there should be a way to replace sets and operations, the traditional building blocks for abstract algebra, with other, more flexible objects. Many theories of F1 have been proposed, but it is not clear which, if any, of them give F1 all the desired properties. While there is still no field with a single element in these theories, there is a field-like object whose characteristic is one.
Most proposed theories of F1 replace abstract algebra entirely. Mathematical objects such as vector spaces and polynomial rings can be carried over into these new theories by mimicking their abstract properties. This allows the development of commutative algebra and algebraic geometry on new foundations. One of the defining features of theories of F1 is that these new foundations allow more objects than classical abstract algebra does, one of which behaves like a field of characteristic one.
The possibility of studying the mathematics of F1 was originally suggested in 1956 by Jacques Tits, published in Tits 1957, on the basis of an analogy between symmetries in projective geometry and the combinatorics of simplicial complexes. F1 has been connected to noncommutative geometry and to a possible proof of the Riemann hypothesis.
In 1957, Jacques Tits introduced the theory of buildings, which relate algebraic groups to abstract simplicial complexes. One of the assumptions is a non-triviality condition: if the building is an n-dimensional abstract simplicial complex, and if k < n, then every k-simplex of the building must be contained in at least three n-simplices. This is analogous to the condition in classical projective geometry that a line must contain at least three points. However, there are degenerate geometries that satisfy all the conditions to be a projective geometry except that the lines admit only two points. The analogous objects in the theory of buildings are called apartments. Apartments play such a constituent role in the theory of buildings that Tits conjectured the existence of a theory of projective geometry in which the degenerate geometries would have equal standing with the classical ones. This geometry would take place, he said, over a field of characteristic one.[2] Using this analogy it was possible to describe some of the elementary properties of F1, but it was not possible to construct it.
After Tits' initial observations, little progress was made until the early 1990s. In the late 1980s, Alexander Smirnov gave a series of talks in which he conjectured that the Riemann hypothesis could be proven by considering the integers as a curve over a field with one element. By 1991, Smirnov had taken some steps towards algebraic geometry over F1,[3] introducing extensions of F1 and using them to handle the projective line P1 over F1.[3] Algebraic numbers were treated as maps to this P1, and conjectural approximations to the Riemann–Hurwitz formula for these maps were suggested. These approximations imply solutions to important problems like the abc conjecture. The extensions of F1 were later on denoted as Fq with q = 1^n. Together with Mikhail Kapranov, Smirnov went on to explore how algebraic and number-theoretic constructions in prime characteristic might look in "characteristic one", culminating in an unpublished work released in 1995.[4] In 1993, Yuri Manin gave a series of lectures on zeta functions where he proposed developing a theory of algebraic geometry over F1.[5] He suggested that zeta functions of varieties over F1 would have very simple descriptions, and he proposed a relation between the K-theory of F1 and the homotopy groups of spheres. This inspired several people to attempt to construct explicit theories of F1-geometry.
The first published definition of a variety over F1 came from Christophe Soulé in 1999,[6] who constructed it using algebras over the complex numbers and functors from categories of certain rings.[6] In 2000, Zhu proposed that F1 was the same as F2 except that the sum of one and one was one, not zero.[7] Deitmar suggested that F1 should be found by forgetting the additive structure of a ring and focusing on the multiplication.[8] Toën and Vaquié built on Hakim's theory of relative schemes and defined F1 using symmetric monoidal categories.[9] Their construction was later shown to be equivalent to Deitmar's by Vezzani.[10] Nikolai Durov constructed F1 as a commutative algebraic monad.[11] Borger used descent to construct it from the finite fields and the integers.[12]
Alain Connes and Caterina Consani developed both Soulé's and Deitmar's notions by "gluing" the category of multiplicative monoids and the category of rings to create a new category 𝔐ℜ, then defining F1-schemes to be a particular kind of representable functor on 𝔐ℜ.[13] Using this, they managed to provide a notion of several number-theoretic constructions over F1 such as motives and field extensions, as well as constructing Chevalley groups over F_{1^2}. Along with Matilde Marcolli, Connes and Consani have also connected F1 with noncommutative geometry.[14] It has also been suggested to have connections to the unique games conjecture in computational complexity theory.[15]
Oliver Lorscheid, along with others, has recently achieved Tits' original aim of describing Chevalley groups over F1 by introducing objects called blueprints, which are a simultaneous generalisation of both semirings and monoids.[16][17] These are used to define so-called "blue schemes", one of which is Spec F1.[18] Lorscheid's ideas depart somewhat from other ideas of groups over F1, in that the F1-scheme is not itself the Weyl group of its base extension to normal schemes. Lorscheid first defines the Tits category, a full subcategory of the category of blue schemes, and defines the "Weyl extension", a functor from the Tits category to Set. A Tits–Weyl model of an algebraic group 𝒢 is a blue scheme G with a group operation that is a morphism in the Tits category, whose base extension is 𝒢 and whose Weyl extension is isomorphic to the Weyl group of 𝒢.
F1-geometry has been linked to tropical geometry, via the fact that semirings (in particular, tropical semirings) arise as quotients of some monoid semiring N[A] of finite formal sums of elements of a monoid A, which is itself an F1-algebra. This connection is made explicit by Lorscheid's use of blueprints.[19] The Giansiracusa brothers have constructed a tropical scheme theory, for which their category of tropical schemes is equivalent to the category of Toën–Vaquié F1-schemes.[20] This category embeds faithfully, but not fully, into the category of blue schemes, and is a full subcategory of the category of Durov schemes.
One motivation for F1 comes from algebraic number theory. Weil's proof of the Riemann hypothesis for curves over finite fields starts with a curve C over a finite field k, which comes equipped with a function field F, which is a field extension of k. Each such function field gives rise to a Hasse–Weil zeta function ζ_F, and the Riemann hypothesis for finite fields determines the zeroes of ζ_F. Weil's proof then uses various geometric properties of C to study ζ_F.
The field of rational numbers Q is linked in a similar way to the Riemann zeta function, but Q is not the function field of a variety. Instead, Q is the function field of the scheme Spec Z. This is a one-dimensional scheme (also known as an algebraic curve), and so there should be some "base field" that this curve lies over, of which Q would be a field extension (in the same way that C is a curve over k, and F is an extension of k). The hope of F1-geometry is that a suitable object F1 could play the role of this base field, which would allow for a proof of the Riemann hypothesis by mimicking Weil's proof with F1 in place of k.
Geometry over a field with one element is also motivated by Arakelov geometry, where Diophantine equations are studied using tools from complex geometry. The theory involves complicated comparisons between finite fields and the complex numbers. Here the existence of F1 is useful for technical reasons.
F1 cannot be a field because by definition all fields must contain two distinct elements, the additive identity zero and the multiplicative identity one. Even if this restriction is dropped (for instance by letting the additive and multiplicative identities be the same element), a ring with one element must be the zero ring, which does not behave like a finite field. For instance, all modules over the zero ring are isomorphic (as the only element of such a module is the zero element). However, one of the key motivations of F1 is the description of sets as "F1-vector spaces" – if finite sets were modules over the zero ring, then every finite set would be the same size, which is not the case. Moreover, the spectrum of the trivial ring is empty, but the spectrum of a field has one point.
Various structures on a set are analogous to structures on a projective space, and can be computed in the same way:
The number of elements of P(F_q^n) = P^(n−1)(F_q), the (n − 1)-dimensional projective space over the finite field F_q, is the q-integer[24]
[n]_q = (q^n − 1)/(q − 1) = 1 + q + q^2 + ⋯ + q^(n−1).
Taking q = 1 yields [n]_q = n.
The expansion of the q-integer into a sum of powers of q corresponds to the Schubert cell decomposition of projective space.
There are n! permutations of a set with n elements, and [n]!_q maximal flags in F_q^n, where
[n]!_q = [1]_q [2]_q ⋯ [n]_q
is the q-factorial. Indeed, a permutation of a set can be considered a filtered set, as a flag is a filtered vector space: for instance, the ordering (0, 1, 2) of the set {0, 1, 2} corresponds to the filtration {0} ⊂ {0, 1} ⊂ {0, 1, 2}.
The binomial coefficient
n!/(m! (n − m)!)
gives the number of m-element subsets of an n-element set, and the q-binomial coefficient
[n]!_q/([m]!_q [n − m]!_q)
gives the number of m-dimensional subspaces of an n-dimensional vector space over F_q.
The expansion of the q-binomial coefficient into a sum of powers of q corresponds to the Schubert cell decomposition of the Grassmannian.
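These q-analogs are straightforward to compute; a small sketch checking both the q = 1 degeneration and a prime-power count (function names are illustrative):

from math import comb, factorial

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^(n-1): points of P^(n-1)(F_q); equals n when q = 1."""
    return sum(q**i for i in range(n))

def q_factorial(n, q):
    """[n]!_q = [1]_q [2]_q ... [n]_q: maximal flags in F_q^n; equals n! when q = 1."""
    result = 1
    for k in range(1, n + 1):
        result *= q_int(k, q)
    return result

def q_binomial(n, m, q):
    """Gaussian binomial: m-dimensional subspaces of F_q^n; equals C(n, m) when q = 1."""
    return q_factorial(n, q) // (q_factorial(m, q) * q_factorial(n - m, q))

assert q_int(4, 1) == 4 and q_factorial(4, 1) == factorial(4)
assert q_binomial(4, 2, 1) == comb(4, 2)
print(q_binomial(4, 2, 2))   # 35 two-dimensional subspaces of (F_2)^4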
Deitmar's construction of monoid schemes[25]has been called "the very core ofF1‑geometry",[16]as most other theories ofF1‑geometry contain descriptions of monoid schemes. Morally, it mimicks the theory ofschemesdeveloped in the 1950s and 1960s by replacingcommutative ringswithmonoids. The effect of this is to "forget" the additive structure of the ring, leaving only the multiplicative structure. For this reason, it is sometimes called "non-additive geometry".
A multiplicative monoid is a monoid A that also contains an absorbing element 0 (distinct from the identity 1 of the monoid), such that 0a = 0 for every a in the monoid A. The field with one element is then defined to be F1 = {0, 1}, the multiplicative monoid of the field with two elements, which is initial in the category of multiplicative monoids. A monoid ideal in a monoid A is a subset I that is multiplicatively closed, contains 0, and such that IA = {ra : r ∈ I, a ∈ A} = I. Such an ideal is prime if A ∖ I is multiplicatively closed and contains 1.
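The definition is small enough to model directly. The toy encoding below (our own, not a standard package) represents a multiplicative monoid as a set of elements plus a multiplication table, and checks the axioms for F1 = {0, 1}:

```python
# Toy encoding of a multiplicative monoid as (elements, multiplication
# table); illustrative only, not a standard package.
F1 = ((0, 1), {(a, b): a * b for a in (0, 1) for b in (0, 1)})

def check_multiplicative_monoid(M):
    elems, mul = M
    assert 0 in elems and 1 in elems and 0 != 1
    for a in elems:
        assert mul[(1, a)] == a == mul[(a, 1)]   # 1 is a two-sided identity
        assert mul[(0, a)] == 0 == mul[(a, 0)]   # 0 is absorbing
        for b in elems:
            for c in elems:                      # associativity
                assert mul[(mul[(a, b)], c)] == mul[(a, mul[(b, c)])]
    print("all multiplicative-monoid axioms hold")

check_multiplicative_monoid(F1)
```

Initiality is visible at this level too: for any multiplicative monoid A there is exactly one homomorphism F1 → A, namely 0 ↦ 0_A and 1 ↦ 1_A.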
For monoids A and B, a monoid homomorphism is a function f : A → B such that f(0_A) = 0_B, f(1_A) = 1_B, and f(ab) = f(a)f(b) for every a and b in A.
The spectrum of a monoid A, denoted Spec A, is the set of prime ideals of A. The spectrum of a monoid can be given a Zariski topology, by defining basic open sets

U_h = {p ∈ Spec A : h ∉ p}

for each h in A. A monoidal space is a topological space along with a sheaf of multiplicative monoids called the structure sheaf. An affine monoid scheme is a monoidal space that is isomorphic to the spectrum of a monoid, and a monoid scheme is a monoidal space that has an open cover by affine monoid schemes.
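Prime ideals of a small monoid can be enumerated by brute force directly from these definitions. The sketch below (continuing the same toy encoding) takes A = F1[x]/(x^3) = {0, 1, x, x^2} and finds its single prime ideal {0, x, x^2}, so Spec A is a one-point space:

```python
from itertools import combinations

# A = F1[x]/(x^3), with 0, 1, x, x^2 encoded as 0, 1, 2, 3.
elems = (0, 1, 2, 3)

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    i, j = a - 1, b - 1          # exponents: 1 -> x^0, 2 -> x^1, 3 -> x^2
    return 0 if i + j >= 3 else i + j + 1

def is_ideal(I):
    return 0 in I and all(mul(r, a) in I for r in I for a in elems)

def is_prime(I):
    comp = [a for a in elems if a not in I]
    return (is_ideal(I) and 1 in comp
            and all(mul(a, b) in comp for a in comp for b in comp))

primes = [set(I) for k in range(len(elems) + 1)
          for I in combinations(elems, k) if is_prime(set(I))]
print(primes)    # [{0, 2, 3}], i.e. the single prime ideal {0, x, x^2}
```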
Monoid schemes can be turned into ring-theoretic schemes by means of a base extension functor − ⊗_F1 Z that sends the monoid A to the Z-module (i.e. ring) Z[A] / ⟨0_A⟩, and a monoid homomorphism f : A → B extends to a ring homomorphism f_Z : A ⊗_F1 Z → B ⊗_F1 Z that is linear as a Z-module homomorphism. The base extension of an affine monoid scheme is defined via the formula

Spec(A) ×_Spec(F1) Spec(Z) = Spec(A ⊗_F1 Z),

which in turn defines the base extension of a general monoid scheme.
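As a sketch of what Z[A]/⟨0_A⟩ looks like computationally (a hand-rolled monoid algebra in the same toy encoding, not a library API), elements are integer linear combinations of the nonzero monoid elements, and multiplication convolves them while discarding every term that lands on the absorbing element. For A = F1[x]/(x^3) the result is the ring Z[x]/(x^3):

```python
def mul(a, b):                        # monoid product of F1[x]/(x^3), as above
    if a == 0 or b == 0:
        return 0
    i, j = a - 1, b - 1
    return 0 if i + j >= 3 else i + j + 1

def ring_mul(u, v):
    """Multiply in Z[A]/<0_A>: elements are dicts {nonzero monoid
    element: integer coefficient}; convolve via the monoid product and
    drop every term that lands on the absorbing element 0."""
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            p = mul(a, b)
            if p != 0:                # kill the ideal <0_A>
                out[p] = out.get(p, 0) + ca * cb
    return {k: c for k, c in out.items() if c != 0}

u = {1: 2, 2: 1}                      # 2 + x
v = {2: 1, 3: 5}                      # x + 5x^2
print(ring_mul(u, v))                 # {2: 2, 3: 11}: 2x + 11x^2, since x^3 = 0
```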
This construction achieves many of the desired properties of F1-geometry: Spec F1 consists of a single point, so behaves similarly to the spectrum of a field in conventional geometry, and the category of affine monoid schemes is dual to the category of multiplicative monoids, mirroring the duality of affine schemes and commutative rings. Furthermore, this theory satisfies the combinatorial properties expected of F1 mentioned in previous sections; for instance, projective space over F1 of dimension n as a monoid scheme is identical to an apartment of projective space over Fq of dimension n when described as a building.
However, monoid schemes do not fulfill all of the expected properties of a theory of F1-geometry, as the only varieties that have monoid scheme analogues are toric varieties.[26] More precisely, if X is a monoid scheme whose base extension is a flat, separated, connected scheme of finite type, then the base extension of X is a toric variety. Other notions of F1-geometry, such as that of Connes–Consani,[27] build on this model to describe F1-varieties that are not toric.
One may define field extensions of the field with one element as the group of roots of unity, or more finely (with a geometric structure) as the group scheme of roots of unity. This is non-naturally isomorphic to the cyclic group of order n, the isomorphism depending on choice of a primitive root of unity:[28]

F_{1^n} = μ_n.
Thus a vector space of dimension d over F_{1^n} is a finite set of order dn on which the roots of unity act freely, together with a base point.
From this point of view the finite field Fq is an algebra over F_{1^n}, of dimension d = (q − 1)/n for any n that is a factor of q − 1 (for example n = q − 1 or n = 1). This corresponds to the fact that the group of units of a finite field Fq (which are the q − 1 non-zero elements) is a cyclic group of order q − 1, on which the n-th roots of unity (a cyclic group of order n, for any n dividing q − 1) act freely by multiplication, and the zero element of the field is the base point.
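A quick numeric illustration in plain Python (taking q = p prime, so field arithmetic is just arithmetic mod p): for p = 13 the units form a cyclic group of order 12, and for each divisor n of 12 the n-th roots of unity partition the units into (q − 1)/n free orbits, matching the dimension count above:

```python
p = 13                                # F_13: units form a cyclic group of order 12

units = list(range(1, p))
# find a generator, confirming the unit group is cyclic
g = next(a for a in units if len({pow(a, k, p) for k in range(1, p)}) == p - 1)

for n in (1, 2, 3, 4, 6, 12):         # the divisors of q - 1 = 12
    mu_n = {pow(g, (p - 1) // n * k, p) for k in range(n)}  # n-th roots of unity
    # orbits of mu_n acting on the units by multiplication
    orbits = {frozenset((u * z) % p for z in mu_n) for u in units}
    assert all(len(orb) == n for orb in orbits)   # the action is free
    assert len(orbits) == (p - 1) // n            # dimension d = (q - 1)/n
print("F_13 has F_1^n-dimension (q-1)/n for every n dividing 12")
```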
Similarly, the real numbers R are an algebra over F_{1^2}, of infinite dimension, as the real numbers contain ±1, but no other roots of unity, and the complex numbers C are an algebra over F_{1^n} for all n, again of infinite dimension, as the complex numbers have all roots of unity.
From this point of view, any phenomenon that only depends on a field having roots of unity can be seen as coming from F1 – for example, the discrete Fourier transform (complex-valued) and the related number-theoretic transform (Z/nZ-valued).
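For instance, here is a minimal number-theoretic transform (a direct O(n^2) evaluation rather than an optimised FFT; the modulus and root are chosen purely for illustration). It uses nothing about Z/17Z beyond the fact that ω = 2 is a primitive 8th root of unity there:

```python
p, n = 17, 8
w = 2          # 2^4 = 16 = -1 mod 17 and 2^8 = 1 mod 17,
               # so 2 is a primitive 8th root of unity in Z/17Z

def ntt(a, root):
    """Direct O(n^2) discrete Fourier transform over Z/pZ."""
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

x = [3, 1, 4, 1, 5, 9, 2, 6]
X = ntt(x, w)                                  # forward transform
w_inv = pow(w, p - 2, p)                       # w^-1 mod p (Fermat's little theorem)
n_inv = pow(n, p - 2, p)                       # n^-1 mod p
x_back = [(n_inv * c) % p for c in ntt(X, w_inv)]
assert x_back == x                             # the transform inverts exactly
print(X)
```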
|
https://en.wikipedia.org/wiki/Field_with_one_element
|