In number theory, Fermat's little theorem states that if p is a prime number, then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as $a^{p}\equiv a{\pmod {p}}.$
For example, if a = 2 and p = 7, then 2^7 = 128, and 128 − 2 = 126 = 7 × 18 is an integer multiple of 7.
If a is not divisible by p, that is, if a is coprime to p, then Fermat's little theorem is equivalent to the statement that a^{p−1} − 1 is an integer multiple of p, or in symbols:[1][2] $a^{p-1}\equiv 1{\pmod {p}}.$
For example, if a = 2 and p = 7, then 2^6 = 64, and 64 − 1 = 63 = 7 × 9 is a multiple of 7.
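Both congruences are easy to check numerically for small cases; a minimal Python sketch (the test values are chosen arbitrarily for illustration):

```python
# Check a^p ≡ a (mod p), and a^(p-1) ≡ 1 (mod p) when gcd(a, p) = 1,
# for a few small primes p and bases a.
from math import gcd

for p in (2, 3, 5, 7, 11, 13):
    for a in range(1, 20):
        assert pow(a, p, p) == a % p              # a^p ≡ a (mod p)
        if gcd(a, p) == 1:
            assert pow(a, p - 1, p) == 1          # a^(p-1) ≡ 1 (mod p)
print("Fermat's little theorem holds for all tested cases")
```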
Fermat's little theorem is the basis for the Fermat primality test and is one of the fundamental results of elementary number theory. The theorem is named after Pierre de Fermat, who stated it in 1640. It is called the "little theorem" to distinguish it from Fermat's Last Theorem.[3]
Pierre de Fermat first stated the theorem in a letter dated October 18, 1640, to his friend and confidant Frénicle de Bessy. His formulation is equivalent to the following:[3]
If p is a prime and a is any integer not divisible by p, then a^{p−1} − 1 is divisible by p.
Fermat's original statement was
Tout nombre premier mesure infailliblement une des puissances −1 de quelque progression que ce soit, et l'exposant de la dite puissance est sous-multiple du nombre premier donné −1 ; et, après qu'on a trouvé la première puissance qui satisfait à la question, toutes celles dont les exposants sont multiples de l'exposant de la première satisfont tout de même à la question.
This may be translated, with explanations and formulas added in brackets for easier understanding, as:
Every prime number [p] divides necessarily one of the powers minus one of any [geometric] progression [a, a^2, a^3, …] [that is, there exists t such that p divides a^t − 1], and the exponent of this power [t] divides the given prime minus one [divides p − 1]. After one has found the first power [t] that satisfies the question, all those whose exponents are multiples of the exponent of the first one satisfy similarly the question [that is, all multiples of the first t have the same property].
Fermat did not consider the case where a is a multiple of p nor prove his assertion, only stating:[4]
Et cette proposition est généralement vraie en toutes progressions et en tous nombres premiers; de quoi je vous envoierois la démonstration, si je n'appréhendois d'être trop long.
(And this proposition is generally true for all series [sic] and for all prime numbers; I would send you a demonstration of it, if I did not fear going on for too long.)[5]
Euler provided the first published proof in 1736, in a paper titled "Theorematum Quorundam ad Numeros Primos Spectantium Demonstratio" (in English: "Demonstration of Certain Theorems Concerning Prime Numbers") in the Proceedings of the St. Petersburg Academy,[6][7] but Leibniz had given virtually the same proof in an unpublished manuscript from sometime before 1683.[3]
The term "Fermat's little theorem" was probably first used in print in 1913 in Zahlentheorie by Kurt Hensel:[8]
Für jede endliche Gruppe besteht nun ein Fundamentalsatz, welcher der kleine Fermatsche Satz genannt zu werden pflegt, weil ein ganz spezieller Teil desselben zuerst von Fermat bewiesen worden ist.
(There is a fundamental theorem holding in every finite group, usually called Fermat's little theorem because Fermat was the first to have proved a very special part of it.)
An early use in English occurs in A. A. Albert's Modern Higher Algebra (1937), which refers to "the so-called 'little' Fermat theorem" on page 206.[9]
Some mathematicians independently made the related hypothesis (sometimes incorrectly called the Chinese hypothesis) that 2^p ≡ 2 (mod p) if and only if p is prime. Indeed, the "if" part is true, and it is a special case of Fermat's little theorem. However, the "only if" part is false: for example, 2^341 ≡ 2 (mod 341), but 341 = 11 × 31 is a pseudoprime to base 2. See below.
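This counterexample is easy to verify with fast modular exponentiation; a small Python sketch:

```python
# 341 = 11 × 31 is composite, yet it passes the base-2 Fermat test.
n = 341
print(pow(2, n, n) == 2)     # True: 2^341 ≡ 2 (mod 341), a pseudoprime to base 2
print(n == 11 * 31)          # True: n is composite
print(pow(3, n, n) == 3)     # False: base 3 already exposes 341 as composite
```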
Several proofs of Fermat's little theorem are known. It is frequently proved as a corollary of Euler's theorem.
Euler's theorem is a generalization of Fermat's little theorem: for any modulus n and any integer a coprime to n, one has
$a^{\varphi (n)}\equiv 1{\pmod {n}},$
where φ(n) denotes Euler's totient function (which counts the integers from 1 to n that are coprime to n). Fermat's little theorem is indeed a special case, because if n is a prime number, then φ(n) = n − 1.
A corollary of Euler's theorem is: for every positive integer n, if the integer a is coprime with n, then $x\equiv y{\pmod {\varphi (n)}}$ implies $a^{x}\equiv a^{y}{\pmod {n}}$ for any integers x and y.
This follows from Euler's theorem, since, if $x\equiv y{\pmod {\varphi (n)}}$, then x = y + kφ(n) for some integer k, and one has $a^{x}=a^{y+\varphi (n)k}=a^{y}(a^{\varphi (n)})^{k}\equiv a^{y}\cdot 1^{k}\equiv a^{y}{\pmod {n}}.$
If n is prime, this is also a corollary of Fermat's little theorem. This is widely used in modular arithmetic, because it allows reducing modular exponentiation with large exponents to exponents smaller than n.
Euler's theorem is used with n not prime in public-key cryptography, specifically in the RSA cryptosystem, typically in the following way:[10] if $y=x^{e}{\pmod {n}},$ retrieving x from the values of y, e and n is easy if one knows φ(n).[11] In fact, the extended Euclidean algorithm allows computing the modular inverse of e modulo φ(n), that is, the integer f such that $ef\equiv 1{\pmod {\varphi (n)}}.$ It follows that $x\equiv x^{ef}\equiv (x^{e})^{f}\equiv y^{f}{\pmod {n}}.$
On the other hand, if n = pq is the product of two distinct prime numbers, then φ(n) = (p − 1)(q − 1). In this case, finding f from n and e is as difficult as computing φ(n) (this has not been proven, but no algorithm is known for computing f without knowing φ(n)). Knowing only n, the computation of φ(n) has essentially the same difficulty as the factorization of n, since φ(n) = (p − 1)(q − 1), and conversely, the factors p and q are the (integer) solutions of the equation x^2 − (n − φ(n) + 1)x + n = 0.
The basic idea of the RSA cryptosystem is thus: if a message x is encrypted as y = x^e (mod n), using public values of n and e, then, with the current knowledge, it cannot be decrypted without finding the (secret) factors p and q of n.
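The mechanism can be illustrated with a toy example; the primes and message below are hypothetical, chosen only to keep the numbers readable (real RSA uses vastly larger primes and padding):

```python
# Toy RSA sketch: decryption works because e·f ≡ 1 (mod φ(n)), so y^f ≡ x^(ef) ≡ x (mod n).
p, q = 61, 53                    # secret primes (toy-sized, for illustration only)
n = p * q                        # public modulus, 3233
phi = (p - 1) * (q - 1)          # φ(n) = (p-1)(q-1) = 3120
e = 17                           # public exponent, coprime to φ(n)
f = pow(e, -1, phi)              # private exponent via the extended Euclidean algorithm
x = 65                           # message
y = pow(x, e, n)                 # encryption: y = x^e mod n
assert pow(y, f, n) == x         # decryption recovers x
print(f"n={n}, e={e}, f={f}, ciphertext={y}")
```

(`pow(e, -1, phi)` computes a modular inverse and requires Python 3.8 or later.)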
Fermat's little theorem is also related to the Carmichael function and Carmichael's theorem, as well as to Lagrange's theorem in group theory.
The converse of Fermat's little theorem fails for Carmichael numbers. However, a slightly weaker variant of the converse is Lehmer's theorem:
If there exists an integer a such that $a^{p-1}\equiv 1{\pmod {p}}$ and for all primes q dividing p − 1 one has $a^{(p-1)/q}\not \equiv 1{\pmod {p}},$ then p is prime.
This theorem forms the basis for the Lucas primality test, an important primality test, and Pratt's primality certificate.
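Lehmer's criterion translates directly into code. A sketch (the trial-division factorization of p − 1 is only suitable for small inputs; the function names are illustrative):

```python
def prime_factors(m):
    """Distinct prime factors of m by trial division (fine for small m)."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def lucas_is_prime(p):
    """Lucas primality test: search for an a certifying that p is prime."""
    if p < 2:
        return False
    if p == 2:
        return True
    qs = prime_factors(p - 1)
    for a in range(2, p):
        if pow(a, p - 1, p) != 1:
            return False                 # Fermat condition fails: p is composite
        if all(pow(a, (p - 1) // q, p) != 1 for q in qs):
            return True                  # a has order p - 1, so p is prime
    return False

print([m for m in range(2, 60) if lucas_is_prime(m)])   # the primes below 60
```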
If a and p are coprime numbers such that a^{p−1} − 1 is divisible by p, then p need not be prime. If it is not, then p is called a (Fermat) pseudoprime to base a. The first pseudoprime to base 2 was found in 1820 by Pierre Frédéric Sarrus: 341 = 11 × 31.[12][13]
A number p that is a Fermat pseudoprime to base a for every number a coprime to p is called a Carmichael number. Alternately, any number p satisfying the equality $\gcd \left(p,\sum _{a=1}^{p-1}a^{p-1}\right)=1$ is either a prime or a Carmichael number.
The Miller–Rabin primality test uses the following extension of Fermat's little theorem:[14]
If p is an odd prime and p − 1 = 2^s d with s > 0 and d odd > 0, then for every a coprime to p, either a^d ≡ 1 (mod p) or there exists r such that 0 ≤ r < s and a^{2^r d} ≡ −1 (mod p).
This result may be deduced from Fermat's little theorem by the fact that, if p is an odd prime, then the integers modulo p form a finite field, in which 1 modulo p has exactly two square roots, 1 and −1 modulo p.
Note that a^d ≡ 1 (mod p) holds trivially for a ≡ 1 (mod p), because the congruence relation is compatible with exponentiation. And a^d = a^{2^0 d} ≡ −1 (mod p) holds trivially for a ≡ −1 (mod p) since d is odd, for the same reason. That is why one usually chooses a random a in the interval 1 < a < p − 1.
The Miller–Rabin test uses this property in the following way: given an odd integer p for which primality has to be tested, write p − 1 = 2^s d with s > 0 and d odd > 0, and choose a random a such that 1 < a < p − 1; then compute b = a^d mod p; if b is not 1 nor −1, then square it repeatedly modulo p until you get −1 or have squared s − 1 times. If b ≠ 1 and −1 has not been obtained by squaring, then p is composite and a is a witness for the compositeness of p. Otherwise, p is a strong probable prime to base a; that is, it may be prime or not. If p is composite, the probability that the test declares it a strong probable prime anyway is at most 1⁄4, in which case p is a strong pseudoprime, and a is a strong liar. Therefore after k non-conclusive random tests, the probability that p is composite is at most 4^{−k}, and may thus be made as low as desired by increasing k.
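The procedure just described fits in a few lines of Python; a minimal sketch (not hardened for cryptographic use, and the default number of rounds k is an arbitrary illustrative choice):

```python
import random

def miller_rabin(p, k=20):
    """Return False if p is certainly composite, True if p is a strong probable prime."""
    if p < 4:
        return p in (2, 3)
    if p % 2 == 0:
        return False
    s, d = 0, p - 1
    while d % 2 == 0:                       # write p - 1 = 2^s * d with d odd
        s, d = s + 1, d // 2
    for _ in range(k):
        a = random.randrange(2, p - 1)      # random base with 1 < a < p - 1
        b = pow(a, d, p)
        if b == 1 or b == p - 1:
            continue                        # a is not a witness
        for _ in range(s - 1):
            b = pow(b, 2, p)
            if b == p - 1:
                break                       # reached -1: a is not a witness
        else:
            return False                    # a witnesses that p is composite
    return True                             # strong probable prime (error ≤ 4^(-k))

print(miller_rabin(341), miller_rabin(2**61 - 1))   # False True
```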
In summary, the test either proves that a number is composite or asserts that it is prime with a probability of error that may be chosen as low as desired. The test is very simple to implement and computationally more efficient than all known deterministic tests. Therefore, it is generally used before starting a proof of primality.
Source: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
In mathematics, a permutation of a set can mean one of two different things: an arrangement of its members into a sequence or linear order, or the act of rearranging its members, that is, a bijection from the set to itself.
An example of the first meaning is the six permutations (orderings) of the set {1, 2, 3}: written as tuples, they are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1). Anagrams of a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations of finite sets is an important topic in combinatorics and group theory.
Permutations are used in almost every branch of mathematics and in many other fields of science. In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences.
The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n.
According to the second meaning, a permutation of a set S is defined as a bijection from S to itself.[2][3] That is, it is a function from S to S for which every element occurs exactly once as an image value. Such a function $\sigma :S\to S$ is equivalent to the rearrangement of the elements of S in which each element i is replaced by the corresponding $\sigma (i)$. For example, the permutation (3, 1, 2) corresponds to the function $\sigma$ defined as $\sigma (1)=3,\quad \sigma (2)=1,\quad \sigma (3)=2.$ The collection of all permutations of a set forms a group called the symmetric group of the set. The group operation is the composition of functions (performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard set $S=\{1,2,\ldots ,n\}$.
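Concretely, such a bijection can be stored as a mapping and composed like any other function; a small Python sketch (the dict representation is just one convenient choice):

```python
# Represent a permutation of {1, ..., n} as a dict mapping i to σ(i).
sigma = {1: 3, 2: 1, 3: 2}                       # the permutation (3, 1, 2) from the text

def compose(s, t):
    """Composition s∘t: apply t first, then s."""
    return {i: s[t[i]] for i in t}

identity = {i: i for i in sigma}
sigma_inv = {v: k for k, v in sigma.items()}     # the inverse bijection

print(compose(sigma, sigma_inv) == identity)     # True: σ∘σ⁻¹ = id
print(compose(sigma, sigma))                     # {1: 2, 2: 3, 3: 1}
```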
In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations in the previous sense.
Permutation-like objects called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC.
In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations.[4]
Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the Book of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels.[5]
The rule to determine the number of permutations of n objects was known in Indian culture around 1150 AD. The Lilavati by the Indian mathematician Bhāskara II contains a passage that translates as follows:
The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures.[6]
In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1.[7] He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain".[8] He then moves on to four bells and repeats the casting away argument, showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations.[9] At this point he gives up and remarks:
Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body;[10]
Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20.[11]
A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it.
The study of permutations as substitutions on n elements led to the notion of the group as an algebraic structure, through the works of Cauchy (1815 memoir).
Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher at the turn of 1932–1933.[12][13]
In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, either α, β, γ or σ, τ, ρ, π are used.[14]
A permutation can be defined as a bijection (an invertible mapping, a one-to-one and onto function) from a set S to itself:
$\sigma :S\ {\stackrel {\sim }{\longrightarrow }}\ S.$
The identity permutation is defined by $\sigma (x)=x$ for all elements $x\in S$, and can be denoted by the number 1,[a] by $\operatorname {id} =\operatorname {id} _{S}$, or by a single 1-cycle (x).[15][16] The set of all permutations of a set with n elements forms the symmetric group $S_{n}$, where the group operation is composition of functions. Thus for two permutations $\sigma$ and $\tau$ in the group $S_{n}$, their product $\pi =\sigma \tau$ is defined by:
$\pi (i)=\sigma (\tau (i)).$
Composition is usually written without a dot or other sign. In general, composition of two permutations is not commutative: $\tau \sigma \neq \sigma \tau .$
As a bijection from a set to itself, a permutation is a function that performs a rearrangement of a set, termed an active permutation or substitution. An older viewpoint sees a permutation as an ordered arrangement or list of all the elements of S, called a passive permutation.[17] According to this definition, all permutations in § One-line notation are passive. This meaning is subtly distinct from how passive (i.e. alias) is used in Active and passive transformation and elsewhere,[18][19] which would consider all permutations open to passive interpretation (regardless of whether they are in one-line notation, two-line notation, etc.).
A permutation $\sigma$ can be decomposed into one or more disjoint cycles which are the orbits of the cyclic group $\langle \sigma \rangle =\{1,\sigma ,\sigma ^{2},\ldots \}$ acting on the set S. A cycle is found by repeatedly applying the permutation to an element: $x,\sigma (x),\sigma (\sigma (x)),\ldots ,\sigma ^{k-1}(x)$, where we assume $\sigma ^{k}(x)=x$. A cycle consisting of k elements is called a k-cycle. (See § Cycle notation below.)
A fixed point of a permutation $\sigma$ is an element x which is taken to itself, that is $\sigma (x)=x$, forming a 1-cycle $(\,x\,)$. A permutation with no fixed points is called a derangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called a transposition.
Several notations are widely used to represent permutations conveniently. Cycle notation is a popular choice, as it is compact and shows the permutation's structure clearly. This article will use cycle notation unless otherwise specified.
Cauchy's two-line notation[20][21] lists the elements of S in the first row, and the image of each element below it in the second row. For example, the permutation of S = {1, 2, 3, 4, 5, 6} given by the function
$\sigma (1)=2,\ \ \sigma (2)=6,\ \ \sigma (3)=5,\ \ \sigma (4)=4,\ \ \sigma (5)=3,\ \ \sigma (6)=1$
can be written as
The elements of S may appear in any order in the first row, so this permutation could also be written:
If there is a "natural" order for the elements of S,[b] say $x_{1},x_{2},\ldots ,x_{n}$, then one uses this for the first row of the two-line notation:
Under this assumption, one may omit the first row and write the permutation in one-line notation as
that is, as an ordered arrangement of the elements of S.[22][23] Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. The one-line notation is also called the word representation.[24]
The example above would then be:
$\sigma ={\begin{pmatrix}1&2&3&4&5&6\\2&6&5&4&3&1\end{pmatrix}}=265431.$
(It is typical to use commas to separate these entries only if some have two or more digits.)
This compact form is common in elementary combinatorics and computer science. It is especially useful in applications where the permutations are to be compared as larger or smaller using lexicographic order.
Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set S, with an orbit being called a cycle. The permutation is written as a list of cycles; since distinct cycles involve disjoint sets of elements, this is referred to as "decomposition into disjoint cycles".
To write down the permutation σ in cycle notation, one proceeds as follows: write an opening parenthesis followed by an arbitrary element x of S, then trace the orbit of x, writing down the successive images under σ until the orbit returns to x, at which point the cycle is closed with a parenthesis; then continue with an element not yet written down, until every element of S appears in some cycle.
Also, it is common to omit 1-cycles, since these can be inferred: for any element x in S not appearing in any cycle, one implicitly assumes $\sigma (x)=x$.[25]
Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (a cyclic permutation having only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. For example, the one-line permutation $\sigma =265431$ can be written in cycle notation as:
$\sigma =(126)(35)(4)=(126)(35).$
This may be seen as the composition $\sigma =\kappa _{1}\kappa _{2}$ of cyclic permutations:
$\kappa _{1}=(126)=(126)(3)(4)(5),\quad \kappa _{2}=(35)=(35)(1)(2)(6).$
While permutations in general do not commute, disjoint cycles do; for example:
$\sigma =(126)(35)=(35)(126).$
Also, each cycle can be rewritten from a different starting point; for example,
$\sigma =(126)(35)=(261)(53).$
Thus one may write the disjoint cycles of a given permutation in many different ways.
A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example,
$\sigma ^{-1}={\bigl (}(126)(35){\bigr )}^{-1}=(621)(53).$
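Computing the disjoint-cycle decomposition, and the inverse, is a short exercise; a Python sketch (function names are illustrative):

```python
def cycles(sigma):
    """Disjoint cycles of a permutation given in one-line form (values 1..n)."""
    n, seen, result = len(sigma), set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:              # follow the orbit of start under σ
            seen.add(x)
            cycle.append(x)
            x = sigma[x - 1]
        result.append(tuple(cycle))
    return result

sigma = [2, 6, 5, 4, 3, 1]                # one-line notation 265431 from the text
print(cycles(sigma))                      # [(1, 2, 6), (3, 5), (4,)]

inverse = [0] * len(sigma)                # σ⁻¹ sends σ(i) back to i
for i, v in enumerate(sigma, start=1):
    inverse[v - 1] = i
print(cycles(inverse))                    # [(1, 6, 2), (3, 5), (4,)] — each cycle reversed
```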
In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation: each cycle lists its largest element first, and the cycles are sorted in increasing order of their first (largest) elements.
For example, $(513)(6)(827)(94)$ is a permutation of $S=\{1,2,\ldots ,9\}$ in canonical cycle notation.[26]
Richard Stanley calls this the "standard representation" of a permutation,[27] and Martin Aigner uses "standard form".[24] Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements.[28]
There are two ways to denote the composition of two permutations. In the most common notation, $\sigma \cdot \tau$ is the function that maps any element x to $\sigma (\tau (x))$. The rightmost permutation is applied to the argument first,[29] because the argument is written to the right of the function.
A different rule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first.[30][31][32] In this notation, the permutation is often written as an exponent, so σ acting on x is written x^σ; then the product is defined by $x^{\sigma \cdot \tau }=(x^{\sigma })^{\tau }$. This article uses the first definition, where the rightmost permutation is applied first.
The function composition operation satisfies the axioms of a group. It is associative, meaning $(\rho \sigma )\tau =\rho (\sigma \tau )$, and products of more than two permutations are usually written without parentheses. The composition operation also has an identity element (the identity permutation $\operatorname {id}$), and each permutation $\sigma$ has an inverse $\sigma ^{-1}$ (its inverse function) with $\sigma ^{-1}\sigma =\sigma \sigma ^{-1}=\operatorname {id}$.
The concept of a permutation as an ordered arrangement admits several generalizations that have been called permutations, especially in older literature.
In older literature and elementary textbooks, a k-permutation of n (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a k-element subset of an n-set.[c][33][34] The number of such k-permutations (k-arrangements) of n is denoted variously by such symbols as $P_{k}^{n}$, $_{n}P_{k}$, $^{n}\!P_{k}$, $P_{n,k}$, $P(n,k)$, or $A_{n}^{k}$,[35] computed by the formula:[36]
$P(n,k)=n\cdot (n-1)\cdot (n-2)\cdots (n-k+1),$
which is 0 when k > n, and otherwise is equal to
${\frac {n!}{(n-k)!}}.$
The product is well defined without the assumption that n is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol $(n)_{k}$ or as the k-th falling factorial power $n^{\underline {k}}$:
$P(n,k)={}_{n}P_{k}=(n)_{k}=n^{\underline {k}}.$
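A quick numeric check of these expressions (sketch; `math.perm` needs Python 3.8+):

```python
from math import factorial, perm

def falling_factorial(n, k):
    """n(n-1)...(n-k+1): the number of k-permutations of an n-set."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

n, k = 10, 3
print(falling_factorial(n, k))               # 720
print(factorial(n) // factorial(n - k))      # 720: agrees with n!/(n-k)!
print(perm(n, k))                            # 720: math.perm computes the same count
```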
This usage of the term permutation is closely associated with the term combination to mean a subset. A k-combination of a set S is a k-element subset of S: the elements of a combination are not ordered. Ordering the k-combinations of S in all possible ways produces the k-permutations of S. The number of k-combinations of an n-set, C(n, k), is therefore related to the number of k-permutations of n by:
$C(n,k)={\frac {P(n,k)}{P(k,k)}}={\frac {n!}{(n-k)!\,k!}}.$
These numbers are also known as binomial coefficients, usually denoted $\tbinom {n}{k}$:
$C(n,k)={}_{n}C_{k}={\binom {n}{k}}.$
Ordered arrangements of k elements of a set S, where repetition is allowed, are called k-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words or strings over the alphabet S. If the set S has n elements, the number of k-tuples over S is $n^{k}.$
If M is a finite multiset, then a multiset permutation is an ordered arrangement of elements of M in which each element appears a number of times equal exactly to its multiplicity in M. An anagram of a word having some repeated letters is an example of a multiset permutation.[d] If the multiplicities of the elements of M (taken in some order) are $m_{1}$, $m_{2}$, ..., $m_{l}$ and their sum (that is, the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient,[37]
${\binom {n}{m_{1},m_{2},\ldots ,m_{l}}}={\frac {n!}{m_{1}!\,m_{2}!\cdots m_{l}!}}.$
For example, the number of distinct anagrams of the word MISSISSIPPI is:[38]
${\frac {11!}{1!\,4!\,4!\,2!}}=34650.$
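The count can be reproduced directly (sketch; the brute-force cross-check uses a shorter word, since enumerating all rearrangements of the full word would be slow):

```python
from math import factorial
from collections import Counter
from itertools import permutations

word = "MISSISSIPPI"
counts = Counter(word)                      # {'M': 1, 'I': 4, 'S': 4, 'P': 2}
result = factorial(len(word))
for m in counts.values():
    result //= factorial(m)                 # divide by m_i! for each letter
print(result)                               # 34650

print(len(set(permutations("MISSI"))))      # 30 = 5!/(2!·2!): brute-force cross-check
```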
A k-permutation of a multiset M is a sequence of k elements of M in which each element appears a number of times less than or equal to its multiplicity in M (an element's repetition number).
Permutations, when considered as arrangements, are sometimes referred to as linearly ordered arrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called a circular permutation.[39][e] These can be formally defined as equivalence classes of ordinary permutations of these objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front.
Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same.
The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other.
There are (n − 1)! circular permutations of a set with n elements.
The number of permutations of n distinct objects is n!.
The number of n-permutations with k disjoint cycles is the signless Stirling number of the first kind, denoted $c(n,k)$ or $\left[{n \atop k}\right]$.[40]
The cycles (including the fixed points) of a permutation $\sigma$ of a set with n elements partition that set; so the lengths of these cycles form an integer partition of n, which is called the cycle type (or sometimes cycle structure or cycle shape) of $\sigma$. There is a "1" in the cycle type for every fixed point of $\sigma$, a "2" for every transposition, and so on. The cycle type of $\beta =(1\,2\,5\,)(\,3\,4\,)(6\,8\,)(\,7\,)$ is $(3,2,2,1).$
This may also be written in a more compact form as $[1^{1}2^{2}3^{1}]$.
More precisely, the general form is $[1^{\alpha _{1}}2^{\alpha _{2}}\dotsm n^{\alpha _{n}}]$, where $\alpha _{1},\ldots ,\alpha _{n}$ are the numbers of cycles of respective length. The number of permutations of a given cycle type is[41]
${\frac {n!}{1^{\alpha _{1}}\alpha _{1}!\,2^{\alpha _{2}}\alpha _{2}!\cdots n^{\alpha _{n}}\alpha _{n}!}}.$
The number of cycle types of a set with n elements equals the value of the partition function $p(n)$.
Polya's cycle index polynomial is a generating function which counts permutations by their cycle type.
In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However, the cycle type is preserved in the special case of conjugating a permutation $\sigma$ by another permutation $\pi$, which means forming the product $\pi \sigma \pi ^{-1}$. Here, $\pi \sigma \pi ^{-1}$ is the conjugate of $\sigma$ by $\pi$, and its cycle notation can be obtained by taking the cycle notation for $\sigma$ and applying $\pi$ to all the entries in it.[42] It follows that two permutations are conjugate exactly when they have the same cycle type.
The order of a permutation $\sigma$ is the smallest positive integer m so that $\sigma ^{m}=\mathrm {id}$. It is the least common multiple of the lengths of its cycles. For example, the order of $\sigma =(152)(34)$ is $\operatorname {lcm} (3,2)=6$.
Every permutation of a finite set can be expressed as the product of transpositions.[43] Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number.
This result can be extended so as to assign a sign, written $\operatorname {sgn} \sigma$, to each permutation: $\operatorname {sgn} \sigma =+1$ if $\sigma$ is even and $\operatorname {sgn} \sigma =-1$ if $\sigma$ is odd. Then for two permutations $\sigma$ and $\pi$,
$\operatorname {sgn} (\sigma \pi )=\operatorname {sgn} \sigma \cdot \operatorname {sgn} \pi .$
It follows that $\operatorname {sgn} \left(\sigma \sigma ^{-1}\right)=+1.$
The sign of a permutation is equal to the determinant of its permutation matrix (below).
A permutation matrix is an n×n matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation of {1, 2, ..., n}. One natural approach is to define $L_{\sigma }$ to be the linear transformation of $\mathbb {R} ^{n}$ which permutes the standard basis $\{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}$ by $L_{\sigma }(\mathbf {e} _{j})=\mathbf {e} _{\sigma (j)}$, and to define $M_{\sigma }$ to be its matrix. That is, $M_{\sigma }$ has its jth column equal to the n × 1 column vector $\mathbf {e} _{\sigma (j)}$: its (i, j) entry is 1 if i = σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations:
$M_{\sigma }M_{\tau }=M_{\sigma \tau }.$
For example, the one-line permutations $\sigma =213,\ \tau =231$ have product $\sigma \tau =132$, and the corresponding matrices are:
$M_{\sigma }M_{\tau }={\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}}=M_{\sigma \tau }.$
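This compatibility can be checked numerically; a small sketch using plain nested lists (no external libraries assumed):

```python
def perm_matrix(sigma):
    """M_σ whose j-th column is e_{σ(j)}: entry (i, j) is 1 iff i = σ(j), with σ one-line, 1-based."""
    n = len(sigma)
    return [[1 if i + 1 == sigma[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

sigma, tau = [2, 1, 3], [2, 3, 1]                 # one-line 213 and 231
sigma_tau = [sigma[t - 1] for t in tau]           # (στ)(i) = σ(τ(i))
print(sigma_tau)                                  # [1, 3, 2], i.e. one-line 132
print(matmul(perm_matrix(sigma), perm_matrix(tau)) == perm_matrix(sigma_tau))   # True
```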
It is also common in the literature to find the inverse convention, where a permutation σ is associated to the matrix $P_{\sigma }=(M_{\sigma })^{-1}=(M_{\sigma })^{T}$ whose (i, j) entry is 1 if j = σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is, $P_{\sigma }P_{\tau }=P_{\tau \sigma }$. In this correspondence, permutation matrices act on the right side of the standard $1\times n$ row vectors $(\mathbf {e} _{i})^{T}$: $(\mathbf {e} _{i})^{T}P_{\sigma }=(\mathbf {e} _{\sigma (i)})^{T}$.
The Cayley table on the right shows these matrices for permutations of 3 elements.
In some applications, the elements of the set being permuted will be compared with each other. This requires that the set S has a total order so that any two elements can be compared. The set {1, 2, ..., n} with the usual ≤ relation is the most frequently used set in these applications.
A number of properties of a permutation are directly related to the total ordering of S, considering the permutation written in one-line notation as a sequence $\sigma =\sigma (1)\sigma (2)\cdots \sigma (n)$.
An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, i is an ascent if $\sigma (i)<\sigma (i{+}1)$. For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6.
Similarly, a descent is a position i < n with $\sigma (i)>\sigma (i{+}1)$, so every i with $1\leq i<n$ is either an ascent or a descent.
An ascending run of a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation.
For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367.
If a permutation has k − 1 descents, then it must be the union of k ascending runs.[44]
The number of permutations of n with k ascents is (by definition) the Eulerian number $\textstyle \left\langle {n \atop k}\right\rangle$; this is also the number of permutations of n with k descents. Some authors however define the Eulerian number $\textstyle \left\langle {n \atop k}\right\rangle$ as the number of permutations with k ascending runs, which corresponds to k − 1 descents.[45]
An exceedance of a permutation $\sigma _{1}\sigma _{2}\ldots \sigma _{n}$ is an index j such that $\sigma _{j}>j$. If the inequality is not strict (that is, $\sigma _{j}\geq j$), then j is called a weak exceedance. The number of n-permutations with k exceedances coincides with the number of n-permutations with k descents.[46]
A record or left-to-right maximum of a permutation σ is an element i such that σ(j) < σ(i) for all j < i.
Foata's fundamental bijection transforms a permutation σ with a given canonical cycle form into the permutation $f(\sigma )={\hat {\sigma }}$ whose one-line notation has the same sequence of elements with parentheses removed.[27][47] For example:
$\sigma =(513)(6)(827)(94)={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\3&7&5&9&1&6&8&2&4\end{pmatrix}},$
${\hat {\sigma }}=513682794={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\5&1&3&6&8&2&7&9&4\end{pmatrix}}.$
Here the first element in each canonical cycle of σ becomes a record (left-to-right maximum) of ${\hat {\sigma }}$. Given ${\hat {\sigma }}$, one may find its records and insert parentheses to construct the inverse transformation $\sigma =f^{-1}({\hat {\sigma }})$. Underlining the records in the above example: ${\hat {\sigma }}={\underline {5}}\,1\,3\,{\underline {6}}\,{\underline {8}}\,2\,7\,{\underline {9}}\,4$, which allows the reconstruction of the cycles of σ.
The following table shows ${\hat {\sigma }}$ and σ for the six permutations of S = {1, 2, 3}; in each row, the bijection matches the one-line notation of ${\hat {\sigma }}$ (left) with the canonical cycle notation of σ (right).
σ̂ = f(σ) | σ = f⁻¹(σ̂)
123 = (1)(2)(3) | 123 = (1)(2)(3)
132 = (1)(32) | 132 = (1)(32)
213 = (21)(3) | 213 = (21)(3)
231 = (312) | 321 = (2)(31)
312 = (321) | 231 = (312)
321 = (2)(31) | 312 = (321)
As a first corollary, the number of n-permutations with exactly k records is equal to the number of n-permutations with exactly k cycles: this last number is the signless Stirling number of the first kind, $c(n,k)$. Furthermore, Foata's mapping takes an n-permutation with k weak exceedances to an n-permutation with k − 1 ascents.[47] For example, (2)(31) = 321 has k = 2 weak exceedances (at index 1 and 2), whereas f(321) = 231 has k − 1 = 1 ascent (at index 1; that is, from 2 to 3).
An inversion of a permutation σ is a pair (i, j) of positions where the entries of a permutation are in the opposite order: $i<j$ and $\sigma (i)>\sigma (j)$.[49] Thus a descent is an inversion at two adjacent positions. For example, σ = 23154 has inversions (i, j) = (1, 3), (2, 3), and (4, 5), where (σ(i), σ(j)) = (2, 1), (3, 1), and (5, 4).
Sometimes an inversion is defined as the pair of values (σ(i), σ(j)); this makes no difference for the number of inversions, and the reverse pair (σ(j), σ(i)) is an inversion in the above sense for the inverse permutation σ^{−1}.
The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ^{−1}. To bring a permutation with k inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions.
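Counting inversions, and checking that removing one descent at a time sorts the permutation in exactly that many adjacent swaps, can be sketched as follows:

```python
def inversions(sigma):
    """All pairs (i, j), 1-based, with i < j and σ(i) > σ(j)."""
    n = len(sigma)
    return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j]]

sigma = [2, 3, 1, 5, 4]                      # one-line 23154 from the text
inv = inversions(sigma)
print(inv)                                   # [(1, 3), (2, 3), (4, 5)]

work, swaps = list(sigma), 0
while work != sorted(work):
    for i in range(len(work) - 1):
        if work[i] > work[i + 1]:            # i is a descent: swap it away
            work[i], work[i + 1] = work[i + 1], work[i]
            swaps += 1
            break
print(swaps == len(inv))                     # True: exactly k = 3 adjacent swaps needed
```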
The number of permutations of n with k inversions is expressed by a Mahonian number.[50] This is the coefficient of $q^{k}$ in the expansion of the product
$[n]_{q}!=\prod _{m=1}^{n}\sum _{i=0}^{m-1}q^{i}=1\left(1+q\right)\left(1+q+q^{2}\right)\cdots \left(1+q+q^{2}+\cdots +q^{n-1}\right).$
The notation $[n]_{q}!$ denotes the q-factorial. This expansion commonly appears in the study of necklaces.
Let $\sigma \in S_{n}$ and $i,j\in \{1,2,\dots ,n\}$ such that $i<j$ and $\sigma (i)>\sigma (j)$.
In this case, say the weight of the inversion $(i,j)$ is $\sigma (i)-\sigma (j)$.
Kobayashi (2011) proved the enumeration formula
$\sum _{i<j,\,\sigma (i)>\sigma (j)}(\sigma (i)-\sigma (j))=|\{\tau \in S_{n}\mid \tau \leq \sigma ,\ \tau {\text{ is bigrassmannian}}\}|,$
where $\leq$ denotes the Bruhat order in the symmetric groups. This graded partial order often appears in the context of Coxeter groups.
One way to represent permutations of n things is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers d_n, d_{n−1}, ..., d_2, d_1, where d_i is a non-negative integer less than i (one may omit d_1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express N in the factorial number system, which is just a particular mixed radix representation, where, for numbers less than n!, the bases (place values or multiplication factors) for successive digits are (n − 1)!, (n − 2)!, ..., 2!, 1!. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table.
In the Lehmer code for a permutation σ, the number d_n represents the choice made for the first term σ_1, the number d_{n−1} represents the choice made for the second term σ_2 among the remaining n − 1 elements of the set, and so forth. More precisely, each d_{n+1−i} gives the number of remaining elements strictly less than the term σ_i. Since those remaining elements are bound to turn up as some later term σ_j, the digit d_{n+1−i} counts the inversions (i, j) involving i as smaller index (the number of values j for which i < j and σ_i > σ_j). The inversion table for σ is quite similar, but here d_{n+1−k} counts the number of inversions (i, j) where k = σ_j occurs as the smaller of the two values appearing in inverted order.[51]
Both encodings can be visualized by an n by n Rothe diagram[52] (named after Heinrich August Rothe) in which dots at (i, σ_i) mark the entries of the permutation, and a cross at (i, σ_j) marks the inversion (i, j); by the definition of inversions a cross appears in any square that comes both before the dot (j, σ_j) in its column, and before the dot (i, σ_i) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa.
To effectively convert a Lehmer code d_n, d_{n−1}, ..., d_2, d_1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σ_i to the element in the list that is preceded by d_{n+1−i} other ones, and remove that element from the list. To convert an inversion table d_n, d_{n−1}, ..., d_2, d_1 into the corresponding permutation, one can traverse the numbers from d_1 to d_n while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots.
Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code d_n, d_{n−1}, ..., d_2, d_1 has an ascent n − i if and only if d_i ≥ d_{i+1}.
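A sketch of the two conversion steps described above (integer → factorial-base digits → permutation via the Lehmer code); the function names are illustrative:

```python
def int_to_factorial_base(N, n):
    """Digits d_n, ..., d_2, d_1 of N in the factorial number system (0 <= N < n!)."""
    digits = []
    for base in range(1, n + 1):          # extract the least-significant digit first
        digits.append(N % base)
        N //= base
    return digits[::-1]                   # return d_n first; d_1 is always 0

def lehmer_to_permutation(code, items):
    """Interpret the digits as a Lehmer code applied to the sorted list of items."""
    pool, out = sorted(items), []
    for d in code:
        out.append(pool.pop(d))           # pick the element preceded by d remaining ones
    return out

n = 4
for N in range(6):                        # the first few permutations, in lexicographic order
    code = int_to_factorial_base(N, n)
    print(N, code, lehmer_to_permutation(code, [1, 2, 3, 4]))
```

For example, N = 5 yields the digits [0, 2, 1, 0] and the permutation [1, 4, 3, 2], the sixth of {1, 2, 3, 4} in lexicographic order.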
In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence.
An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n²/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time.
For generating random permutations of a given sequence of n values, it makes no difference whether one applies a randomly selected permutation of n to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation.
The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d_1, d_2, ..., d_n satisfying 0 ≤ d_i < i (since d_1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates.[53] While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using d_i to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequences of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated.
The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode:
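A Python rendering of that swap-based procedure (a sketch; it assumes the standard library's uniform random integers):

```python
import random

def random_permutation(a):
    """Fisher–Yates shuffle: permute the list a in place, uniformly at random."""
    n = len(a)
    for i in range(n, 1, -1):               # i = n, n-1, ..., 2
        d = random.randrange(i)             # d_i: a random element of {0, ..., i-1}
        a[d], a[i - 1] = a[i - 1], a[d]     # swap it with the final remaining element
    return a

print(random_permutation(list(range(10))))
```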
This can be combined with the initialization of the array a[i] = i as follows:
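A sketch of that combined form, where the array is initialized and shuffled in the same pass:

```python
import random

def random_permutation_of_range(n):
    """Build a uniformly random permutation of 0, ..., n-1 while initializing the array."""
    a = [None] * n
    for i in range(n):
        d = random.randrange(i + 1)         # d_{i+1}: a random element of {0, ..., i}
        a[i] = a[d]                         # first assignment (may copy an uninitialized slot)
        a[d] = i                            # second assignment writes the correct value i
    return a

print(random_permutation_of_range(10))
```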
If d_{i+1} = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i.
However, Fisher–Yates is not the fastest algorithm for generating a permutation, because Fisher–Yates is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel.[54]
There are many ways to systematically generate all permutations of a given sequence.[55] One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th-century India, and has been rediscovered frequently.[56]
The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index l greater than k such that a[k] < a[l].
3. Swap the value of a[k] with that of a[l].
4. Reverse the sequence from a[k + 1] up to and including the final element.
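A direct Python transcription of these steps (sketch):

```python
def next_permutation(a):
    """Advance list a to its lexicographic successor in place; return False if a was the last one."""
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:        # step 1: largest k with a[k] < a[k+1]
        k -= 1
    if k < 0:
        return False                          # weakly decreasing: last permutation
    l = len(a) - 1
    while a[k] >= a[l]:                       # step 2: largest l > k with a[k] < a[l]
        l -= 1
    a[k], a[l] = a[l], a[k]                   # step 3: swap
    a[k + 1:] = reversed(a[k + 1:])           # step 4: reverse the suffix
    return True

a = [1, 2, 3, 4]
next_permutation(a); print(a)                 # [1, 2, 4, 3]
next_permutation(a); print(a)                 # [1, 3, 2, 4]
```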
For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index is zero-based, the steps are as follows: the largest index k with a[k] < a[k + 1] is k = 2 (since 3 < 4); the largest index l > k with a[k] < a[l] is l = 3; swapping a[2] and a[3] gives [1, 2, 4, 3]; and reversing the (one-element) suffix after position k leaves it unchanged, so the successor of [1, 2, 3, 4] is [1, 2, 4, 3].
Following this algorithm, the next lexicographic permutation will be [1, 3, 2, 4], and the 24th permutation will be [4, 3, 2, 1], at which point a[k] < a[k + 1] does not exist, indicating that this is the last permutation.
This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort.[57]
An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.[56]
An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm,[58] said by Robert Sedgewick in 1977 to be the fastest algorithm for generating permutations in applications.[55]
The following figure shows the output of all three aforementioned algorithms for generating all permutations of length n = 4, and of six additional algorithms described in the literature.
An explicit sequence of swaps (transpositions, 2-cycles $(pq)$) is described here, each swap applied (on the left) to the previous chain providing a new permutation, such that all the permutations can be retrieved, each only once.[64] This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrieving $S_{k-1}$, continue retrieving $S_{k}\setminus S_{k-1}$ by cosets $S_{k-1}\tau _{i}$ of $S_{k-1}$ in $S_{k}$, by appropriately choosing the coset representatives $\tau _{i}$ to be described below. Since each $S_{m}$ is sequentially generated, there is a last element $\lambda _{m}\in S_{m}$. So, after generating $S_{k-1}$ by swaps, the next permutation in $S_{k}\setminus S_{k-1}$ has to be $\tau _{1}=(p_{1}k)\lambda _{k-1}$ for some $1\leq p_{1}<k$. Then all swaps that generated $S_{k-1}$ are repeated, generating the whole coset $S_{k-1}\tau _{1}$, reaching the last permutation in that coset, $\lambda _{k-1}\tau _{1}$; the next swap has to move the permutation to the representative of another coset, $\tau _{2}=(p_{2}k)\lambda _{k-1}\tau _{1}$.
Continuing the same way, one gets coset representatives $\tau _{j}=(p_{j}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}$ for the cosets of $S_{k-1}$ in $S_{k}$; the ordered set $(p_{1},\ldots ,p_{k-1})$ ($0\leq p_{i}<k$) is called the set of coset beginnings. Two of these representatives are in the same coset if and only if $\tau _{j}(\tau _{i})^{-1}=(p_{j}k)\lambda _{k-1}(p_{j-1}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i+1}k)=\varkappa _{ij}\in S_{k-1}$, that is, $\varkappa _{ij}(k)=k$. Concluding, permutations $\tau _{i}\in S_{k}-S_{k-1}$ are all representatives of distinct cosets if and only if for any $k>j>i\geq 1$, $(\lambda _{k-1})^{j-i}p_{i}\neq p_{j}$ (the no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for the $p_{i}$ values to be distinct. In the process, one gets that $\lambda _{k}=\lambda _{k-1}(p_{k-1}k)\lambda _{k-1}(p_{k-2}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}$, and this provides the recursion procedure.
EXAMPLES: obviously, for $\lambda _{2}$ one has $\lambda _{2}=(12)$; to build $\lambda _{3}$ there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choice $p_{1}=p_{2}=1$ leads to $\lambda _{3}=\lambda _{2}(13)\lambda _{2}(13)\lambda _{2}=(13)$. To continue generating $S_{4}$ one needs appropriate coset beginnings (satisfying the no repeat condition): there is a convenient choice, $p_{1}=1,p_{2}=2,p_{3}=3$, leading to $\lambda _{4}=(13)(1234)(13)=(1432)$. Then, to build $\lambda _{5}$ a convenient choice for the coset beginnings (satisfying the no repeat condition) is $p_{1}=p_{2}=p_{3}=p_{4}=1$, leading to $\lambda _{5}=(15)$.
From examples above one can inductively go to higherk{\displaystyle k}in a similar way, choosing coset beginnings ofSk{\displaystyle S_{k}}inSk+1{\displaystyle S_{k+1}}, as follows: fork{\displaystyle k}even choosing all coset beginnings equal to 1 and fork{\displaystyle k}odd choosing coset beginnings equal to(1,2,…,k){\displaystyle (1,2,\dots ,k)}. With such choices the "last" permutation isλk=(1k){\displaystyle \lambda _{k}=(1k)}fork{\displaystyle k}odd andλk=(1k−)(12⋯k)(1k−){\displaystyle \lambda _{k}=(1k_{-})(12\cdots k)(1k_{-})}fork{\displaystyle k}even (k−=k−1{\displaystyle k_{-}=k-1}). Using these explicit formulae one can easily compute the permutation of certain index in the counting/generation steps with minimum computation. For this, writing the index in factorial base is useful. For example, the permutation for index699=5(5!)+4(4!)+1(2!)+1(1!){\displaystyle 699=5(5!)+4(4!)+1(2!)+1(1!)}is:σ=λ2(13)λ2(15)λ4(15)λ4(15)λ4(15)λ4(56)λ5(46)λ5(36)λ5(26)λ5(16)λ5={\displaystyle \sigma =\lambda _{2}(13)\lambda _{2}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(56)\lambda _{5}(46)\lambda _{5}(36)\lambda _{5}(26)\lambda _{5}(16)\lambda _{5}=}λ2(13)λ2((15)λ4)4(λ5)−1λ6=(23)(14325)−1(15)(15)(123456)(15)={\displaystyle \lambda _{2}(13)\lambda _{2}((15)\lambda _{4})^{4}(\lambda _{5})^{-1}\lambda _{6}=(23)(14325)^{-1}(15)(15)(123456)(15)=}(23)(15234)(123456)(15){\displaystyle (23)(15234)(123456)(15)}, yelding finally,σ=(1653)(24){\displaystyle \sigma =(1653)(24)}.
Because multiplying by a swap (transposition) takes little computing time, and every newly generated permutation requires only one such swap multiplication, this generation procedure is quite efficient. Moreover, since there is a simple formula for the last permutation in eachSk{\displaystyle S_{k}}, one can go directly to the permutation with a given index in fewer steps than expected, because the computation can be done in blocks of subgroups rather than swap by swap.
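To illustrate the role of the factorial number system in indexing permutations, here is a minimal Python sketch. It is not the coset-beginning scheme described above but the standard Lehmer-code correspondence, and the function names are illustrative only; it decomposes an index into factorial-base digits and decodes them into a permutation.

```python
def factorial_base(index, n):
    """Return the factorial-base digits (d_{n-1}, ..., d_1) of `index`,
    where d_k is the coefficient of k! and satisfies 0 <= d_k <= k."""
    digits = []
    for k in range(1, n):
        digits.append(index % (k + 1))
        index //= (k + 1)
    return digits[::-1]          # coefficient of (n-1)! first

def permutation_from_index(index, n):
    """Decode an index in [0, n!) into a permutation of 0..n-1
    using the Lehmer-code correspondence."""
    digits = factorial_base(index, n)
    remaining = list(range(n))
    perm = []
    for d in digits + [0]:       # the last element has no choice left
        perm.append(remaining.pop(d))
    return perm

# Example: 699 = 5*5! + 4*4! + 0*3! + 1*2! + 1*1!, an index within S_6.
print(factorial_base(699, 6))          # [5, 4, 0, 1, 1]
print(permutation_from_index(699, 6))
```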
Permutations are used in theinterleavercomponent oferror detection and correctionalgorithms such asturbo codes; for example, the3GPP Long Term Evolutionmobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212[65]).
Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based onpermutation polynomials. Permutations also serve as a basis for optimal hashing in Unique Permutation Hashing.[66]
|
https://en.wikipedia.org/wiki/Permutation
|
Inmathematics,Stirling numbersarise in a variety ofanalyticandcombinatorialproblems. They are named afterJames Stirling, who introduced them in a purely algebraic setting in his bookMethodus differentialis(1730).[1]They were rediscovered and given a combinatorial meaning by Masanobu Saka in his 1782Sanpō-Gakkai(The Sea of Learning on Mathematics).[2][3]
Two different sets of numbers bear this name: theStirling numbers of the first kindand theStirling numbers of the second kind. Additionally,Lah numbersare sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them.
A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions ofnelements intoknon-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear).
Several different notations for Stirling numbers are in use. Ordinary (signed)Stirling numbers of the first kindare commonly denoteds(n,k){\displaystyle s(n,k)}.
Unsigned Stirling numbers of the first kind, which count the number ofpermutationsofnelements withkdisjointcycles, are denoted[nk]=c(n,k)=|s(n,k)|{\displaystyle \left[{n \atop k}\right]=c(n,k)=|s(n,k)|}.
Stirling numbers of the second kind, which count the number of ways to partition a set ofnelements intoknonempty subsets, are denoted{nk}=S(n,k){\displaystyle \left\{{n \atop k}\right\}=S(n,k)}.[4]
Abramowitz and Stegunuse an uppercaseS{\displaystyle S}and ablackletterS{\displaystyle {\mathfrak {S}}}, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy tobinomial coefficients, was introduced in 1935 byJovan Karamataand promoted later byDonald Knuth, though the bracket notation conflicts with a common notation forGaussian coefficients.[5]The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page forStirling numbers and exponential generating functions.
Another infrequent notation iss1(n,k){\displaystyle s_{1}(n,k)}ands2(n,k){\displaystyle s_{2}(n,k)}.
Stirling numbers express coefficients in expansions offalling and rising factorials(also known as the Pochhammer symbol) as polynomials.
That is, thefalling factorial, defined as(x)n=x(x−1)⋯(x−n+1),{\displaystyle \ (x)_{n}=x(x-1)\ \cdots (x-n+1)\ ,}is a polynomial inxof degreenwhose expansion is(x)n=∑k=0ns(n,k)xk{\displaystyle (x)_{n}=\sum _{k=0}^{n}s(n,k)\,x^{k}}
with (signed) Stirling numbers of the first kind as coefficients.
Note that(x)0≡1,{\displaystyle \ (x)_{0}\equiv 1\ ,}by convention, because it is anempty product. The notationsxn_{\displaystyle \ x^{\underline {n}}\ }for the falling factorial andxn¯{\displaystyle \ x^{\overline {n}}\ }for the rising factorial are also often used.[6](Confusingly, the Pochhammer symbol that many use forfallingfactorials is used inspecial functionsforrisingfactorials.)
Similarly, therising factorial, defined asx(n)=x(x+1)⋯(x+n−1),{\displaystyle \ x^{(n)}\ =\ x(x+1)\ \cdots (x+n-1)\ ,}is a polynomial inxof degreenwhose expansion isx(n)=∑k=0n[nk]xk{\displaystyle x^{(n)}=\sum _{k=0}^{n}\left[{n \atop k}\right]x^{k}}
with unsigned Stirling numbers of the first kind as coefficients.
One of these expansions can be derived from the other by observing thatx(n)=(−1)n(−x)n.{\displaystyle \ x^{(n)}=(-1)^{n}(-x)_{n}~.}
Stirling numbers of the second kind express the reverse relations:xn=∑k=0n{nk}(x)k{\displaystyle x^{n}=\sum _{k=0}^{n}\left\{{n \atop k}\right\}(x)_{k}}
and
xn=∑k=0n(−1)n−k{nk}x(k).{\displaystyle x^{n}=\sum _{k=0}^{n}(-1)^{n-k}\left\{{n \atop k}\right\}x^{(k)}.}
Considering the set ofpolynomialsin the (indeterminate) variablexas a vector space,
each of the three sequencesx0,x1,x2,…,(x)0,(x)1,(x)2,…,x(0),x(1),x(2),…{\displaystyle x^{0},x^{1},x^{2},\dots ,\quad (x)_{0},(x)_{1},(x)_{2},\dots ,\quad x^{(0)},x^{(1)},x^{(2)},\dots }
is abasis.
That is, every polynomial inxcan be written as a suma0x(0)+a1x(1)+⋯+anx(n){\displaystyle a_{0}x^{(0)}+a_{1}x^{(1)}+\dots +a_{n}x^{(n)}}for some unique coefficientsai{\displaystyle a_{i}}(similarly for the other two bases).
The above relations then express thechange of basisbetween these three bases. The coefficients for the remaining changes, between the falling and rising factorial bases, are described by theLah numbersbelow.
Since coefficients in any basis are unique, one can define Stirling numbers this way, as the coefficients expressing polynomials of one basis in terms of another, that is, the unique numbers relatingxn{\displaystyle x^{n}}with falling and rising factorials as above.
Falling factorials define, up to scaling, the same polynomials asbinomial coefficients:(xk)=(x)k/k!{\textstyle {\binom {x}{k}}=(x)_{k}/k!}. The changes between the standard basisx0,x1,x2,…{\displaystyle \textstyle x^{0},x^{1},x^{2},\dots }and the basis(x0),(x1),(x2),…{\textstyle {\binom {x}{0}},{\binom {x}{1}},{\binom {x}{2}},\dots }are thus described by similar formulas:
Expressing a polynomial in the basis of falling factorials is useful for calculating sums of the polynomial evaluated at consecutive integers.
Indeed, the sum of falling factorials with fixedkcan be expressed as another falling factorial (fork≠−1{\displaystyle k\neq -1}):∑0≤i<n(i)k=(n)k+1k+1.{\displaystyle \sum _{0\leq i<n}(i)_{k}={\frac {(n)_{k+1}}{k+1}}.}
This can be proved byinduction.
For example, the sum of fourth powers of integers up ton(this time withnincluded), is:
Here the Stirling numbers can be computed from their definition as the number of partitions of 4 elements intoknon-empty unlabeled subsets.
In contrast, the sum∑i=0nik{\textstyle \sum _{i=0}^{n}i^{k}}in the standard basis is given byFaulhaber's formula, which in general is more complicated.
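A short Python check of the falling-factorial approach (a sketch; the helper names are illustrative): it expands the power in falling factorials using Stirling numbers of the second kind, sums each falling factorial in closed form, and compares against direct summation.

```python
def stirling2(n, k):
    """Stirling number of the second kind via the standard recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """Falling factorial (x)_k = x (x-1) ... (x-k+1)."""
    result = 1
    for j in range(k):
        result *= x - j
    return result

def sum_of_powers(n, p):
    """Sum_{i=0}^{n} i^p via i^p = sum_k S(p,k) (i)_k
    and sum_{i=0}^{n} (i)_k = (n+1)_{k+1} / (k+1)."""
    return sum(stirling2(p, k) * falling(n + 1, k + 1) // (k + 1)
               for k in range(1, p + 1))

n, p = 10, 4
assert sum_of_powers(n, p) == sum(i ** p for i in range(n + 1))
print(sum_of_powers(n, p))   # 25333
```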
The Stirling numbers of the first and second kinds can be considered inverses of one another:∑j=kns(n,j)S(j,k)=δnk{\displaystyle \sum _{j=k}^{n}s(n,j)\,S(j,k)=\delta _{nk}}
and
∑j=knS(n,j)s(j,k)=δnk,{\displaystyle \sum _{j=k}^{n}S(n,j)\,s(j,k)=\delta _{nk},}
whereδnk{\displaystyle \delta _{nk}}is theKronecker delta. These two relationships may be understood to be matrix inverse relationships. That is, letsbe thelower triangular matrixof Stirling numbers of the first kind, whose matrix elementssnk=s(n,k).{\displaystyle s_{nk}=s(n,k).\,}Theinverseof this matrix isS, thelower triangular matrixof Stirling numbers of the second kind, whose entries areSnk=S(n,k).{\displaystyle S_{nk}=S(n,k).}Symbolically, this is written
AlthoughsandSare infinite, so calculating a product entry involves an infinite sum, the matrix multiplications work because these matrices are lower triangular, so only a finite number of terms in the sum are nonzero.
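A minimal numerical check of this inverse relationship (a sketch, truncating both matrices to the same finite lower-triangular block, which suffices precisely because they are lower triangular):

```python
import numpy as np

def stirling1_signed(n, k):
    """Signed Stirling numbers of the first kind, s(n, k),
    via s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1_signed(n - 1, k - 1) - (n - 1) * stirling1_signed(n - 1, k)

def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

N = 8
s = np.array([[stirling1_signed(n, k) for k in range(N)] for n in range(N)])
S = np.array([[stirling2(n, k) for k in range(N)] for n in range(N)])

# The product of the two lower-triangular matrices is the identity.
assert (s @ S == np.eye(N, dtype=int)).all()
assert (S @ s == np.eye(N, dtype=int)).all()
```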
The Lah numbersL(n,k)=(n−1k−1)n!k!{\displaystyle L(n,k)={n-1 \choose k-1}{\frac {n!}{k!}}}are sometimes called Stirling numbers of the third kind.[7]By convention,L(0,0)=1{\displaystyle L(0,0)=1}andL(n,k)=0{\displaystyle L(n,k)=0}ifn<k{\displaystyle n<k}ork=0<n{\displaystyle k=0<n}.
These numbers are coefficients expressing falling factorials in terms of rising factorials and vice versa:(x)n=∑k=0n(−1)n−kL(n,k)x(k){\displaystyle (x)_{n}=\sum _{k=0}^{n}(-1)^{n-k}L(n,k)\,x^{(k)}}andx(n)=∑k=0nL(n,k)(x)k.{\displaystyle x^{(n)}=\sum _{k=0}^{n}L(n,k)\,(x)_{k}.}
As above, this means they express the change of basis between the bases(x)0,(x)1,(x)2,⋯{\displaystyle (x)_{0},(x)_{1},(x)_{2},\cdots }andx(0),x(1),x(2),⋯{\displaystyle x^{(0)},x^{(1)},x^{(2)},\cdots }, completing the diagram.
In particular, one formula is the inverse of the other, thus:
Similarly, composing the change of basis fromx(n){\displaystyle x^{(n)}}toxn{\displaystyle x^{n}}with the change of basis fromxn{\displaystyle x^{n}}to(x)n{\displaystyle (x)_{n}}gives the change of basis directly fromx(n){\displaystyle x^{(n)}}to(x)n{\displaystyle (x)_{n}}:
and similarly for other compositions. In terms of matrices, ifL{\displaystyle L}denotes the matrix with entriesLnk=L(n,k){\displaystyle L_{nk}=L(n,k)}andL−{\displaystyle L^{-}}denotes the matrix with entriesLnk−=(−1)n−kL(n,k){\displaystyle L_{nk}^{-}=(-1)^{n-k}L(n,k)}, then one is the inverse of the other:L−=L−1{\displaystyle L^{-}=L^{-1}}.
Composing the matrix of unsigned Stirling numbers of the first kind with the matrix of Stirling numbers of the second kind gives the Lah numbers:L=|s|⋅S{\displaystyle L=|s|\cdot S}.
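As a quick sanity check of this identity (a sketch with illustrative helper names), the following computes both sides entrywise for a small range:

```python
from math import comb, factorial

def lah(n, k):
    """Lah numbers L(n, k) = C(n-1, k-1) * n! / k!."""
    if n == k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

def stirling1_unsigned(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

def stirling2(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Check L = |s| . S entrywise.
N = 8
for n in range(N):
    for k in range(N):
        product = sum(stirling1_unsigned(n, j) * stirling2(j, k) for j in range(N))
        assert product == lah(n, k)
```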
Enumeratively,{nk},[nk],L(n,k){\textstyle \left\{{\!n\! \atop \!k\!}\right\},\left[{n \atop k}\right],L(n,k)}can be defined as the number of partitions ofnelements intoknon-empty unlabeled subsets, where each subset is endowed with no order, acyclic order, or a linear order, respectively. In particular, this implies the inequalities:{nk}≤[nk]≤L(n,k).{\displaystyle \left\{{n \atop k}\right\}\leq \left[{n \atop k}\right]\leq L(n,k).}
For any pair of sequences,{fn}{\displaystyle \{f_{n}\}}and{gn}{\displaystyle \{g_{n}\}}, related by a finite sum Stirling number formula given by
for all integersn≥0{\displaystyle n\geq 0}, we have a correspondinginversion formulaforfn{\displaystyle f_{n}}given by
The lower indices could be any integer between0{\textstyle 0}andn{\textstyle n}.
These inversion relations between the two sequences translate into functional equations between the sequenceexponential generating functionsgiven by theStirling (generating function) transformas
and
ForD=d/dx{\displaystyle D=d/dx}, thedifferential operatorsxnDn{\displaystyle x^{n}D^{n}}and(xD)n{\displaystyle (xD)^{n}}are related by the following formulas for all integersn≥0{\displaystyle n\geq 0}:[8]
Another pair of "inversion" relations involving theStirling numbersrelate theforward differencesand the ordinarynth{\displaystyle n^{th}}derivativesof a function,f(x){\displaystyle f(x)}, which is analytic for allx{\displaystyle x}by the formulas[9]
See the specific articles for details.
Abramowitz and Stegun give the following symmetric formulae that relate the Stirling numbers of the first and second kind.[10]
and
The Stirling numbers can be extended to negative integral values, but not all authors do so in the same way.[11][12][13]Regardless of the approach taken, it is worth noting that Stirling numbers of first and second kind are connected by the relations:
whennandkare nonnegative integers. So we have the following table for[−n−k]{\displaystyle \left[{-n \atop -k}\right]}:
Donald Knuth[13]defined the more general Stirling numbers by extending arecurrence relationto all integers. In this approach,[nk]{\textstyle \left[{n \atop k}\right]}and{nk}{\textstyle \left\{{\!n\! \atop \!k\!}\right\}}are zero ifnis negative andkis nonnegative, or ifnis nonnegative andkis negative, and so we have, foranyintegersnandk,
On the other hand, for positive integersnandk, David Branson[12]defined[−n−k],{\textstyle \left[{-n \atop -k}\right]\!,}{−n−k},{\textstyle \left\{{\!-n\! \atop \!-k\!}\right\}\!,}[−nk],{\textstyle \left[{-n \atop k}\right]\!,}and{−nk}{\textstyle \left\{{\!-n\! \atop \!k\!}\right\}}(but not[n−k]{\textstyle \left[{n \atop -k}\right]}or{n−k}{\textstyle \left\{{\!n\! \atop \!-k\!}\right\}}). In this approach, one has the following extension of therecurrence relationof the Stirling numbers of the first kind:
For example,[−5k]=1120(5−102k+103k−54k+15k).{\textstyle \left[{-5 \atop k}\right]={\frac {1}{120}}{\Bigl (}5-{\frac {10}{2^{k}}}+{\frac {10}{3^{k}}}-{\frac {5}{4^{k}}}+{\frac {1}{5^{k}}}{\Bigr )}.}This leads to the following table of values of[nk]{\textstyle \left[{n \atop k}\right]}for negative integraln.
In this case∑n=1∞[−n−k]=Bk{\textstyle \sum _{n=1}^{\infty }\left[{-n \atop -k}\right]=B_{k}}whereBk{\displaystyle B_{k}}is aBell number, and so one may define the negative Bell numbers by∑n=1∞[−nk]=:B−k{\textstyle \sum _{n=1}^{\infty }\left[{-n \atop k}\right]=:B_{-k}}.
For example, this produces∑n=1∞[−n1]=B−1=1e∑j=1∞1j⋅j!=1e∫01et−1tdt=0.4848291…{\textstyle \sum _{n=1}^{\infty }\left[{-n \atop 1}\right]=B_{-1}={\frac {1}{e}}\sum _{j=1}^{\infty }{\frac {1}{j\cdot j!}}={\frac {1}{e}}\int _{0}^{1}{\frac {e^{t}-1}{t}}dt=0.4848291\dots }, generallyB−k=1e∑j=1∞1jkj!{\textstyle B_{-k}={\frac {1}{e}}\sum _{j=1}^{\infty }{\frac {1}{j^{k}j!}}}.
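A quick numerical confirmation of this value (a sketch using a truncated series and a crude midpoint rule for the integral):

```python
from math import e, exp, factorial

# Series form: B_{-1} = (1/e) * sum_{j>=1} 1 / (j * j!)
series = sum(1.0 / (j * factorial(j)) for j in range(1, 30)) / e

# Integral form: B_{-1} = (1/e) * integral_0^1 (e^t - 1)/t dt, by a midpoint rule
N = 100_000
integral = sum((exp((i + 0.5) / N) - 1) / ((i + 0.5) / N) for i in range(N)) / N / e

print(series, integral)   # both approximately 0.4848291...
```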
|
https://en.wikipedia.org/wiki/Stirling_numbers
|
Inmathematics,Stirling's approximation(orStirling's formula) is anasymptoticapproximation forfactorials. It is a good approximation, leading to accurate results even for small values ofn{\displaystyle n}. It is named afterJames Stirling, though a related but less precise result was first stated byAbraham de Moivre.[1][2][3]
One way of stating the approximation involves thelogarithmof the factorial:ln(n!)=nlnn−n+O(lnn),{\displaystyle \ln(n!)=n\ln n-n+O(\ln n),}where thebig O notationmeans that, for all sufficiently large values ofn{\displaystyle n}, the difference betweenln(n!){\displaystyle \ln(n!)}andnlnn−n{\displaystyle n\ln n-n}will be at most proportional to the logarithm ofn{\displaystyle n}. In computer science applications such as theworst-case lower bound for comparison sorting, it is convenient to instead use thebinary logarithm, giving the equivalent formlog2(n!)=nlog2n−nlog2e+O(log2n).{\displaystyle \log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).}The error term in either base can be expressed more precisely as12log2(2πn)+O(1n){\displaystyle {\tfrac {1}{2}}\log _{2}(2\pi n)+O({\tfrac {1}{n}})}, corresponding to an approximate formula for the factorial itself,n!∼2πn(ne)n.{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}Here the sign∼{\displaystyle \sim }means that the two quantities are asymptotic, that is, their ratio tends to 1 asn{\displaystyle n}tends to infinity.
Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sumln(n!)=∑j=1nlnj{\displaystyle \ln(n!)=\sum _{j=1}^{n}\ln j}with anintegral:∑j=1nlnj≈∫1nlnxdx=nlnn−n+1.{\displaystyle \sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.}
The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximatingn!{\displaystyle n!}, one considers itsnatural logarithm, as this is aslowly varying function:ln(n!)=ln1+ln2+⋯+lnn.{\displaystyle \ln(n!)=\ln 1+\ln 2+\cdots +\ln n.}
The right-hand side of this equation minus12(ln1+lnn)=12lnn{\displaystyle {\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n}is the approximation by thetrapezoid ruleof the integralln(n!)−12lnn≈∫1nlnxdx=nlnn−n+1,{\displaystyle \ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,}
and the error in this approximation is given by theEuler–Maclaurin formula:ln(n!)−12lnn=12ln1+ln2+ln3+⋯+ln(n−1)+12lnn=nlnn−n+1+∑k=2m(−1)kBkk(k−1)(1nk−1−1)+Rm,n,{\displaystyle {\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}}
whereBk{\displaystyle B_{k}}is aBernoulli number, andRm,nis the remainder term in the Euler–Maclaurin formula. Take limits to find thatlimn→∞(ln(n!)−nlnn+n−12lnn)=1−∑k=2m(−1)kBkk(k−1)+limn→∞Rm,n.{\displaystyle \lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.}
Denote this limit asy{\displaystyle y}. Because the remainderRm,nin the Euler–Maclaurin formula satisfiesRm,n=limn→∞Rm,n+O(1nm),{\displaystyle R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),}
wherebig-O notationis used, combining the equations above yields the approximation formula in its logarithmic form:ln(n!)=nln(ne)+12lnn+y+∑k=2m(−1)kBkk(k−1)nk−1+O(1nm).{\displaystyle \ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).}
Taking the exponential of both sides and choosing any positive integerm{\displaystyle m}, one obtains a formula involving an unknown quantityey{\displaystyle e^{y}}. Form= 1, the formula isn!=eyn(ne)n(1+O(1n)).{\displaystyle n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}
The quantityey{\displaystyle e^{y}}can be found by taking the limit on both sides asn{\displaystyle n}tends to infinity and usingWallis' product, which shows thatey=2π{\displaystyle e^{y}={\sqrt {2\pi }}}. Therefore, one obtains Stirling's formula:n!=2πn(ne)n(1+O(1n)).{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}
An alternative formula forn!{\displaystyle n!}using thegamma functionisn!=∫0∞xne−xdx.{\displaystyle n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x.}(as can be seen by repeated integration by parts). Rewriting and changing variablesx=ny, one obtainsn!=∫0∞enlnx−xdx=enlnnn∫0∞en(lny−y)dy.{\displaystyle n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.}ApplyingLaplace's methodone has∫0∞en(lny−y)dy∼2πne−n,{\displaystyle \int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},}which recovers Stirling's formula:n!∼enlnnn2πne−n=2πn(ne)n.{\displaystyle n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}
In fact, further corrections can also be obtained using Laplace's method. From previous result, we know thatΓ(x)∼xxe−x{\displaystyle \Gamma (x)\sim x^{x}e^{-x}}, so we "peel off" this dominant term, then perform two changes of variables, to obtain:x−xexΓ(x)=∫Rex(1+t−et)dt{\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt}To verify this:∫Rex(1+t−et)dt=t↦lntex∫0∞tx−1e−xtdt=t↦t/xx−xex∫0∞e−ttx−1dt=x−xexΓ(x){\displaystyle \int _{\mathbb {R} }e^{x(1+t-e^{t})}dt{\overset {t\mapsto \ln t}{=}}e^{x}\int _{0}^{\infty }t^{x-1}e^{-xt}dt{\overset {t\mapsto t/x}{=}}x^{-x}e^{x}\int _{0}^{\infty }e^{-t}t^{x-1}dt=x^{-x}e^{x}\Gamma (x)}.
Now the functiont↦1+t−et{\displaystyle t\mapsto 1+t-e^{t}}is unimodal, with maximum value zero. Locally around zero, it looks like−t2/2{\displaystyle -t^{2}/2}, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by1+t−et=−τ2/2{\displaystyle 1+t-e^{t}=-\tau ^{2}/2}. This equation cannot be solved in closed form, but it can be solved by series expansion, which gives ust=τ−τ2/6+τ3/36+a4τ4+O(τ5){\displaystyle t=\tau -\tau ^{2}/6+\tau ^{3}/36+a_{4}\tau ^{4}+O(\tau ^{5})}. Now plug back into the equation to obtainx−xexΓ(x)=∫Re−xτ2/2(1−τ/3+τ2/12+4a4τ3+O(τ4))dτ=2π(x−1/2+x−3/2/12)+O(x−5/2){\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2})}Notice that we do not need to actually finda4{\displaystyle a_{4}}, since it is cancelled out by the integral. Higher orders can be achieved by computing more terms int=τ+⋯{\displaystyle t=\tau +\cdots }, which can be obtained programmatically.[note 1]
Thus we get Stirling's formula to two orders:n!=2πn(ne)n(1+112n+O(1n2)).{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).}
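A brief numerical illustration (a sketch) comparing the factorial against Stirling's formula with and without the 1/(12n) correction:

```python
from math import factorial, sqrt, pi, e

def stirling(n, corrected=False):
    """Stirling's approximation to n!, optionally with the 1/(12n) correction term."""
    approx = sqrt(2 * pi * n) * (n / e) ** n
    if corrected:
        approx *= 1 + 1 / (12 * n)
    return approx

for n in (5, 10, 20):
    exact = factorial(n)
    print(n,
          exact / stirling(n),            # ratio -> 1, error on the order of 1/(12n)
          exact / stirling(n, True))      # ratio -> 1 faster, error on the order of 1/n^2
```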
A complex-analysis version of this method[4]is to consider1n!{\displaystyle {\frac {1}{n!}}}as aTaylor coefficientof the exponential functionez=∑n=0∞znn!{\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}}, computed byCauchy's integral formulaas1n!=12πi∮|z|=rezzn+1dz.{\displaystyle {\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.}
This line integral can then be approximated using thesaddle-point methodwith an appropriate choice of contour radiusr=rn{\displaystyle r=r_{n}}. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term.
An alternative version uses the fact that thePoisson distributionconverges to anormal distributionby theCentral Limit Theorem.[5]
Since the Poisson distribution with parameterλ{\displaystyle \lambda }converges to a normal distribution with meanλ{\displaystyle \lambda }and varianceλ{\displaystyle \lambda }, theirdensity functionswill be approximately the same:
exp(−μ)μxx!≈12πμexp(−12(x−μμ)){\displaystyle {\frac {\exp(-\mu )\mu ^{x}}{x!}}\approx {\frac {1}{\sqrt {2\pi \mu }}}\exp(-{\frac {1}{2}}({\frac {x-\mu }{\sqrt {\mu }}}))}
Evaluating this expression at the mean, where the approximation is particularly accurate, simplifies it to:
exp(−μ)μμμ!≈12πμ{\displaystyle {\frac {\exp(-\mu )\mu ^{\mu }}{\mu !}}\approx {\frac {1}{\sqrt {2\pi \mu }}}}
Taking logs then results in:
−μ+μlnμ−lnμ!≈−12ln2πμ{\displaystyle -\mu +\mu \ln \mu -\ln \mu !\approx -{\frac {1}{2}}\ln 2\pi \mu }
which can easily be rearranged to give:
lnμ!≈μlnμ−μ+12ln2πμ{\displaystyle \ln \mu !\approx \mu \ln \mu -\mu +{\frac {1}{2}}\ln 2\pi \mu }
Evaluating atμ=n{\displaystyle \mu =n}gives the usual, more precise form of Stirling's approximation.
Stirling's formula is in fact the first approximation to the following series (now called theStirling series):[6]n!∼2πn(ne)n(1+112n+1288n2−13951840n3−5712488320n4+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}
An explicit formula for the coefficients in this series was given by G. Nemes.[7]Further terms are listed in theOn-Line Encyclopedia of Integer SequencesasA001163andA001164. The first graph in this section shows therelative errorvs.n{\displaystyle n}, for 1 through all 5 terms listed above. (Bender and Orszag[8]p. 218) gives the asymptotic formula for the coefficients:A2j+1∼(−1)j2(2j)!/(2π)2(j+1){\displaystyle A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}}which shows that it grows superexponentially, and that by theratio testtheradius of convergenceis zero.
Asn→ ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of anasymptotic expansion. It is not aconvergent series; for anyparticularvalue ofn{\displaystyle n}there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, letS(n,t)be the Stirling series tot{\displaystyle t}terms evaluated atn{\displaystyle n}. The graphs show|ln(S(n,t)n!)|,{\displaystyle \left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,}which, when small, is essentially the relative error.
Writing Stirling's series in the formln(n!)∼nlnn−n+12ln(2πn)+112n−1360n3+11260n5−11680n7+⋯,{\displaystyle \ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,}it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term.[citation needed]
Other bounds, due to Robbins,[9]valid for all positive integersn{\displaystyle n}are2πn(ne)ne112n+1<n!<2πn(ne)ne112n.{\displaystyle {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.}This upper bound corresponds to stopping the above series forln(n!){\displaystyle \ln(n!)}after the1n{\displaystyle {\frac {1}{n}}}term. The lower bound is weaker than that obtained by stopping the series after the1n3{\displaystyle {\frac {1}{n^{3}}}}term. A looser version of this bound is thatn!ennn+12∈(2π,e]{\displaystyle {\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]}for alln≥1{\displaystyle n\geq 1}.
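These bounds are easy to verify numerically; a minimal sketch:

```python
from math import factorial, sqrt, pi, exp, e

def robbins_bounds(n):
    """Robbins' lower and upper bounds for n!."""
    base = sqrt(2 * pi * n) * (n / e) ** n
    return base * exp(1 / (12 * n + 1)), base * exp(1 / (12 * n))

for n in (1, 2, 5, 10, 50):
    lower, upper = robbins_bounds(n)
    assert lower < factorial(n) < upper
```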
For all positive integers,n!=Γ(n+1),{\displaystyle n!=\Gamma (n+1),}whereΓdenotes thegamma function.
However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. IfRe(z) > 0, thenlnΓ(z)=zlnz−z+12ln2πz+∫0∞2arctan(tz)e2πt−1dt.{\displaystyle \ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.}
Repeated integration by parts giveslnΓ(z)∼zlnz−z+12ln2πz+∑n=1N−1B2n2n(2n−1)z2n−1=zlnz−z+12ln2πz+112z−1360z3+11260z5+…,{\displaystyle {\begin{aligned}\ln \Gamma (z)\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}}\\=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+\dots ,\end{aligned}}}
whereBn{\displaystyle B_{n}}is then{\displaystyle n}thBernoulli number(note that the limit of the sum asN→∞{\displaystyle N\to \infty }is not convergent, so this formula is just anasymptotic expansion). The formula is valid forz{\displaystyle z}large enough in absolute value, when|arg(z)| < π −ε, whereεis positive, with an error term ofO(z−2N+ 1). The corresponding approximation may now be written:Γ(z)=2πz(ze)z(1+O(1z)).{\displaystyle \Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right).}
where the expansion is identical to that of Stirling's series above forn!{\displaystyle n!}, except thatn{\displaystyle n}is replaced withz− 1.[10]
A further application of this asymptotic expansion is for complex argumentzwith constantRe(z). See for example the Stirling formula applied inIm(z) =tof theRiemann–Siegel theta functionon the straight line1/4+it.
Thomas Bayesshowed, in a letter toJohn Cantonpublished by theRoyal Societyin 1763, that Stirling's formula did not give aconvergent series.[11]Obtaining a convergent version of Stirling's formula entails evaluatingBinet's formula:∫0∞2arctan(tx)e2πt−1dt=lnΓ(x)−xlnx+x−12ln2πx.{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.}
One way to do this is by means of a convergent series of invertedrising factorials. Ifzn¯=z(z+1)⋯(z+n−1),{\displaystyle z^{\bar {n}}=z(z+1)\cdots (z+n-1),}then∫0∞2arctan(tx)e2πt−1dt=∑n=1∞cn(x+1)n¯,{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},}wherecn=1n∫01xn¯(x−12)dx=12n∑k=1nk|s(n,k)|(k+1)(k+2),{\displaystyle c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},}wheres(n,k)denotes theStirling numbers of the first kind. From this one obtains a version of Stirling's serieslnΓ(x)=xlnx−x+12ln2πx+112(x+1)+112(x+1)(x+2)++59360(x+1)(x+2)(x+3)+2960(x+1)(x+2)(x+3)(x+4)+⋯,{\displaystyle {\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}+\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}}which converges whenRe(x) > 0.
Stirling's formula may also be given in convergent form as[12]Γ(x)=2πxx−12e−x+μ(x){\displaystyle \Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}}whereμ(x)=∑n=0∞((x+n+12)ln(1+1x+n)−1).{\displaystyle \mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).}
The approximationΓ(z)≈2πz(zezsinh1z+1810z6)z{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}}and its equivalent form2lnΓ(z)≈ln(2π)−lnz+z(2lnz+ln(zsinh1z+1810z6)−2){\displaystyle 2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)}can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultantpower seriesand theTaylor seriesexpansion of thehyperbolic sinefunction. This approximation is good to more than 8 decimal digits forzwith a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.[13]
Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:[14]Γ(z)≈2πz(1e(z+112z−110z))z,{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},}or equivalently,lnΓ(z)≈12(ln(2π)−lnz)+z(ln(z+112z−110z)−1).{\displaystyle \ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).}
An alternative approximation for the gamma function stated bySrinivasa RamanujaninRamanujan's lost notebook[15]isΓ(1+x)≈π(xe)x(8x3+4x2+x+130)16{\displaystyle \Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}}forx≥ 0. The equivalent approximation forlnn!has an asymptotic error of1/1400n3and is given bylnn!≈nlnn−n+16ln(8n3+4n2+n+130)+12lnπ.{\displaystyle \ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .}
The approximation may be made precise by giving paired upper and lower bounds; one such inequality is[16][17][18][19]π(xe)x(8x3+4x2+x+1100)1/6<Γ(1+x)<π(xe)x(8x3+4x2+x+130)1/6.{\displaystyle {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.}
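For comparison, a small sketch evaluating two of these approximations (Ramanujan's and Nemes') against the exact gamma function:

```python
from math import gamma, sqrt, pi, e

def ramanujan(x):
    """Ramanujan's approximation to Gamma(1 + x) = x!."""
    return sqrt(pi) * (x / e) ** x * (8 * x**3 + 4 * x**2 + x + 1 / 30) ** (1 / 6)

def nemes(z):
    """Nemes' approximation to Gamma(z)."""
    return sqrt(2 * pi / z) * ((z + 1 / (12 * z - 1 / (10 * z))) / e) ** z

for x in (2, 5, 10, 20):
    print(x, gamma(1 + x), ramanujan(x), nemes(1 + x))
```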
The formula was first discovered byAbraham de Moivre[2]in the formn!∼[constant]⋅nn+12e−n.{\displaystyle n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.}
De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely2π{\displaystyle {\sqrt {2\pi }}}.[3]
|
https://en.wikipedia.org/wiki/Stirling_approximation
|
Incryptography, theInternational Data Encryption Algorithm(IDEA), originally calledImproved Proposed Encryption Standard(IPES), is asymmetric-keyblock cipherdesigned byJames MasseyofETH ZurichandXuejia Laiand was first described in 1991. The algorithm was intended as a replacement for theData Encryption Standard(DES). IDEA is a minor revision of an earliercipher, the Proposed Encryption Standard (PES).
The cipher was designed under a research contract with the Hasler Foundation, which became part of Ascom-Tech AG. The cipher was patented in a number of countries but was freely available for non-commercial use. The name "IDEA" is also atrademark. The lastpatentsexpired in 2012, and IDEA is now patent-free and thus completely free for all uses.[2]
IDEA was used inPretty Good Privacy(PGP) v2.0 and was incorporated after the original cipher used in v1.0,BassOmatic, was found to be insecure.[3]IDEA is an optional algorithm in theOpenPGPstandard.
IDEA operates on 64-bitblocksusing a 128-bitkeyand consists of a series of 8 identical transformations (around, see the illustration) and an output transformation (thehalf-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from differentgroups—modularaddition and multiplication, and bitwiseexclusive OR (XOR)— which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are: bitwise XOR; addition modulo 216; and multiplication modulo 216+ 1, where the all-zero word is interpreted as 216.
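A minimal sketch of these three 16-bit operations in Python (illustrative only, not a full IDEA implementation):

```python
MASK = 0xFFFF          # 16-bit words

def xor16(a, b):
    """Bitwise XOR of two 16-bit words."""
    return (a ^ b) & MASK

def add16(a, b):
    """Addition modulo 2^16."""
    return (a + b) & MASK

def mul16(a, b):
    """Multiplication modulo 2^16 + 1, with the word 0 interpreted as 2^16."""
    a = a or 0x10000
    b = b or 0x10000
    product = (a * b) % 0x10001
    return product & MASK   # a result of 2^16 maps back to the word 0

# The zero word behaves as 2^16 in the multiplicative group mod 65537:
assert mul16(0, 0) == 1     # 2^16 * 2^16 = 2^32 ≡ 1 (mod 65537)
```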
After the 8 rounds comes a final "half-round", the output transformation (the swap of the middle two values cancels out the swap at the end of the last round, so that there is no net swap).
The overall structure of IDEA follows theLai–Massey scheme. XOR is used for both subtraction and addition. IDEA uses a key-dependent half-round function. To work with 16-bit words (meaning 4 inputs instead of 2 for the 64-bit block size), IDEA uses the Lai–Massey scheme twice in parallel, with the two parallel round functions being interwoven with each other. To ensure sufficient diffusion, two of the sub-blocks are swapped after each round.
Each round uses 6 16-bit sub-keys, while the half-round uses 4, a total of 52 for 8.5 rounds. The first 8 sub-keys are extracted directly from the key, with K1 from the first round being the lower 16 bits; further groups of 8 keys are created by rotating the main key left 25 bits between each group of 8. This means that it is rotated less than once per round, on average, for a total of 6 rotations.
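A sketch of this key schedule in Python. The word ordering used here (subkeys taken from the most significant end of the key after each rotation) is an assumption about bit-numbering conventions; check against a published test vector before relying on it.

```python
def idea_subkeys(key: int):
    """Generate the 52 16-bit subkeys from a 128-bit IDEA key.

    Sketch only: eight 16-bit words are extracted per group (most significant
    first, an assumed convention) and the 128-bit key is rotated left by
    25 bits between groups, as described above.
    """
    subkeys = []
    mask128 = (1 << 128) - 1
    k = key & mask128
    while len(subkeys) < 52:
        for i in range(8):
            if len(subkeys) == 52:
                break
            subkeys.append((k >> (112 - 16 * i)) & 0xFFFF)
        # Rotate the 128-bit key left by 25 bits.
        k = ((k << 25) | (k >> (128 - 25))) & mask128
    return subkeys

keys = idea_subkeys(0x000102030405060708090A0B0C0D0E0F)
print(len(keys), [hex(x) for x in keys[:8]])
```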
Decryption works like encryption, but the order of the round keys is inverted, and the subkeys for the odd rounds are replaced by their inverses. For instance, the values of subkeys K1–K4 are replaced by the inverses of K49–K52 under the respective group operation, while K5 and K6 of each group are replaced by K47 and K48 for decryption.
The designers analysed IDEA to measure its strength againstdifferential cryptanalysisand concluded that it is immune under certain assumptions. No successfullinearor algebraic weaknesses have been reported. As of 2007[update], the best attack applied to all keys could break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds).[4]Note that a "break" is any attack that requires less than 2128operations; the 6-round attack requires 264known plaintexts and 2126.8operations.
Bruce Schneierthought highly of IDEA in 1996, writing: "In my opinion, it is the best and most secure block algorithm available to the public at this time." (Applied Cryptography, 2nd ed.) However, by 1999 he was no longer recommending IDEA due to the availability of faster algorithms, some progress in itscryptanalysis, and the issue of patents.[5]
In 2011 full 8.5-round IDEA was broken using a meet-in-the-middle attack.[6]Independently in 2012, full 8.5-round IDEA was broken using a narrow-bicliques attack, with a reduction of cryptographic strength of about 2 bits, similar to the effect of the previous bicliques attack onAES; however, this attack does not threaten the security of IDEA in practice.[7]
The very simple key schedule makes IDEA subject to a class ofweak keys; some keys containing a large number of 0 bits produceweak encryption.[8]These are of little concern in practice, being sufficiently rare that they are unnecessary to avoid explicitly when generating keys randomly. A simple fix was proposed: XORing each subkey with a 16-bit constant, such as 0x0DAE.[8][9]
Larger classes of weak keys were found in 2002.[10]
These are still of negligible probability for a randomly chosen key, and some of the problems are fixed by the constant XOR proposed earlier, but the paper does not establish whether all of them are. A more comprehensive redesign of the IDEA key schedule may be desirable.[10]
A patent application for IDEA was first filed inSwitzerland(CH A 1690/90) on May 18, 1990, then an international patent application was filed under thePatent Cooperation Treatyon May 16, 1991. Patents were eventually granted inAustria,France,Germany,Italy, theNetherlands,Spain,Sweden,Switzerland, theUnited Kingdom, (European Patent Register entry forEuropean patent no. 0482154, filed May 16, 1991, issued June 22, 1994 and expired May 16, 2011), theUnited States(U.S. patent 5,214,703, issued May 25, 1993 and expired January 7, 2012) andJapan(JP 3225440, expired May 16, 2011).[11]
MediaCrypt AG is now offering a successor to IDEA and focuses on its new cipher (official release in May 2005)IDEA NXT, which was previously called FOX.
|
https://en.wikipedia.org/wiki/IDEA_(cipher)
|
TheGlobal System for Mobile Communications(GSM) is a family of standards to describe the protocols for second-generation (2G) digitalcellular networks,[2]as used by mobile devices such asmobile phonesandmobile broadband modems. GSM is also atrade markowned by theGSM Association.[3]"GSM" may also refer to the voice codec initially used in GSM.[4]
2G networks developed as a replacement for first generation (1G) analog cellular networks. The original GSM standard, which was developed by theEuropean Telecommunications Standards Institute(ETSI), originally described a digital, circuit-switched network optimized forfull duplexvoicetelephony, employingtime division multiple access(TDMA) between stations. This expanded over time to includedata communications, first bycircuit-switched transport, then bypacketdata transport via its upgraded standards,GPRSand thenEDGE. GSM exists in various versions based on thefrequency bands used.
GSM was first implemented inFinlandin December 1991.[5]It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard,CDMA.[6]Its market share exceeded 90% by the mid-2010s, and it operated in over 219 countries and territories.[2]The specifications and maintenance of GSM passed over to the3GPPbody in 2000,[7]which at the time developed third-generation (3G)UMTSstandards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation5Gstandards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwidestarted to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the many mobile phone technologies that evolved from it, or even for mobile phones themselves.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when theEuropean Conference of Postal and Telecommunications Administrations(CEPT) set up theGroupe Spécial Mobile(GSM) committee and later provided a permanent technical-support group based inParis. Five years later, in 1987, 15 representatives from 13 European countries signed amemorandum of understandinginCopenhagento develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8]The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12]
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers fromthe four big EU countries[clarification needed]cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSMMoUwas tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK),Philippe Dupuis(France), and Renzo Failli (Italy).[13]In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to theEuropean Telecommunications Standards Institute(ETSI).[10][11][12]The IEEE/RSE awarded toThomas HaugandPhilippe Dupuisthe 2018James Clerk Maxwell medalfor their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14]The GSM (2G) has evolved into 3G, 4G and 5G.
In parallel,FranceandGermanysigned a joint development agreement in 1984 and were joined byItalyand theUKin 1986. In 1986, theEuropean Commissionproposed reserving the 900 MHz spectrum band for GSM. It was long believed that the formerFinnishprime ministerHarri Holkerimade the world's first GSM call on 1 July 1991, callingKaarina Suonio(deputy mayor of the city ofTampere) using a network built byNokia and SiemensandoperatedbyRadiolinja.[15]In 2021 a former Nokia engineer,Pekka Lonka, revealed toHelsingin Sanomatthat he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo."[16]The following year saw the sending of the firstshort messaging service(SMS or "text message") message, andVodafone UKand Telecom Finland signed the first internationalroamingagreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, called DCS 1800, became operational in the UK by 1993. Also that year,Telstrabecame the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSMmobile phonebecame available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, theGSM Associationformed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.[11]
In 2000 the first commercialGeneral Packet Radio Service(GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the firstMultimedia Messaging Service(MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational.Enhanced Data rates for GSM Evolution(EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11]
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the firstHSDPA-capable network also became operational. The firstHSUPAnetwork launched in 2007. (High Speed Packet Access(HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11]
TheGSM Associationestimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17]
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3GUniversal Mobile Telecommunications System(UMTS),code-division multiple access(CDMA) technology, nor the 4G LTEorthogonal frequency-division multiple access(OFDMA) technology standards issued by the 3GPP.[18]
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]
TelstrainAustraliashut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network.[20]The second mobile provider to shut down its GSM network (on 1 January 2017) wasAT&T Mobilityfrom theUnited States.[21]OptusinAustraliacompleted the shut down of its 2G GSM network on 1 August 2017, part of the Optus GSM network coveringWestern Australiaand theNorthern Territoryhad earlier in the year been shut down in April 2017.[22]Singaporeshut down 2G services entirely in April 2017.[23]
The network is structured into several discrete sections: the base station subsystem (the base stations and their controllers), the network and switching subsystem (the part most similar to a fixed network, sometimes called the "core network"), the GPRS core network (the optional part that allows packet-based Internet connections), and the operations support system for maintenance of the network.
GSM utilizes acellular network, meaning thatcell phonesconnect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro, micro, pico, femto, and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where thebase-stationantennais installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential orsmall-businessenvironments and connect to atelecommunications service provider's network via abroadband-internetconnection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height,antenna gain, andpropagationconditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell,[24]where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and thetiming advance.
GSM supports indoor coverage – achievable by using an indoor picocell base station, or anindoor repeaterwith distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of differentcarrier frequencyranges (separated intoGSM frequency rangesfor 2G andUMTS frequency bandsfor 3G), with most2GGSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most3Gnetworks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, seeGSM frequency bands.
Regardless of the frequency selected by an operator, it is divided intotimeslotsfor individual phones. This allows eight full-rate or sixteen half-rate speech channels perradio frequency. These eight radio timeslots (orburstperiods) are grouped into aTDMAframe. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all8 channelsis270.833 kbit/s,and the frame duration is4.615 ms.[25]TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26]
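As a quick arithmetic check of these figures, here is a sketch assuming the standard GSM burst length of 156.25 bit periods per timeslot (a value not stated above):

```python
# GSM air-interface timing, derived from the gross channel rate.
channel_rate = 270833.33                 # bit/s, shared by all 8 timeslots
bits_per_burst = 156.25                  # bit periods per timeslot (assumed; standard value)

bit_period = 1 / channel_rate            # ~3.69 microseconds
timeslot = bits_per_burst * bit_period   # ~576.9 microseconds
frame = 8 * timeslot                     # ~4.615 milliseconds

print(f"bit period  {bit_period * 1e6:.2f} us")
print(f"timeslot    {timeslot * 1e6:.1f} us")
print(f"TDMA frame  {frame * 1e3:.3f} ms")
```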
The transmission power in the handset is limited to a maximum of 2 watts inGSM 850/900and1 wattinGSM 1800/1900.
GSM has used a variety of voicecodecsto squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, calledHalf Rate(6.5 kbit/s) andFull Rate(13 kbit/s). These used a system based onlinear predictive coding(LPC). In addition to being efficient withbitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27]with theenhanced full rate(EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development ofUMTS, EFR was refactored into a variable-rate codec calledAMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channel.
One of the key features of GSM is theSubscriber Identity Module, commonly known as aSIM card. The SIM is a detachablesmart card[3]containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets - simply by changing the SIM.
Sometimesmobile network operatorsrestrict handsets that they sell for exclusive use in their own network. This is calledSIM lockingand is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g.BrazilandGermany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]
GSM was intended to be a secure wireless system. It considered user authentication using apre-shared keyandchallenge–response, together with over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.[29]
Research findings indicate that GSM is susceptible to hacking byscript kiddies, a term referring to inexperienced individuals using readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Although the term "script kiddies" implies a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality ofcellular networks. Given that GSM remains the main cellular technology in numerous countries, this susceptibility to malicious attacks needs to be addressed.[30]
The development ofUMTSintroduced an optionalUniversal Subscriber Identity Module(USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and nonon-repudiation.
GSM uses several cryptographic algorithms for security. TheA5/1,A5/2, andA5/3stream ciphersare used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with aciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to useFPGAsthat allow A5/1 to be broken with arainbow tableattack.[31]The system supports multiple algorithms so operators may replace that cipher with a stronger one.
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and theircryptanalysishas been revealed in the literature. As an example,Karsten Nohldeveloped a number ofrainbow tables(static values which reduce the time needed to carry out an attack) and has found new sources forknown plaintext attacks.[32]He said that it is possible to build "a full GSM interceptor... from open-source components" but that they had not done so because of legal concerns.[33]Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen tovoicemail, make calls, or send text messages using a seven-year-oldMotorolacellphone and decryption software available for free online.[34]
GSM usesGeneral Packet Radio Service(GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35]
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software forsniffingGPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g.,Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used withUSIMto prevent connections to fake base stations anddowngrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that although using a 64-bit key, the GEA-1 algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have happened if it was not intentional. This may have been done in order to satisfy European controls on export of cryptographic programs.[36][37][38]
The GSM systems and services are described in a set of standards governed byETSI, where a full list is maintained.[39]
Severalopen-source softwareprojects exist that provide certain GSM features,[40]such asOpenBTS, which implements abase transceiver station, and theOsmocomstack, which provides various other parts.[41]
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time which means they have patent protection for a number of years.[citation needed]
The original GSM implementations from 1991 may now be entirely free of patent encumbrances, however patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patentterm adjustment" can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whetherOpenBTSwill be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. As of 2011[update], there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
|
https://en.wikipedia.org/wiki/Global_System_for_Mobile_Communications
|
TheGlobal System for Mobile Communications(GSM) is a family of standards to describe the protocols for second-generation (2G) digitalcellular networks,[2]as used by mobile devices such asmobile phonesandmobile broadband modems. GSM is also atrade markowned by theGSM Association.[3]"GSM" may also refer to the voice codec initially used in GSM.[4]
2G networks developed as a replacement for first generation (1G) analog cellular networks. The original GSM standard, which was developed by theEuropean Telecommunications Standards Institute(ETSI), originally described a digital, circuit-switched network optimized forfull duplexvoicetelephony, employingtime division multiple access(TDMA) between stations. This expanded over time to includedata communications, first bycircuit-switched transport, then bypacketdata transport via its upgraded standards,GPRSand thenEDGE. GSM exists in various versions based on thefrequency bands used.
GSM was first implemented inFinlandin December 1991.[5]It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard,CDMA.[6]Its share reached over 90% market share by the mid-2010s, and operating in over 219 countries and territories.[2]The specifications and maintenance of GSM passed over to the3GPPbody in 2000,[7]which at the time developed third-generation (3G)UMTSstandards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation5Gstandards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwidestarted to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the plethora ofGmobile phone technologies evolved from it or mobile phones itself.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when theEuropean Conference of Postal and Telecommunications Administrations(CEPT) set up theGroupe Spécial Mobile(GSM) committee and later provided a permanent technical-support group based inParis. Five years later, in 1987, 15 representatives from 13 European countries signed amemorandum of understandinginCopenhagento develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8]The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12]
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers fromthe four big EU countries[clarification needed]cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSMMoUwas tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK),Philippe Dupuis(France), and Renzo Failli (Italy).[13]In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to theEuropean Telecommunications Standards Institute(ETSI).[10][11][12]The IEEE/RSE awarded toThomas HaugandPhilippe Dupuisthe 2018James Clerk Maxwell medalfor their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14]The GSM (2G) has evolved into 3G, 4G and 5G.
In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. It was long believed that the former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja.[15] In 2021, a former Nokia engineer, Pekka Lonka, revealed to Helsingin Sanomat that he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo."[16] The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the first 1800 MHz network became operational in the UK by 1993, called the DCS 1800. Also that year,Telstrabecame the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSMmobile phonebecame available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, theGSM Associationformed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.[11]
In 2000 the first commercialGeneral Packet Radio Service(GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the firstMultimedia Messaging Service(MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational.Enhanced Data rates for GSM Evolution(EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11]
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the firstHSDPA-capable network also became operational. The firstHSUPAnetwork launched in 2007. (High Speed Packet Access(HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11]
TheGSM Associationestimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17]
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3GUniversal Mobile Telecommunications System(UMTS),code-division multiple access(CDMA) technology, nor the 4G LTEorthogonal frequency-division multiple access(OFDMA) technology standards issued by the 3GPP.[18]
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]
Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network.[20] The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States.[21] Optus in Australia completed the shutdown of its 2G GSM network on 1 August 2017; part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier, in April 2017.[22] Singapore shut down 2G services entirely in April 2017.[23]
The network is structured into several discrete sections: the base station subsystem, the network and switching subsystem, the GPRS core network, and the operations support system.
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro, micro, pico, femto, and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where thebase-stationantennais installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential orsmall-businessenvironments and connect to atelecommunications service provider's network via abroadband-internetconnection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height,antenna gain, andpropagationconditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell,[24]where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and thetiming advance.
GSM supports indoor coverage – achievable by using an indoor picocell base station, or anindoor repeaterwith distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of differentcarrier frequencyranges (separated intoGSM frequency rangesfor 2G andUMTS frequency bandsfor 3G), with most2GGSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most3Gnetworks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, seeGSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms.[25] TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26]
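The timing figures above fit together arithmetically: eight burst periods of roughly 577 µs make up one 4.615 ms TDMA frame, and the 270.833 kbit/s gross channel rate divides evenly across the eight timeslots. A minimal sketch of that arithmetic (illustrative only, not a GSM implementation):

```python
# Illustrative arithmetic for the GSM TDMA frame structure described above.
GROSS_RATE_KBITPS = 270.833      # gross channel data rate shared by all 8 timeslots
SLOTS_PER_FRAME = 8              # burst periods per TDMA frame
FRAME_DURATION_MS = 4.615        # duration of one TDMA frame

slot_duration_ms = FRAME_DURATION_MS / SLOTS_PER_FRAME      # ~0.577 ms per burst
gross_rate_per_slot = GROSS_RATE_KBITPS / SLOTS_PER_FRAME   # ~33.9 kbit/s per full-rate channel
bits_per_frame = GROSS_RATE_KBITPS * FRAME_DURATION_MS      # kbit/s * ms = bits, ~1250 bits

print(f"slot duration      : {slot_duration_ms:.3f} ms")
print(f"gross rate per slot: {gross_rate_per_slot:.3f} kbit/s")
print(f"bits per TDMA frame: {bits_per_frame:.0f} bits")
```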
The transmission power in the handset is limited to a maximum of 2 watts inGSM 850/900and1 wattinGSM 1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27] with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.
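Because these codecs operate on short speech frames, the quoted bitrates translate directly into a payload size per frame. A small illustrative calculation using the rates cited above; the 20 ms frame length is the usual GSM speech-codec framing and is stated here as an assumption rather than taken from the text:

```python
# Bits produced per 20 ms speech frame for the codec rates quoted above.
FRAME_MS = 20                                   # typical GSM speech-codec frame length (assumption)

codec_rates_kbitps = {
    "Half Rate (HR)":            6.5,
    "Full Rate (FR)":           13.0,
    "Enhanced Full Rate (EFR)": 12.2,
}

for name, rate in codec_rates_kbitps.items():
    bits_per_frame = rate * FRAME_MS            # kbit/s * ms = bits
    print(f"{name:26s}: {bits_per_frame:5.0f} bits per {FRAME_MS} ms frame")
```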
One of the key features of GSM is theSubscriber Identity Module, commonly known as aSIM card. The SIM is a detachablesmart card[3]containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets - simply by changing the SIM.
Sometimesmobile network operatorsrestrict handsets that they sell for exclusive use in their own network. This is calledSIM lockingand is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g.BrazilandGermany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]
GSM was intended to be a secure wireless system. It provides user authentication using a pre-shared key and challenge–response, as well as over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.[29]
Research findings indicate that GSM is susceptible to hacking by script kiddies, a term referring to inexperienced individuals using readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Although the term "script kiddies" implies a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality of cellular networks. Given that GSM remains the primary cellular technology in numerous countries, this susceptibility to malicious attacks needs to be addressed.[30]
The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), which uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.
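The one-way nature of GSM authentication can be made concrete with a short sketch of the challenge–response exchange: the network sends a random challenge, and the SIM proves knowledge of the pre-shared key by returning a signed response, while nothing authenticates the network to the user. This is a minimal sketch only; HMAC-SHA256 is used as a stand-in for the operator-specific A3/A8 algorithms (e.g. COMP128), not as the real computation:

```python
# Simplified sketch of GSM's one-way challenge-response authentication.
# A3/A8 are operator-specific; HMAC-SHA256 is used here purely as a runnable
# stand-in so the message flow is visible, not as the real algorithm.
import hmac, hashlib, os

def a3_a8_stand_in(ki: bytes, rand: bytes) -> tuple[bytes, bytes]:
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    sres = digest[:4]      # 32-bit signed response returned to the network
    kc = digest[4:12]      # 64-bit ciphering key handed to A5/x
    return sres, kc

ki = os.urandom(16)                          # pre-shared key stored in the SIM and the network
rand = os.urandom(16)                        # network's 128-bit random challenge

sres_sim, kc_sim = a3_a8_stand_in(ki, rand)  # computed on the SIM
sres_net, kc_net = a3_a8_stand_in(ki, rand)  # computed by the network

# The network authenticates the user by comparing SRES values.
# Nothing authenticates the network to the user -- the weakness noted above.
print("user authenticated:", hmac.compare_digest(sres_sim, sres_net))
print("shared ciphering key established:", kc_sim == kc_net)
```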
GSM uses several cryptographic algorithms for security. TheA5/1,A5/2, andA5/3stream ciphersare used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with aciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to useFPGAsthat allow A5/1 to be broken with arainbow tableattack.[31]The system supports multiple algorithms so operators may replace that cipher with a stronger one.
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both the A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been published in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known plaintext attacks.[32] He said that it is possible to build "a full GSM interceptor... from open-source components" but that they had not done so because of legal concerns.[33] Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.[34]
GSM usesGeneral Packet Radio Service(GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35]
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software forsniffingGPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g.,Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used withUSIMto prevent connections to fake base stations anddowngrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that, although it uses a 64-bit key, the GEA-1 algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have occurred by chance, suggesting that it was intentional. This may have been done in order to satisfy European controls on export of cryptographic programs.[36][37][38]
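The practical impact of losing 24 bits of security can be seen by comparing the key-search spaces involved. A rough back-of-the-envelope calculation; the keys-per-second figure is an arbitrary assumption chosen only for illustration:

```python
# Rough comparison of brute-force effort for a 64-bit key versus the
# effective 40-bit security reported for GEA-1.
FULL_KEY_BITS = 64
EFFECTIVE_BITS = 40
GUESSES_PER_SECOND = 1e9          # assumed attacker speed, purely illustrative

def seconds_to_search(bits: int) -> float:
    return (2 ** bits) / GUESSES_PER_SECOND

print(f"64-bit search space: {2**FULL_KEY_BITS:.3e} keys, "
      f"~{seconds_to_search(FULL_KEY_BITS) / (3600 * 24 * 365):.0f} years at 1e9 keys/s")
print(f"40-bit search space: {2**EFFECTIVE_BITS:.3e} keys, "
      f"~{seconds_to_search(EFFECTIVE_BITS) / 60:.1f} minutes at 1e9 keys/s")
print(f"reduction factor   : 2**{FULL_KEY_BITS - EFFECTIVE_BITS} "
      f"= {2**(FULL_KEY_BITS - EFFECTIVE_BITS):,}")
```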
The GSM systems and services are described in a set of standards governed byETSI, where a full list is maintained.[39]
Several open-source software projects exist that provide certain GSM features,[40] such as OpenBTS, which develops a base transceiver station, and the Osmocom stack, which provides various components.[41]
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are continually added to the standard, and these remain under patent protection for a number of years.[citation needed]
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain because of the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. As of 2011[update], there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
|
https://en.wikipedia.org/wiki/GSM#Security
|
3Grefers to the third-generation ofcellular networktechnology. These networks were rolled out beginning in the early 2000s and represented a significant advancement over the second-generation (2G), particularly in terms of data transfer speeds andmobile internetcapabilities. The major 3G standards areUMTS(developed by3GPP, succeedingGSM) andCDMA2000(developed byQualcomm, succeedingcdmaOne);[1][2]both of these are based on theIMT-2000specifications established by theInternational Telecommunication Union(ITU).
While 2G networks such asGPRSandEDGEsupported limited data services, 3G introduced significantly higher-speed mobile internet and enhancedmultimediacapabilities, in addition to improvedvoicequality.[3]It provided moderate internet speeds suitable for generalweb browsingand multimedia content includingvideo callingandmobile TV,[3]supporting services that provide an information transfer rate of at least 144kbit/s.[4][5]
Later 3G releases, often referred to as 3.5G (HSPA) and 3.75G (HSPA+) as well asEV-DO, introduced important improvements, enabling 3G networks to offermobile broadbandaccess with speeds ranging from severalMbit/sup to 42 Mbit/s.[6]These updates improved the reliability and speed of internet browsing, video streaming, and online gaming, enhancing the overall user experience forsmartphonesandmobile modemsin comparison to earlier 3G technologies. 3G was later succeeded by4Gtechnology, which provided even higher data transfer rates and introduced advancements in network performance.
A new generation of cellular standards has emerged roughly every decade since the introduction of1Gsystems in 1979. Each generation is defined by the introduction of newfrequency bands, higher data rates, and transmission technologies that are not backward-compatible due to the need for significant changes in network architecture and infrastructure.
Several telecommunications companies marketed wireless mobile Internet services as 3G, indicating that the advertised service was provided over a 3G wireless network. However, 3G services have largely been supplanted in marketing by 4G and 5G services in most areas of the world. Services advertised as 3G are required to meet IMT-2000 technical standards, including standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, third-generation mobile networks, or 3G, must maintain minimum consistent Internet speeds of 144 kbit/s.[5] However, many services advertised as 3G provide higher speed than the minimum technical requirements for a 3G service.[7] Subsequent 3G releases, denoted 3.5G and 3.75G, provided mobile broadband access of several Mbit/s for smartphones and mobile modems in laptop computers.[8]
3G branded standards:
The 3G systems and radio interfaces are based onspread spectrumradio transmission technology. While theGSM EDGEstandard ("2.9G"),DECTcordless phones andMobile WiMAXstandards formally also fulfill the IMT-2000 requirements and are approved as 3G standards by ITU, these are typically not branded as 3G and are based on completely different technologies.
The common standards complying with the IMT2000/3G standard are:
While DECT cordless phones andMobile WiMAXstandards formally also fulfill the IMT-2000 requirements, they are not usually considered due to their rarity and unsuitability for usage with mobile phones.[9]
The 3G (UMTS and CDMA2000) research and development projects started in 1992. In 1999, ITU approved five radio interfaces for IMT-2000 as a part of the ITU-R M.1457 Recommendation;WiMAXwas added in 2007.[10]
There are evolutionary standards (EDGE and CDMA) that are backward-compatible extensions to pre-existing2Gnetworks as well as revolutionary standards that require all-new network hardware and frequency allocations. The cell phones use UMTS in combination with 2G GSM standards and bandwidths, butdo not support EDGE. The latter group is theUMTSfamily, which consists of standards developed for IMT-2000, as well as the independently developed standardsDECTand WiMAX, which were included because they fit the IMT-2000 definition.
WhileEDGEfulfills the 3G specifications, most GSM/UMTS phones report EDGE ("2.75G") and UMTS ("3G") functionality.[11]
3G technology was the result of research and development work carried out by theInternational Telecommunication Union(ITU) in the early 1980s. 3G specifications and standards were developed in fifteen years. The technical specifications were made available to the public under the name IMT-2000. The communication spectrum between 400 MHz to 3 GHz was allocated for 3G. Both the government and communication companies approved the 3G standard. The first pre-commercial 3G network was launched byNTT DoCoMoin Japan in 1998,[12]branded asFOMA. It was first available in May 2001 as a pre-release (test) ofW-CDMAtechnology. The first commercial launch of 3G was also by NTT DoCoMo in Japan on 1 October 2001, although it was initially somewhat limited in scope;[13][14]broader availability of the system was delayed by apparent concerns over its reliability.[15][16][17][18][19]
The first European pre-commercial network was anUMTSnetwork on theIsle of ManbyManx Telecom, the operator then owned byBritish Telecom, and the first commercial network (also UMTS based W-CDMA) in Europe was opened for business byTelenorin December 2001 with no commercial handsets and thus no paying customers.
The first network to go commercially live was bySK Telecomin South Korea on the CDMA-based1xEV-DOtechnology in January 2002. By May 2002, the second South Korean 3G network was byKTon EV-DO and thus the South Koreans were the first to see competition among 3G operators.
The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but the network provider later shut down operations. The second 3G network operator in the US was Verizon Wireless in July 2002, also on CDMA2000 1x EV-DO. AT&T Mobility also operated a true 3G UMTS network, having completed its upgrade of the 3G network to HSUPA.
The first commercial United Kingdom 3G network was started by Hutchison Telecom, which was originally behind Orange S.A.[20] In 2003, it announced the first commercial third-generation, or 3G, mobile phone network in the UK.
The first pre-commercial demonstration network in the southern hemisphere was built inAdelaide, South Australia, by m.Net Corporation in February 2002 using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications branded asThreeor "3" in June 2003.[21]
InIndia, on 11 December 2008, the first 3G mobile and internet services were launched by a state-owned company, Mahanagar Telecom Nigam Limited (MTNL), within the metropolitan cities of Delhi and Mumbai. After MTNL, another state-owned company, Bharat Sanchar Nigam Limited (BSNL), began deploying the 3G networks country-wide.
Emtellaunched the first 3G network in Africa.[22]
Japanwas one of the first countries to adopt 3G, the reason being the process of 3G spectrum allocation, which in Japan was awarded without much upfront cost. The frequency spectrum was allocated in the US and Europe based on auctioning, thereby requiring a huge initial investment for any company wishing to provide 3G services. European companies collectively paid over 100 billion dollars in their spectrum auctions.[23]
Nepal Telecom was the first in southern Asia to adopt 3G service. However, 3G was relatively slow to be adopted in Nepal. In some instances, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. Other countries' delays were due to the expense of upgrading transmission hardware, especially for UMTS, whose deployment required the replacement of most broadcast towers. Due to these issues and difficulties with deployment, many carriers delayed or were unable to acquire these updated capabilities.
In December 2007, 190 3G networks were operating in 40 countries and 154HSDPAnetworks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada, and the US, telecommunication companies useW-CDMAtechnology with the support of around 100 terminal designs to operate 3G mobile networks.
The roll-out of 3G networks was delayed by the enormous costs of additional spectrum licensing fees in some countries. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses andsealed bid auctions, and initial excitement over 3G's potential. This led to atelecoms crashthat ran concurrently with similar crashes in thefibre-opticanddot.comfields.
The 3G standard is perhaps well known because of a massive expansion of the mobile communications market post-2G and advances of the consumer mobile phone. An especially notable development during this time was the smartphone (for example, the iPhone and the Android family), combining the abilities of a PDA with a mobile phone and leading to widespread demand for mobile internet connectivity. 3G also introduced the term "mobile broadband" because its speed and capability made it a viable alternative for internet browsing; USB modems connecting to 3G, and later 4G, networks became increasingly common.
By June 2007, the 200 millionth 3G subscriber had been connected, of which 10 million were in Nepal and 8.2 million in India. This 200 million is only 6.7% of the 3 billion mobile phone subscriptions worldwide. (When counting CDMA2000 1x RTT customers—max bitrate 72% of the 200 kbit/s which defines 3G—the total size of the nearly-3G subscriber base was 475 million as of June 2007, which was 15.8% of all subscribers worldwide.) In the countries where 3G was launched first – Japan and South Korea – 3G penetration is over 70%.[24] In Europe the leading country[when?] for 3G penetration is Italy, with a third of its subscribers migrated to 3G. Other leading countries[when?] for 3G use include Nepal, the UK, Austria, Australia and Singapore at the 32% migration level.
According to ITU estimates,[25] as of Q4 2012 there were 2096 million active mobile-broadband[vague] subscribers worldwide out of a total of 6835 million subscribers—just over 30%. About half the mobile-broadband subscriptions are for subscribers in developed nations, 934 million out of 1600 million total, well over 50%. Note, however, that there is a distinction between a phone with mobile-broadband connectivity and a smartphone with a large display and so on. According[26] to the ITU and informatandm.com, the US has 321 million mobile subscriptions, including 256 million that are 3G or 4G, which is both 80% of the subscriber base and 80% of the US population; yet according[25] to ComScore, just a year earlier in Q4 2011 only about 42% of people surveyed in the US reported that they owned a smartphone. In Japan, 3G penetration was similar at about 81%, but smartphone ownership was lower at about 17%.[25] In China, there were 486.5 million 3G subscribers in June 2014,[27] in a population of 1,385,566,537 (2013 UN estimate).
Since the increasing adoption of 4G networks across the globe, 3G use has been in decline. Several operators around the world have already shut down or are in the process of shutting down their 3G networks (see table below). In several places, 3G is being shut down while its older predecessor 2G is being kept in operation; Vodafone UK is doing this, citing 2G's usefulness as a low-power fallback.[28] EE in the UK plans to switch off its 3G networks in early 2024.[29] In the US, Verizon shut down its 3G services on 31 December 2022,[30] T-Mobile shut down Sprint's networks on 31 March 2022 and shut down its main networks on 1 July 2022,[31] and AT&T did so on 22 February 2022.[32]
Currently, 3G around the world is declining in availability and support, and technologies that depend on 3G are becoming inoperable in many places. For example, the European Union plans to ensure that member countries maintain 2G networks as a fallback[citation needed], so 3G devices that are backwards compatible with 2G frequencies can continue to be used. However, in countries that plan to decommission 2G networks or have already done so, such as the United States and Singapore, devices supporting only 3G and backwards compatible with 2G are becoming inoperable.[33] As of February 2022, less than 1% of cell phone customers in the United States used 3G; AT&T offered free replacement devices to some customers in the run-up to its shutdown.[34]
It has been estimated that there are almost 8,000 patents declared essential (FRAND) related to the 483 technical specifications which form the3GPPand3GPP2standards.[35][36]Twelve companies accounted in 2004 for 90% of the patents (Qualcomm,Ericsson,Nokia,Motorola,Philips,NTT DoCoMo,Siemens,Mitsubishi,Fujitsu,Hitachi,InterDigital, andMatsushita).
Even then, some patentsessentialto 3G might not have been declared by their patent holders. It is believed thatNortelandLucenthave undisclosed patents essential to these standards.[36]
Furthermore, the existing 3G Patent Platform Partnership Patent pool has little impact onFRANDprotection because it excludes the four largest patent owners for 3G.[37][38]
ITU has not provided a clear[39][vague]definition of the data rate that users can expect from 3G equipment or providers. Thus users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 348 kbit/s in a moving vehicle,"[40]the ITU does not actually clearly specify minimum required rates, nor required average rates, nor what modes[clarification needed]of the interfaces qualify as 3G, so various[vague]data rates are sold as '3G' in the market.
In a market implementation, 3G downlink data speeds defined by telecom service providers vary depending on the underlying technology deployed: up to 384 kbit/s for UMTS (WCDMA), up to 7.2 Mbit/s for HSPA, and a theoretical maximum of 21.1 Mbit/s for HSPA+ and 42.2 Mbit/s for DC-HSPA+ (technically 3.5G, but usually grouped under the trade name of 3G).[citation needed]
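These headline rates are easier to compare as transfer times for a fixed payload. A small illustrative calculation using the per-technology peaks quoted above; real-world throughput is lower than these theoretical maxima, and the 100 MB file size is an arbitrary example:

```python
# Time to transfer a 100 MB file at the theoretical peak downlink rates quoted above.
FILE_MB = 100
FILE_BITS = FILE_MB * 8 * 1e6         # using 1 MB = 10**6 bytes for simplicity

peak_rates_mbitps = {
    "UMTS (WCDMA)": 0.384,
    "HSPA":         7.2,
    "HSPA+":        21.1,
    "DC-HSPA+":     42.2,
}

for tech, rate in peak_rates_mbitps.items():
    seconds = FILE_BITS / (rate * 1e6)
    print(f"{tech:13s}: {seconds / 60:6.1f} minutes" if seconds > 120
          else f"{tech:13s}: {seconds:6.1f} seconds")
```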
3G networks offer greater security than their 2G predecessors. By allowing the UE (User Equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator.[41]
3G networks use theKASUMIblock cipherinstead of the olderA5/1stream cipher. However, a number of serious weaknesses in the KASUMI cipher have been identified.
In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.
The bandwidth and location capabilities introduced by 3G networks enabled a wide range of applications that were previously impractical or unavailable on 2G networks. Among the most significant advancements was the ability to perform data-intensive tasks, such as browsing the internet seamlessly while on the move, as well as engaging in other activities that benefited from faster data speeds and enhanced reliability.
Beyond personal communication, 3G networks supported applications in various fields, includingmedical devices,fire alarms, and ankle monitors. This versatility marked a significant milestone in cellular communications, as 3G became the first network to enable such a broad range of use cases.[42]By expanding its functionality beyond traditionalmobile phoneusage, 3G set the stage for the integration ofcellular networksinto a wide array of technologies and services, paving the way for further advancements with subsequent generations of mobile networks.
Both3GPPand3GPP2are working on the extensions to 3G standards that are based on anall-IP network infrastructureand using advanced wireless technologies such asMIMO. These specifications already display features characteristic forIMT-Advanced(4G), the successor of 3G. However, falling short of the bandwidth requirements for 4G (which is 1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G.
3GPP plans to meet the 4G goals withLTE Advanced, whereas Qualcomm has haltedUMBdevelopment in favour of the LTE family.[43]
On 14 December 2009,TeliaSoneraannounced in an official press release that "We are very proud to be the first operator in the world to offer our customers 4G services."[44]With the launch of their LTE network, initially they are offeringpre-4G(orbeyond 3G) services in Stockholm, Sweden and Oslo, Norway.
|
https://en.wikipedia.org/wiki/3G#Security
|
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" atCOMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson modelT39that actually made it to store shelves in June 2001. However Ericsson released the R520m in Quarter 1 of 2001,[23]making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
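The channel plan described above can be written down directly: 79 Classic channels spaced 1 MHz apart starting at 2402 MHz, and 40 Bluetooth Low Energy channels spaced 2 MHz apart. The following minimal sketch derives the channel centre frequencies and mimics the 1600 hop/s channel hopping with an ordinary pseudo-random generator; the real hop-selection kernel, which is derived from the master's address and clock, is not reproduced here:

```python
# Channel maps and a toy illustration of frequency hopping.
# random.choice below is only a stand-in for the real hop-selection algorithm.
import random

classic_channels = [2402 + k for k in range(79)]   # 79 channels, 1 MHz apart (MHz)
ble_channels = [2402 + 2 * k for k in range(40)]   # 40 BLE channels, 2 MHz apart (MHz)

HOPS_PER_SECOND = 1600
dwell_time_us = 1e6 / HOPS_PER_SECOND              # 625 us per hop

print(f"Classic: {classic_channels[0]}-{classic_channels[-1]} MHz, "
      f"{len(classic_channels)} channels, dwell {dwell_time_us:.0f} us")
print(f"BLE    : {ble_channels[0]}-{ble_channels[-1]} MHz, {len(ble_channels)} channels")

# Toy hop sequence for the first ten hops.
hop_sequence = [random.choice(classic_channels) for _ in range(10)]
print("toy hop sequence (MHz):", hop_sequence)
```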
Originally,Gaussian frequency-shift keying(GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK(differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneousbit rateof 1Mbit/sis possible. The termEnhanced Data Rate(EDR) is used to describe π/4-DPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
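Because BR, EDR2 and EDR3 keep the same 1 Msym/s symbol rate and change only how many bits each symbol carries, the quoted data rates follow directly; the HDR extension above uses wider 4 MHz channels and is not covered by this simple relation. The 1 Msym/s symbol rate is the standard BR/EDR value and is stated here as background rather than taken from the text:

```python
# Instantaneous bit rates follow from the 1 Msym/s symbol rate and the
# bits carried per symbol by each modulation scheme.
SYMBOL_RATE_MSYMPS = 1.0            # Bluetooth BR/EDR symbol rate

schemes = {
    "BR   (GFSK)":       1,         # 1 bit per symbol  -> 1 Mbit/s
    "EDR2 (pi/4-DQPSK)": 2,         # 2 bits per symbol -> 2 Mbit/s
    "EDR3 (8-DPSK)":     3,         # 3 bits per symbol -> 3 Mbit/s
}

for name, bits_per_symbol in schemes.items():
    print(f"{name:18s}: {SYMBOL_RATE_MSYMPS * bits_per_symbol:.0f} Mbit/s")
```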
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
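The slot structure lends itself to a tiny timing sketch: the master owns the even-numbered 625 µs slots and the slave the odd ones, so single-slot packets alternate direction every slot. A minimal illustration of that schedule, covering single-slot packets only:

```python
# Who transmits in each 625 us slot of a simple single-slot-packet exchange.
CLOCK_TICK_US = 312.5
SLOT_US = 2 * CLOCK_TICK_US          # 625 us
SLOT_PAIR_US = 2 * SLOT_US           # 1250 us

for slot in range(8):
    sender = "master" if slot % 2 == 0 else "slave"
    start_us = slot * SLOT_US
    print(f"slot {slot} (t = {start_us:6.1f} us): {sender} transmits")

print(f"one slot pair = {SLOT_PAIR_US:.0f} us")
```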
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28]whichuses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and theair and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
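A rough link-budget sketch shows why transmit power and receiver sensitivity dominate range. This uses the free-space path-loss formula, which ignores walls, fading and antenna orientation, so it gives an optimistic upper bound consistent with the open-field figures above; the transmit power and sensitivity values are illustrative assumptions, not figures from the specification:

```python
# Optimistic free-space range estimate for a 2.4 GHz Bluetooth link.
# The transmit power and sensitivity values below are assumed, for illustration only.
import math

FREQ_HZ = 2.44e9                                      # middle of the 2.4 GHz ISM band
FSPL_CONST = 20 * math.log10(FREQ_HZ) - 147.55        # frequency-dependent part of free-space path loss

def max_free_space_range_m(tx_dbm: float, sensitivity_dbm: float) -> float:
    """Largest distance at which received power still meets the sensitivity (0 dBi antennas)."""
    allowed_path_loss_db = tx_dbm - sensitivity_dbm
    return 10 ** ((allowed_path_loss_db - FSPL_CONST) / 20)

links = {
    "Class 2 <-> Class 2 (2.5 mW = 4 dBm, -85 dBm sens.)": (4, -85),
    "Class 1 <-> Class 1 (100 mW = 20 dBm, -95 dBm sens.)": (20, -95),
}
for name, (tx, sens) in links.items():
    print(f"{name}: ~{max_free_space_range_m(tx, sens):.0f} m (free space)")
```

In practice, indoor attenuation and fading reduce these figures dramatically, which is why typical Class 2 links are specified in the tens of metres despite the optimistic free-space result.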
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones,speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definitionheadsets,modems,hearing aids[53]and even watches.[54]Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55]Bluetooth devices can advertise all of the services they provide.[56]This makes using services easier, because more of the security,network addressand permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While somedesktop computersand most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor,IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
ForMicrosoftplatforms,Windows XP Service Pack 2and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58]Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59]Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58]Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58]The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP,DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58]Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Appleproducts have worked with Bluetooth sinceMac OSX v10.2, which was released in 2002.[60]
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]
FreeBSDhas included Bluetooth since its v5.0 release, implemented throughnetgraph.[64][65]
NetBSDhas included Bluetooth since its v4.0 release.[66][67]Its Bluetooth stack was ported toOpenBSDas well, however OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSDhas had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71]Anetgraph-based implementation fromFreeBSDhas also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by theBluetooth Special Interest Group(SIG) and formally announced on 20 May 1998.[74]In 2014 it had a membership of over 30,000 companies worldwide.[75]It was established byEricsson,IBM,Intel,NokiaandToshiba, and later joined by many other companies.
All versions of the Bluetooth standards arebackward-compatiblewith all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for fasterdata transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s.[79]EDR uses a combination ofGFSKandphase-shift keyingmodulation (PSK) with two variants, π/4-DQPSKand 8-DPSK.[81]EDR can provide a lower power consumption through a reducedduty cycle.
The specification is published asBluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 issecure simple pairing(SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, includingextended inquiry response(EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81]was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated802.11link.
The main new feature isAMP(Alternative MAC/PHY), the addition of802.11as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84]or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended forUWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, theWiMedia Allianceannounced it was entering into technology transfer agreements for the WiMediaUltra-wideband(UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG),Wireless USBPromoter Group and theUSB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, theBluetooth Special Interest Groupsuspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of formerWiMediamembers had not and would not sign up to the necessary agreements for theIPtransfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart) and has been adopted as of 30 June 2010[update]. It includesClassic Bluetooth,Bluetooth high speedandBluetooth Low Energy(BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier implementations.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared toClassic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining asimilar communication range. In terms of lengthening the battery life of Bluetooth devices,BLErepresents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108]it introduces features for theInternet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change was made for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 also increases the capacity of connectionless services, such as location-relevant navigation,[115] for low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as SDP (a protocol used to find other Bluetooth devices within communication range, also responsible for detecting the functions of devices in range), RFCOMM (a protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through L2CAP (the Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets.
The hardware that makes up the Bluetooth device is logically made up of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for baseband processing and for management of the ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC (codec)) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, whose function is to communicate with other devices through the LMP protocol.
A Bluetooth device is ashort-rangewirelessdevice. Bluetooth devices arefabricatedonRF CMOSintegrated circuit(RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135]Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols:HCIand RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
InBasicmode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the defaultMTU, and 48 bytes as the minimum mandatory supported MTU.
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
TheService Discovery Protocol(SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine whichBluetooth profilesthe headset can use (Headset Profile, Hands Free Profile (HFP),Advanced Audio Distribution Profile (A2DP)etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by aUniversally unique identifier(UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
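As a hedged illustration of the short-form UUIDs mentioned above (not part of the SDP specification text itself), the Python sketch below expands a 16-bit assigned number to its full 128-bit form using the Bluetooth Base UUID; the helper name is hypothetical.

```python
import uuid

# Bluetooth Base UUID used to expand short-form (16-bit or 32-bit) UUIDs.
BLUETOOTH_BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_short_uuid(short: int) -> uuid.UUID:
    """Expand a 16-bit (or 32-bit) assigned UUID to its 128-bit form.

    The short value occupies the top 32 bits of the Base UUID:
    full UUID = (short << 96) + Base UUID.
    """
    return uuid.UUID(int=BLUETOOTH_BASE_UUID.int + (short << 96))

# Example: 0x110B is commonly listed as the short UUID of the A2DP audio sink.
print(expand_short_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```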
Radio Frequency Communications(RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulatesEIA-232(formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
TheBluetooth Network Encapsulation Protocol(BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function toSNAPin Wireless LAN.
TheAudio/Video Control Transport Protocol(AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel; it is also intended for use by the video distribution profile in Bluetooth transmission.
TheTelephony Control Protocol– Binary(TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected byerror correction, either 1/3 rateforward error correction(FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged byautomatic repeat request(ARQ).
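To give a feel for the 1/3-rate code, here is a minimal Python sketch, assuming the common description of Bluetooth's 1/3-rate FEC as a simple bit-repetition code decoded by majority vote; it is illustrative only and not an implementation of the Bluetooth baseband.

```python
def fec13_encode(bits):
    """1/3-rate repetition FEC: transmit every payload bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec13_decode(coded):
    """Recover each payload bit by majority vote over its three copies."""
    assert len(coded) % 3 == 0
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

payload = [1, 0, 1, 1]
tx = fec13_encode(payload)
tx[4] ^= 1                           # single bit error on the channel
assert fec13_decode(tx) == payload   # corrected by majority vote
```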
Any Bluetooth device indiscoverable modetransmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has aunique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range namedT610(seeBluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating ashared secretknown as alink key. If both devices store the same link key, they are said to bepairedorbonded. A device that wants to communicate only with a bonded device cancryptographicallyauthenticatethe identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticatedACLlink between the devices may beencryptedto protect exchanged data againsteavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption was not required and could be turned off at any time. Moreover, the encryption key was only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implementsconfidentiality,authenticationandkeyderivation with custom algorithms based on theSAFER+block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136]TheE0stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.[137]
In September 2008, theNational Institute of Standards and Technology(NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See thepairing mechanismssection for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form ofDoSis also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141] In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142] In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment.[143]

In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.[144] The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system.

In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145] This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobilemalwareworm known as Lasco surfaced. The worm began targeting mobile phones usingSymbian OS(Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other.SISfiles on the device, allowing replication to another device through the use of removable media (Secure Digital,CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005,University of Cambridgesecurity researchers published results of their actual implementation of passive attacks against thePIN-basedpairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police inCambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers fromSecure NetworkandF-Securepublished a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, includingMicrosoft Windows,Linux, AppleiOS, and GoogleAndroid. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at theSingapore University of Technology and Design, Helmholtz Center for Information Security, andUniversity of Oxforddiscovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160]Google released anAndroidsecurity patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers fromEurecomrevealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range,[164] which is non-ionizing radiation of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones.[165] UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Security
|
Wired Equivalent Privacy(WEP) is an obsolete, severely flawedsecurityalgorithm for 802.11wireless networks. Introduced as part of the originalIEEE 802.11standard ratified in 1997, its intention was to provide security/privacy comparable to that of a traditional wirednetwork.[1]WEP, recognizable by its key of 10 or 26hexadecimaldigits (40 or 104 bits), was at one time widely used, and was often the first security choice presented to users by router configuration tools.[2][3]After a severe design flaw in the algorithm was disclosed in 2001,[4]WEP was no longer considered a secure method of wireless connection; however, in the vast majority of cases, Wi-Fi hardware devices relying on WEP security could not be upgraded to secure operation. Some of WEP's design flaws were addressed in WEP2, but it also proved insecure, and never saw wide adoption or standardization.[5]
In 2003, theWi-Fi Allianceannounced that WEP and WEP2 had been superseded byWi-Fi Protected Access(WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 have been deprecated.[6]WPA retained some design characteristics of WEP that remained problematic.
WEP was the only encryption protocol available to802.11aand802.11bdevices built before the WPA standard, which was available for802.11gdevices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in.[7]
WEP was ratified as a Wi-Fi security standard in 1999. The first versions of WEP were not particularly strong, even for the time they were released, due to U.S. restrictions on the export of various cryptographic technologies. These restrictions led to manufacturers restricting their devices to only 64-bit encryption. When the restrictions were lifted, the encryption was increased to 128 bits. Despite the introduction of 256-bit WEP, 128-bit remains one of the most common implementations.[8]
WEP was included as the privacy component of the originalIEEE 802.11[9]standard ratified in 1997.[10][11]WEP uses thestream cipherRC4forconfidentiality,[12]and theCRC-32checksum forintegrity.[13]It was deprecated in 2004 and is documented in the current standard.[14]
Standard 64-bit WEP uses a 40-bitkey (also known as WEP-40), which is concatenated with a 24-bitinitialization vector(IV) to form the RC4 key. At the time that the original WEP standard was drafted,the U.S. Government's export restrictions on cryptographic technologylimited thekey size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).
A 64-bit WEP key is usually entered as a string of 10 hexadecimal (base 16) characters (0–9 and A–F). Each character represents 4 bits; 10 digits of 4 bits each gives 40 bits, and adding the 24-bit IV produces the complete 64-bit WEP key (4 bits × 10 + 24-bit IV = 64-bit WEP key). Most devices also allow the user to enter the key as 5 ASCII characters (0–9, a–z, A–Z), each of which is turned into 8 bits using the character's byte value in ASCII (8 bits × 5 + 24-bit IV = 64-bit WEP key); however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.
A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of 4 bits each gives 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key (4 bits × 26 + 24-bit IV = 128-bit WEP key). Most devices also allow the user to enter it as 13 ASCII characters (8 bits × 13 + 24-bit IV = 128-bit WEP key).
152-bit and 256-bit WEP systems are available from some vendors. As with the other WEP variants, 24 bits of that is for the IV, leaving 128 or 232 bits for actual protection. These 128 or 232 bits are typically entered as 32 or 58 hexadecimal characters (4 bits × 32 + 24-bit IV = 152-bit WEP key, 4 bits × 58 + 24-bit IV = 256-bit WEP key). Most devices also allow the user to enter it as 16 or 29 ASCII characters (8 bits × 16 + 24-bit IV = 152-bit WEP key, 8 bits × 29 + 24-bit IV = 256-bit WEP key).
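To make the key layout concrete, here is a minimal Python sketch (illustrative only; the helper names are hypothetical) that parses a WEP key entered as hex digits or ASCII characters and prepends the 24-bit IV to form the per-packet RC4 seed, as the arithmetic above describes.

```python
import os

def parse_wep_key(entered: str) -> bytes:
    """Parse a user-entered WEP key: hex digits (10/26/32/58 chars)
    or ASCII characters (5/13/16/29 chars), yielding the secret key bytes."""
    if len(entered) in (10, 26, 32, 58):      # hexadecimal entry
        return bytes.fromhex(entered)
    if len(entered) in (5, 13, 16, 29):       # ASCII entry
        return entered.encode("ascii")
    raise ValueError("unsupported WEP key length")

def per_packet_rc4_seed(iv: bytes, secret: bytes) -> bytes:
    """WEP seeds RC4 with the 24-bit IV concatenated with the secret key."""
    assert len(iv) == 3
    return iv + secret

secret = parse_wep_key("ABCDE")       # 5 ASCII chars -> 40-bit WEP-40 key
iv = os.urandom(3)                    # 24-bit IV, transmitted in the clear
seed = per_packet_rc4_seed(iv, secret)
print(len(seed) * 8)                  # 64 (bits): 24-bit IV + 40-bit key
```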
Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.
In Open System authentication, the WLAN client does not provide its credentials to the access point during authentication. Any client can authenticate with the access point and then attempt to associate. In effect, no authentication occurs. Subsequently, WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.
In Shared Key authentication, the WEP key is used for authentication in a four-stepchallenge–responsehandshake:
After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.
At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication.[15]Therefore, data can be more easily intercepted and decrypted with Shared Key authentication than with Open System authentication. If privacy is a primary concern, it is more advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication; however, this also means that any WLAN client can connect to the AP. (Both authentication mechanisms are weak; Shared Key WEP is deprecated in favor of WPA/WPA2.)
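The weakness can be sketched in a few lines of Python (conceptual only, and simplified: real WEP frames also cover a CRC-32 ICV, ignored here). An eavesdropper who captures the plaintext challenge and the encrypted response can XOR them to recover the RC4 keystream for that IV, which is enough to answer a later challenge of the same length.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def recover_keystream(challenge: bytes, encrypted_response: bytes) -> bytes:
    """The response is challenge XOR keystream (under the chosen IV),
    so XORing the two captured frames yields the keystream itself."""
    return xor_bytes(challenge, encrypted_response)

def forge_response(new_challenge: bytes, keystream: bytes) -> bytes:
    """Reusing the same IV, the attacker answers a fresh challenge
    without ever knowing the WEP key."""
    return xor_bytes(new_challenge, keystream)
```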
Because RC4 is astream cipher, the same traffic key must never be used twice. The purpose of an IV, which is transmitted as plaintext, is to prevent any repetition, but a 24-bit IV is not long enough to ensure this on a busy network. The way the IV was used also opened WEP to arelated-key attack. For a 24-bit IV, there is a 50% probability the same IV will repeat after 5,000 packets.
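The 50% figure follows from the birthday bound; a quick back-of-the-envelope computation (a sketch, assuming IVs are drawn uniformly at random) gives roughly 4,800 packets, consistent with the figure of about 5,000 quoted above.

```python
import math

iv_space = 2 ** 24          # size of the 24-bit IV space
# Birthday bound: the probability of at least one repeat after n random draws
# from a space of size N is about 1 - exp(-n*(n-1)/(2*N)); solving for 50%:
n_for_half = math.sqrt(2 * iv_space * math.log(2))
print(round(n_for_half))    # ~4823 packets for a 50% chance of a repeated IV
```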
In August 2001,Scott Fluhrer,Itsik Mantin, andAdi Shamirpublished acryptanalysisof WEP[4]that exploits the way the RC4 ciphers and IV are used in WEP, resulting in a passive attack that can recover the RC4keyafter eavesdropping on the network. Depending on the amount of network traffic, and thus the number of packets available for inspection, a successful key recovery could take as little as one minute. If an insufficient number of packets are being sent, there are ways for an attacker to send packets on the network and thereby stimulate reply packets, which can then be inspected to find the key. The attack was soon implemented, and automated tools have since been released. It is possible to perform the attack with a personal computer, off-the-shelf hardware, and freely available software such asaircrack-ngto crackanyWEP key in minutes.
Cam-Winget et al.[16]surveyed a variety of shortcomings in WEP. They wrote "Experiments in the field show that, with proper equipment, it is practical to eavesdrop on WEP-protected networks from distances of a mile or more from the target." They also reported two generic weaknesses:
In 2005, a group from the U.S.Federal Bureau of Investigationgave a demonstration where they cracked a WEP-protected network in three minutes using publicly available tools.[17]Andreas Klein presented another analysis of the RC4 stream cipher. Klein showed that there are more correlations between the RC4 keystream and the key than the ones found by Fluhrer, Mantin, and Shamir, which can additionally be used to break WEP in WEP-like usage modes.
In 2006, Bittau,Handley, and Lackey showed[2]that the 802.11 protocol itself can be used against WEP to enable earlier attacks that were previously thought impractical. After eavesdropping a single packet, an attacker can rapidly bootstrap to be able to transmit arbitrary data. The eavesdropped packet can then be decrypted one byte at a time (by transmitting about 128 packets per byte to decrypt) to discover the local network IP addresses. Finally, if the 802.11 network is connected to the Internet, the attacker can use 802.11 fragmentation to replay eavesdropped packets while crafting a new IP header onto them. The access point can then be used to decrypt these packets and relay them on to a buddy on the Internet, allowing real-time decryption of WEP traffic within a minute of eavesdropping the first packet.
In 2007, Erik Tews, Andrei Pyshkin, and Ralf-Philipp Weinmann were able to extend Klein's 2005 attack and optimize it for usage against WEP. With the new attack[18]it is possible to recover a 104-bit WEP key with a probability of 50% using only 40,000 captured packets. For 60,000 available data packets, the success probability is about 80%, and for 85,000 data packets, about 95%. Using active techniques likeWi-Fi deauthentication attacksandARPre-injection, 40,000 packets can be captured in less than one minute under good conditions. The actual computation takes about 3 seconds and 3 MB of main memory on aPentium-M1.7 GHz and can additionally be optimized for devices with slower CPUs. The same attack can be used for 40-bit keys with an even higher success probability.
In 2008 thePayment Card Industry Security Standards Council(PCI SSC) updated theData Security Standard(DSS) to prohibit use of WEP as part of any credit-card processing after 30 June 2010, and prohibit any new system from being installed that uses WEP after 31 March 2009. The use of WEP contributed to theTJ Maxxparent company network invasion.[19]
The Caffe Latte attack is another way to defeat WEP. It is not necessary for the attacker to be in the area of thenetworkusing this exploit. By using a process that targets theWindowswireless stack, it is possible to obtain the WEP key from a remote client.[20]By sending a flood of encryptedARPrequests, the assailant takes advantage of the shared key authentication and the message modification flaws in 802.11 WEP. The attacker uses the ARP responses to obtain the WEP key in less than 6 minutes.[21]
Use of encryptedtunneling protocols(e.g.,IPsec,Secure Shell) can provide secure data transmission over an insecure network. However, replacements for WEP have been developed with the goal of restoring security to the wireless network itself.
The recommended solution to WEP security problems is to switch to WPA2.WPAwas an intermediate solution for hardware that could not support WPA2. Both WPA and WPA2 are much more secure than WEP.[22]To add support for WPA or WPA2, some old Wi-Fiaccess pointsmight need to be replaced or have theirfirmwareupgraded. WPA was designed as an interim software-implementable solution for WEP that could forestall immediate deployment of new hardware.[23]However,TKIP(the basis of WPA) has reached the end of its designed lifetime, has been partially broken, and has been officially deprecated with the release of the 802.11-2012 standard.[24]
This stopgap enhancement to WEP was present in some of the early 802.11i drafts. It was implementable onsome(not all) hardware not able to handle WPA or WPA2, and extended both the IV and the key values to 128 bits.[9]It was hoped to eliminate the duplicate IV deficiency as well as stopbrute-force key attacks.
After it became clear that the overall WEP algorithm was deficient (and not just the IV and key sizes) and would require even more fixes, both the WEP2 name and original algorithm were dropped. The two extended key lengths remained in what eventually became WPA'sTKIP.
WEPplus, also known as WEP+, is a proprietary enhancement to WEP byAgere Systems(formerly a subsidiary ofLucent Technologies) that enhances WEP security by avoiding "weak IVs".[25]It is only completely effective when WEPplus is used atboth endsof the wireless connection. As this cannot easily be enforced, it remains a serious limitation. It also does not necessarily preventreplay attacks, and is ineffective against later statistical attacks that do not rely on weak IVs.
Dynamic WEP refers to the combination of 802.1x technology and theExtensible Authentication Protocol. Dynamic WEP changes WEP keys dynamically. It is a vendor-specific feature provided by several vendors such as3Com.
The dynamic change idea made it into 802.11i as part of TKIP, but not for the WEP protocol itself.
|
https://en.wikipedia.org/wiki/Wired_Equivalent_Privacy#Authentication
|
In cryptography, ablock cipher mode of operationis an algorithm that uses ablock cipherto provideinformation securitysuch asconfidentialityorauthenticity.[1]A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group ofbitscalled ablock.[2]A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.[3][4][5]
Most modes require a unique binary sequence, often called aninitialization vector(IV), for each encryption operation. The IV must be non-repeating, and for some modes must also be random. The initialization vector is used to ensure that distinctciphertextsare produced even when the sameplaintextis encrypted multiple times independently with the samekey.[6]Block ciphers may be capable of operating on more than oneblock size, but during transformation the block size is always fixed. Block cipher modes operate on whole blocks and require that the final data fragment bepaddedto a full block if it is smaller than the current block size.[2]There are, however, modes that do not require padding because they effectively use a block cipher as astream cipher.
Historically, encryption modes have been studied extensively in regard to their error propagation properties under various scenarios of data modification. Later development regardedintegrity protectionas an entirely separate cryptographic goal. Some modern modes of operation combineconfidentialityandauthenticityin an efficient way, and are known asauthenticated encryptionmodes.[7]
The earliest modes of operation, ECB, CBC, OFB, and CFB (see below for all), date back to 1981 and were specified inFIPS 81,DES Modes of Operation. In 2001, the USNational Institute of Standards and Technology(NIST) revised its list of approved modes of operation by includingAESas a block cipher and adding CTR mode inSP800-38A,Recommendation for Block Cipher Modes of Operation. Finally, in January, 2010, NIST addedXTS-AESinSP800-38E,Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices. Other confidentiality modes exist which have not been approved by NIST. For example, CTS isciphertext stealingmode and available in many popular cryptographic libraries.
The block cipher modes ECB, CBC, OFB, CFB, CTR, andXTSprovide confidentiality, but they do not protect against accidental modification or malicious tampering. Modification or tampering can be detected with a separatemessage authentication codesuch asCBC-MAC, or adigital signature. The cryptographic community recognized the need for dedicated integrity assurances and NIST responded with HMAC, CMAC, and GMAC.HMACwas approved in 2002 asFIPS 198,The Keyed-Hash Message Authentication Code (HMAC),CMACwas released in 2005 underSP800-38B,Recommendation for Block Cipher Modes of Operation: The CMAC Mode for Authentication, andGMACwas formalized in 2007 underSP800-38D,Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC.
The cryptographic community observed that compositing (combining) a confidentiality mode with an authenticity mode could be difficult and error prone. They therefore began to supply modes which combined confidentiality and data integrity into a single cryptographic primitive (an encryption algorithm). These combined modes are referred to asauthenticated encryption, AE or "authenc". Examples of AE modes areCCM(SP800-38C),GCM(SP800-38D),CWC,EAX,IAPM, andOCB.
Modes of operation are defined by a number of national and internationally recognized standards bodies. Notable standards organizations includeNIST,ISO(with ISO/IEC 10116[5]), theIEC, theIEEE,ANSI, and theIETF.
An initialization vector (IV) or starting variable (SV)[5]is a block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times, without the need for a slower re-keying process.[citation needed]
An initialization vector has different security requirements than a key, so the IV usually does not need to be secret. For most block cipher modes it is important that an initialization vector is never reused under the same key, i.e. it must be a cryptographic nonce. Many block cipher modes have stronger requirements, such as requiring the IV to be random or pseudorandom. Some block ciphers have particular problems with certain initialization vectors, such as an all-zero IV generating no encryption (for some keys).
It is recommended to review the IV requirements for the particular block cipher mode in the relevant specification, for example SP800-38A.
For CBC and CFB, reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages.
For OFB and CTR, reusing an IV causes key bitstream re-use, which breaks security.[8]This can be seen because both modes effectively create a bitstream that is XORed with the plaintext, and this bitstream is dependent on the key and IV only.
In CBC mode, the IV must be unpredictable (random or pseudorandom) at encryption time; in particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0). If an attacker knows the IV (or the previous block of ciphertext) before the next plaintext is specified, they can check their guess about plaintext of some block that was encrypted with the same key before (this is known as the TLS CBC IV attack).[9]
For some keys, an all-zero initialization vector may cause some block cipher modes (CFB-8, OFB-8) to get the internal state stuck at all-zero. For CFB-8, an all-zero IV and an all-zero plaintext cause 1/256 of keys to generate no encryption: the plaintext is returned as ciphertext.[10] For OFB-8, using an all-zero initialization vector will generate no encryption for 1/256 of keys.[11] OFB-8 encryption returns the plaintext unencrypted for affected keys.
Some modes (such as AES-SIV and AES-GCM-SIV) are built to be more nonce-misuse resistant, i.e. resilient to scenarios in which the randomness generation is faulty or under the control of the attacker.
Ablock cipherworks on units of a fixedsize(known as ablock size), but messages come in a variety of lengths. So some modes (namelyECBandCBC) require that the final block be padded before encryption. Severalpaddingschemes exist. The simplest is to addnull bytesto theplaintextto bring its length up to a multiple of the block size, but care must be taken that the original length of the plaintext can be recovered; this is trivial, for example, if the plaintext is aCstylestringwhich contains no null bytes except at the end. Slightly more complex is the originalDESmethod, which is to add a single onebit, followed by enough zerobitsto fill out the block; if the message ends on a block boundary, a whole padding block will be added. Most sophisticated are CBC-specific schemes such asciphertext stealingorresidual block termination, which do not cause any extra ciphertext, at the expense of some additional complexity.SchneierandFergusonsuggest two possibilities, both simple: append a byte with value 128 (hex 80), followed by as many zero bytes as needed to fill the last block, or pad the last block withnbytes all with valuen.
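To make the last two schemes concrete, here is a short Python sketch (illustrative only) of the two padding methods Schneier and Ferguson suggest: a 0x80 byte followed by zero bytes, and n bytes each of value n.

```python
def pad_80(data: bytes, block: int = 16) -> bytes:
    """Append 0x80, then as many 0x00 bytes as needed to fill the block."""
    padded = data + b"\x80"
    padded += b"\x00" * (-len(padded) % block)
    return padded

def pad_n(data: bytes, block: int = 16) -> bytes:
    """Append n bytes, each of value n (1 <= n <= block size)."""
    n = block - (len(data) % block)
    return data + bytes([n]) * n

def unpad_n(padded: bytes) -> bytes:
    n = padded[-1]
    if n == 0 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]

msg = b"SIXTEEN BYTE MSG"          # exactly one block long
assert len(pad_80(msg)) == 32      # a whole extra block of padding is added
assert unpad_n(pad_n(msg)) == msg
```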
CFB, OFB and CTR modes do not require any special measures to handle messages whose lengths are not multiples of the block size, since the modes work byXORingthe plaintext with the output of the block cipher. The last partial block of plaintext is XORed with the first few bytes of the lastkeystreamblock, producing a final ciphertext block that is the same size as the final partial plaintext block. This characteristic of stream ciphers makes them suitable for applications that require the encrypted ciphertext data to be the same size as the original plaintext data, and for applications that transmit data in streaming form where it is inconvenient to add padding bytes.
A number of modes of operation have been designed to combine secrecy and authentication in a single cryptographic primitive. Examples of such modes are extended cipher block chaining (XCBC),[12] integrity-aware cipher block chaining (IACBC), integrity-aware parallelizable mode (IAPM),[13] OCB, EAX, CWC, CCM, and GCM. Authenticated encryption modes are classified as single-pass modes or double-pass modes.
In addition, some modes also allow for the authentication of unencrypted associated data, and these are calledAEAD(authenticated encryption with associated data) schemes. For example, EAX mode is a double-pass AEAD scheme while OCB mode is single-pass.
Galois/counter mode (GCM) combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than encryption algorithms, like CBC, that use chaining.
GCM is defined for block ciphers with a block size of 128 bits. Galois message authentication code (GMAC) is an authentication-only variant of the GCM which can form an incremental message authentication code. Both GCM and GMAC can accept initialization vectors of arbitrary length. GCM can take full advantage of parallel processing and implementing GCM can make efficient use of aninstruction pipelineor a hardware pipeline. The CBC mode of operation incurspipeline stallsthat hamper its efficiency and performance.
Like in CTR, blocks are numbered sequentially, and then this block number is combined with an IV and encrypted with a block cipherE, usually AES. The result of this encryption is then XORed with the plaintext to produce the ciphertext. Like all counter modes, this is essentially a stream cipher, and so it is essential that a different IV is used for each stream that is encrypted.
The ciphertext blocks are considered coefficients of apolynomialwhich is then evaluated at a key-dependent pointH, usingfinite field arithmetic. The result is then encrypted, producing anauthentication tagthat can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag.
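For readers who want to see the mode in use, here is a minimal sketch using the third-party Python cryptography package (an assumption about the reader's environment, not part of the GCM definition); it encrypts a message with associated data and verifies the authentication tag on decryption.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                 # 96-bit IV; must be unique per key
aad = b"header (authenticated, not encrypted)"
ciphertext = aesgcm.encrypt(nonce, b"secret message", aad)  # ciphertext || 16-byte tag

plaintext = aesgcm.decrypt(nonce, ciphertext, aad)          # raises InvalidTag on tampering
assert plaintext == b"secret message"
```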
Counter with cipher block chaining message authentication code(counter with CBC-MAC; CCM) is anauthenticated encryptionalgorithm designed to provide both authentication and confidentiality. CCM mode is only defined for block ciphers with a block length of 128 bits.[14][15]
Synthetic initialization vector (SIV) is a nonce-misuse resistant block cipher mode.
SIV synthesizes an internal IV using the pseudorandom function S2V. S2V is a keyed hash based on CMAC, and the input to the function is:
SIV encrypts the S2V output and the plaintext using AES-CTR, keyed with the encryption key (K2).
SIV can support external nonce-based authenticated encryption, in which case one of the authenticated data fields is utilized for this purpose. RFC 5297[16] specifies that for interoperability purposes the last authenticated data field should be used as the external nonce.
Owing to the use of two keys, the authentication key K1 and the encryption key K2, naming schemes for SIV AEAD-variants may lead to some confusion; for example AEAD_AES_SIV_CMAC_256 refers to AES-SIV with two AES-128 keys and not AES-256.
AES-GCM-SIVis a mode of operation for the Advanced Encryption Standard which provides similar performance to Galois/counter mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.[17]
AES-GCM-SIV synthesizes the internal IV. It derives a hash of the additional authenticated data and plaintext using the POLYVAL Galois hash function. The hash is then encrypted with an AES key and used as the authentication tag and the AES-CTR initialization vector.
AES-GCM-SIV is an improvement over the very similarly named algorithm GCM-SIV, with a few very small changes (e.g. how AES-CTR is initialized) that nevertheless yield practical benefits to its security: "This addition allows for encrypting up to 2⁵⁰ messages with the same key, compared to the significant limitation of only 2³² messages that were allowed with GCM-SIV."[18]
Many modes of operation have been defined. Some of these are described below. The purpose of cipher modes is to mask patterns which exist in encrypted data, as illustrated in the description of theweakness of ECB.
Different cipher modes mask patterns by cascading outputs from the cipher block or other globally deterministic variables into the subsequent cipher block. The inputs of the listed modes are summarized in the following table:
Note:g(i) is any deterministic function, often theidentity function.
The simplest of the encryption modes is theelectronic codebook(ECB) mode (named after conventional physicalcodebooks[19]). The message is divided into blocks, and each block is encrypted separately. ECB is not recommended for use in cryptographic protocols: the disadvantage of this method is a lack ofdiffusion, wherein it fails to hide data patterns when it encrypts identicalplaintextblocks into identicalciphertextblocks.[20][21][22]
A striking example of the degree to which ECB can leave plaintext data patterns in the ciphertext can be seen when ECB mode is used to encrypt abitmap imagewhich contains large areas of uniform color. While the color of each individualpixelhas supposedly been encrypted, the overall image may still be discerned, as the pattern of identically colored pixels in the original remains visible in the encrypted version.
ECB mode can also make protocols without integrity protection even more susceptible toreplay attacks, since each block gets decrypted in exactly the same way.[citation needed]
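A short, illustrative Python sketch (using the third-party cryptography package, an assumption rather than anything required by the text) shows the problem directly: encrypting two identical plaintext blocks in ECB mode yields two identical ciphertext blocks.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

block = b"ATTACK AT DAWN!!"                 # exactly 16 bytes, one AES block
ciphertext = encryptor.update(block + block) + encryptor.finalize()

# The repeated plaintext block is plainly visible as a repeated ciphertext block.
assert ciphertext[:16] == ciphertext[16:32]
```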
Ehrsam, Meyer, Smith and Tuchman invented the cipher block chaining (CBC) mode of operation in 1976. In CBC mode, each block of plaintext is XORed with the previous ciphertext block before being encrypted. This way, each ciphertext block depends on all plaintext blocks processed up to that point. To make each message unique, an initialization vector must be used in the first block.
If the first block has index 1, the mathematical formula for CBC encryption is $C_i = E_K(P_i \oplus C_{i-1})$ with $C_0 = \text{IV}$,
while the mathematical formula for CBC decryption is $P_i = D_K(C_i) \oplus C_{i-1}$.
CBC has been the most commonly used mode of operation. Its main drawbacks are that encryption is sequential (i.e., it cannot be parallelized), and that the message must be padded to a multiple of the cipher block size. One way to handle this last issue is through the method known as ciphertext stealing. Note that a one-bit change in a plaintext or initialization vector (IV) affects all following ciphertext blocks.
Decrypting with the incorrect IV causes the first block of plaintext to be corrupt but subsequent plaintext blocks will be correct. This is because each block is XORed with the ciphertext of the previous block, not the plaintext, so one does not need to decrypt the previous block before using it as the IV for the decryption of the current one. This means that a plaintext block can be recovered from two adjacent blocks of ciphertext. As a consequence, decryptioncanbe parallelized. Note that a one-bit change to the ciphertext causes complete corruption of the corresponding block of plaintext, and inverts the corresponding bit in the following block of plaintext, but the rest of the blocks remain intact. This peculiarity is exploited in different padding oracle attacks, such as POODLE.
Explicit initialization vectorstake advantage of this property by prepending a single random block to the plaintext. Encryption is done as normal, except the IV does not need to be communicated to the decryption routine. Whatever IV decryption uses, only the random block is "corrupted". It can be safely discarded and the rest of the decryption is the original plaintext.
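The chaining equations above translate directly into code. Below is a minimal, illustrative CBC implementation in Python built on a raw AES block operation (via the third-party cryptography package); it is a pedagogical sketch rather than a substitute for a vetted library mode, and it assumes the input is already padded to a multiple of 16 bytes.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    """Encrypt or decrypt a single 16-byte block with raw AES."""
    cipher = Cipher(algorithms.AES(key), modes.ECB())
    ctx = cipher.decryptor() if decrypt else cipher.encryptor()
    return ctx.update(block) + ctx.finalize()

def cbc_encrypt(key: bytes, iv: bytes, padded: bytes) -> bytes:
    out, prev = b"", iv                       # C_0 = IV
    for i in range(0, len(padded), 16):
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], prev))
        prev = aes_block(key, block)          # C_i = E_K(P_i xor C_{i-1})
        out += prev
    return out

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ct), 16):
        block = aes_block(key, ct[i:i + 16], decrypt=True)
        out += bytes(a ^ b for a, b in zip(block, prev))  # P_i = D_K(C_i) xor C_{i-1}
        prev = ct[i:i + 16]
    return out

key, iv = os.urandom(16), os.urandom(16)
msg = b"exactly thirty-two bytes long!!!"     # 32 bytes, already block aligned
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
```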
Thepropagating cipher block chaining[23]orplaintext cipher-block chaining[24]mode was designed to cause small changes in the ciphertext to propagate indefinitely when decrypting, as well as when encrypting. In PCBC mode, each block of plaintext is XORed with both the previous plaintext block and the previous ciphertext block before being encrypted. Like with CBC mode, an initialization vector is used in the first block.
Unlike CBC, decrypting PCBC with the incorrect IV (initialization vector) causes all blocks of plaintext to be corrupt.
Encryption and decryption algorithms are as follows: $C_i = E_K(P_i \oplus P_{i-1} \oplus C_{i-1})$ for encryption and $P_i = D_K(C_i) \oplus P_{i-1} \oplus C_{i-1}$ for decryption, with $P_0 \oplus C_0 = \text{IV}$.
PCBC is used inKerberos v4andWASTE, most notably, but otherwise is not common.
On a message encrypted in PCBC mode, if two adjacent ciphertext blocks are exchanged, this does not affect the decryption of subsequent blocks.[25]For this reason, PCBC is not used in Kerberos v5.
The cipher feedback (CFB) mode, in its simplest form, uses the entire output of the block cipher. In this variation, it is very similar to CBC, turning a block cipher into a self-synchronizing stream cipher. CFB decryption in this variation is almost identical to CBC encryption performed in reverse: $C_i = E_K(C_{i-1}) \oplus P_i$ and $P_i = E_K(C_{i-1}) \oplus C_i$, with $C_0 = \text{IV}$.
NIST SP800-38A defines CFB with a bit-width.[26]The CFB mode also requires an integer parameter, denoted s, such that 1 ≤ s ≤ b. In the specification of the CFB mode below, each plaintext segment (Pj) and ciphertext segment (Cj) consists of s bits. The value of s is sometimes incorporated into the name of the mode, e.g., the 1-bit CFB mode, the 8-bit CFB mode, the 64-bit CFB mode, or the 128-bit CFB mode.
These modes will truncate the output of the underlying block cipher.
CFB-1 is considered self synchronizing and resilient to loss of ciphertext; "When the 1-bit CFB mode is used, then the synchronization is automatically restored b+1 positions after the inserted or deleted bit. For other values of s in the CFB mode, and for the other confidentiality modes in this recommendation, the synchronization must be restored externally." (NIST SP800-38A). I.e. 1-bit loss in a 128-bit-wide block cipher like AES will render 129 invalid bits before emitting valid bits.
CFB may also self synchronize in some special cases other than those specified. For example, a one bit change in CFB-128 with an underlying 128 bit block cipher, will re-synchronize after two blocks. (However, CFB-128 etc. will not handle bit loss gracefully; a one-bit loss will cause the decryptor to lose alignment with the encryptor)
Like CBC mode, changes in the plaintext propagate forever in the ciphertext, and encryption cannot be parallelized. Also like CBC, decryption can be parallelized.
CFB, OFB and CTR share two advantages over CBC mode: the block cipher is only ever used in the encrypting direction, and the message does not need to be padded to a multiple of the cipher block size (thoughciphertext stealingcan also be used for CBC mode to make padding unnecessary).
Theoutput feedback(OFB) mode makes a block cipher into a synchronousstream cipher. It generateskeystreamblocks, which are thenXORedwith the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows manyerror-correcting codesto function normally even when applied before encryption.
Because of the symmetry of the XOR operation, encryption and decryption are exactly the same: $C_j = P_j \oplus O_j$ and $P_j = C_j \oplus O_j$, where the keystream blocks are $O_j = E_K(O_{j-1})$ with $O_0 = \text{IV}$.
Each output feedback block cipher operation depends on all previous ones, and so cannot be performed in parallel. However, because the plaintext or ciphertext is only used for the final XOR, the block cipher operations may be performed in advance, allowing the final step to be performed in parallel once the plaintext or ciphertext is available.
It is possible to obtain an OFB mode keystream by using CBC mode with a constant string of zeroes as input. This can be useful, because it allows the usage of fast hardware implementations of CBC mode for OFB mode encryption.
Using OFB mode with a partial block as feedback like CFB mode reduces the average cycle length by a factor of 232or more. A mathematical model proposed by Davies and Parkin and substantiated by experimental results showed that only with full feedback an average cycle length near to the obtainable maximum can be achieved. For this reason, support for truncated feedback was removed from the specification of OFB.[27]
Like OFB, counter mode turns ablock cipherinto astream cipher. It generates the nextkeystreamblock by encrypting successive values of a "counter". The counter can be any function which produces a sequence which is guaranteed not to repeat for a long time, although an actual increment-by-one counter is the simplest and most popular. The usage of a simple deterministic input function used to be controversial; critics argued that "deliberately exposing a cryptosystem to a known systematic input represents an unnecessary risk".[28]However, today CTR mode is widely accepted, and any problems are considered a weakness of the underlying block cipher, which is expected to be secure regardless of systemic bias in its input.[29]Along with CBC, CTR mode is one of two block cipher modes recommended by Niels Ferguson and Bruce Schneier.[30]
CTR mode was introduced byWhitfield DiffieandMartin Hellmanin 1979.[29]
CTR mode has similar characteristics to OFB, but also allows a random-access property during decryption. CTR mode is well suited to operate on a multi-processor machine, where blocks can be encrypted in parallel. Furthermore, it does not suffer from the short-cycle problem that can affect OFB.[31]
If the IV/nonce is random, then they can be combined with the counter using any invertible operation (concatenation, addition, or XOR) to produce the actual unique counter block for encryption. In case of a non-random nonce (such as a packet counter), the nonce and counter should be concatenated (e.g., storing the nonce in the upper 64 bits and the counter in the lower 64 bits of a 128-bit counter block). Simply adding or XORing the nonce and counter into a single value would break the security under achosen-plaintext attackin many cases, since the attacker may be able to manipulate the entire IV–counter pair to cause a collision. Once an attacker controls the IV–counter pair and plaintext, XOR of the ciphertext with the known plaintext would yield a value that, when XORed with the ciphertext of the other block sharing the same IV–counter pair, would decrypt that block.[32]
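As an illustrative sketch of the concatenation approach just described (toy stand-in cipher; the 64/64-bit split mirrors the example above), the counter block can be built by packing the nonce into the upper half and the block counter into the lower half:

import struct

def toy_block_encrypt(key, block):
    # Stand-in for a real 128-bit block cipher; key must be 16 bytes here.
    return bytes(b ^ k for b, k in zip(block, key))

def ctr_keystream(key, nonce_64bit, n_blocks):
    for counter in range(n_blocks):
        counter_block = struct.pack(">QQ", nonce_64bit, counter)  # nonce || counter, 128 bits
        yield toy_block_encrypt(key, counter_block)  # each keystream block is independent

Because each counter block depends only on the nonce and the block index, the keystream blocks can be computed in any order, or in parallel.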
Note that thenoncein this diagram is equivalent to theinitialization vector(IV) in the other diagrams. However, if the offset/location information is corrupt, it will be impossible to partially recover such data due to the dependence on byte offset.
"Error propagation" properties describe how a decryption behaves during bit errors, i.e. how error in one bit cascades to different decrypted bits.
Bit errors may occur intentionally in attacks or randomly due to transmission errors.
For modernauthenticated encryption(AEAD) or protocols withmessage authentication codeschained in MAC-then-encrypt order, any bit error should abort decryption entirely rather than surface as specific bit errors in the output for the decryptor; that is, if decryption succeeds, the recovered plaintext contains no bit errors. Error propagation is therefore a less important property of modern cipher modes than of traditional confidentiality-only modes.
(Source: SP800-38A Table D.2: Summary of Effect of Bit Errors on Decryption)
It might be observed, for example, that a one-block error in the transmitted ciphertext would result in a one-block error in the reconstructed plaintext for ECB mode encryption, while in CBC mode such an error would affect two blocks. Some felt that such resilience was desirable in the face of random errors (e.g., line noise), while others argued that error correcting increased the scope for attackers to maliciously tamper with a message.
However, when proper integrity protection is used, such an error will result (with high probability) in the entire message being rejected. If resistance to random error is desirable,error-correcting codesshould be applied to the ciphertext before transmission.
Many more modes of operation for block ciphers have been suggested. Some have been accepted, fully described (even standardized), and are in use. Others have been found insecure, and should never be used. Still others don't categorize as confidentiality, authenticity, or authenticated encryption – for example key feedback mode andDavies–Meyerhashing.
NISTmaintains a list of proposed modes for block ciphers atModes Development.[26][33]
Disk encryption often uses special purpose modes specifically designed for the application. Tweakable narrow-block encryption modes (LRW,XEX, andXTS) and wide-block encryption modes (CMCandEME) are designed to securely encrypt sectors of a disk (seedisk encryption theory).
Many modes use an initialization vector (IV) which, depending on the mode, may have requirements such as being only used once (a nonce) or being unpredictable ahead of its publication, etc. Reusing an IV with the same key in CTR, GCM or OFB mode results in XORing the same keystream with two or more plaintexts, a clear misuse of a stream, with a catastrophic loss of security. Deterministic authenticated encryption modes such as the NISTKey Wrapalgorithm and the SIV (RFC 5297) AEAD mode do not require an IV as an input, and return the same ciphertext and authentication tag every time for a given plaintext and key. Other IV misuse-resistant modes such asAES-GCM-SIVbenefit from an IV input, for example in the maximum amount of data that can be safely encrypted with one key, while not failing catastrophically if the same IV is used multiple times.
Block ciphers can also be used in othercryptographic protocols. They are generally used in modes of operation similar to the block modes described here. As with all protocols, to be cryptographically secure, care must be taken to design these modes of operation correctly.
There are several schemes which use a block cipher to build acryptographic hash function. Seeone-way compression functionfor descriptions of several such methods.
Cryptographically secure pseudorandom number generators(CSPRNGs) can also be built using block ciphers.
Message authentication codes(MACs) are often built from block ciphers.CBC-MAC,OMACandPMACare examples.
|
https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
|
TheRSA(Rivest–Shamir–Adleman)cryptosystemis apublic-key cryptosystem, one of the oldest widely used for secure data transmission. Theinitialism"RSA" comes from the surnames ofRon Rivest,Adi ShamirandLeonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 atGovernment Communications Headquarters(GCHQ), the Britishsignals intelligenceagency, by the English mathematicianClifford Cocks. That system wasdeclassifiedin 1997.[2]
In a public-keycryptosystem, theencryption keyis public and distinct from thedecryption key, which is kept secret (private).
An RSA user creates and publishes a public key based on two largeprime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decrypted by someone who knows the private key.[1]
The security of RSA relies on the practical difficulty offactoringthe product of two largeprime numbers, the "factoring problem". Breaking RSA encryption is known as theRSA problem. Whether it is as difficult as the factoring problem is an open question.[3]There are no published methods to defeat the system if a large enough key is used.
RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys forsymmetric-keycryptography, which are then used for bulk encryption–decryption.
The idea of an asymmetric public-private key cryptosystem is attributed toWhitfield DiffieandMartin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.[4]Moreover, likeDiffie-Hellman, RSA is based onmodular exponentiation.
Ron Rivest,Adi Shamir, andLeonard Adlemanat theMassachusetts Institute of Technologymade several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements.[5]In April 1977, they spentPassoverat the house of a student and drank a good deal of wine before returning to their homes at around midnight.[6]Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in same order as their paper.[7]
Clifford Cocks, an Englishmathematicianworking for theBritishintelligence agencyGovernment Communications Headquarters(GCHQ), described a similar system in an internal document in 1973.[8]However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to the work's top-secret classification.
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous tosimplified DES.[9][10][11][12][13]
Apatentdescribing the RSA algorithm was granted toMITon 20 September 1983:U.S. patent 4,405,829"Cryptographic communications system and method". FromDWPI's abstract of the patent:
The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).
A detailed description of the algorithm was published in August 1977, inScientific American'sMathematical Gamescolumn.[7]This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside theUnited States. Had Cocks' work been publicly known, a patent in the United States would not have been legal either.
When the patent was issued,terms of patentwere 17 years. The patent was about to expire on 21 September 2000, butRSA Securityreleased the algorithm to the public domain on 6 September 2000.[14]
The RSA algorithm involves four steps:keygeneration, key distribution, encryption, and decryption.
A basic principle behind RSA is the observation that it is practical to find three very large positive integerse,d, andn, such that for all integersm(0 ≤m<n), both(me)d{\displaystyle (m^{e})^{d}}andm{\displaystyle m}have the sameremainderwhen divided byn{\displaystyle n}(they arecongruent modulon{\displaystyle n}):(me)d≡m(modn).{\displaystyle (m^{e})^{d}\equiv m{\pmod {n}}.}However, when given onlyeandn, it is extremely difficult to findd.
The integersnandecomprise the public key,drepresents the private key, andmrepresents the message. Themodular exponentiationtoeanddcorresponds to encryption and decryption, respectively.
In addition, because the two exponentscan be swapped, the private and public key can also be swapped, allowing for messagesigning and verificationusing the same algorithm.
The keys for the RSA algorithm are generated in the following way:
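The individual steps can be summarized in a minimal, non-production Python sketch of the textbook procedure (choose two distinct primes, form n = pq, compute λ(n), pick e, derive d). The tiny primes are the ones used in the worked example further below, and pow(e, -1, lam) requires Python 3.8 or later.

from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def generate_keypair(p, q, e=17):
    # Textbook RSA key generation with toy-sized primes; real keys use
    # randomly generated primes that are hundreds of digits long.
    n = p * q
    lam = lcm(p - 1, q - 1)   # Carmichael's lambda(n)
    assert gcd(e, lam) == 1   # e must be invertible modulo lambda(n)
    d = pow(e, -1, lam)       # modular inverse of e (Python 3.8+)
    return (n, e), d          # public key, private exponent

public, d = generate_keypair(61, 53)
print(public, d)              # (3233, 17) 413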
Thepublic keyconsists of the modulusnand the public (or encryption) exponente. Theprivate keyconsists of the private (or decryption) exponentd, which must be kept secret.p,q, andλ(n)must also be kept secret because they can be used to calculated. In fact, they can all be discarded afterdhas been computed.[16]
In the original RSA paper,[1]theEuler totient functionφ(n) = (p− 1)(q− 1)is used instead ofλ(n)for calculating the private exponentd. Sinceφ(n)is always divisible byλ(n), the algorithm works as well. The possibility of usingEuler totient functionresults also fromLagrange's theoremapplied to themultiplicative group of integers modulopq. Thus anydsatisfyingd⋅e≡ 1 (modφ(n))also satisfiesd⋅e≡ 1 (modλ(n)). However, computingdmoduloφ(n)will sometimes yield a result that is larger than necessary (i.e.d>λ(n)). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponentdat all, rather than using the optimized decryption methodbased on the Chinese remainder theoremdescribed below), but some standards such asFIPS 186-4(Section B.3.1) may require thatd<λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced moduloλ(n)to obtain a smaller equivalent exponent.
Since any common factors of(p− 1)and(q− 1)are present in the factorisation ofn− 1=pq− 1=(p− 1)(q− 1) + (p− 1) + (q− 1),[17][self-published source?]it is recommended that(p− 1)and(q− 1)have only very small common factors, if any, besides the necessary 2.[1][18][19][failed verification][20][failed verification]
Note: The authors of the original RSA paper carry out the key generation by choosingdand then computingeas themodular multiplicative inverseofdmoduloφ(n), whereas most current implementations of RSA, such as those followingPKCS#1, do the reverse (chooseeand computed). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.[1][21]
Suppose thatBobwants to send information toAlice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message.
To enable Bob to send his encrypted messages, Alice transmits her public key(n,e)to Bob via a reliable, but not necessarily secret, route. Alice's private key(d)is never distributed.
After Bob obtains Alice's public key, he can send a messageMto Alice.
To do it, he first turnsM(strictly speaking, the un-padded plaintext) into an integerm(strictly speaking, thepaddedplaintext), such that0 ≤m<nby using an agreed-upon reversible protocol known as apadding scheme. He then computes the ciphertextc, using Alice's public keye, corresponding to
c≡me(modn).{\displaystyle c\equiv m^{e}{\pmod {n}}.}
This can be done reasonably quickly, even for very large numbers, usingmodular exponentiation. Bob then transmitscto Alice. Note that at least nine values ofmwill yield a ciphertextcequal tom,[a]but this is very unlikely to occur in practice.
Alice can recovermfromcby using her private key exponentdby computing
cd≡(me)d≡m(modn).{\displaystyle c^{d}\equiv (m^{e})^{d}\equiv m{\pmod {n}}.}
Givenm, she can recover the original messageMby reversing the padding scheme.
Here is an example of RSA encryption and decryption:[b]
Thepublic keyis(n= 3233,e= 17). For a paddedplaintextmessagem, the encryption function isc(m)=memodn=m17mod3233.{\displaystyle {\begin{aligned}c(m)&=m^{e}{\bmod {n}}\\&=m^{17}{\bmod {3233}}.\end{aligned}}}
Theprivate keyis(n= 3233,d= 413). For an encryptedciphertextc, the decryption function ism(c)=cdmodn=c413mod3233.{\displaystyle {\begin{aligned}m(c)&=c^{d}{\bmod {n}}\\&=c^{413}{\bmod {3233}}.\end{aligned}}}
For instance, in order to encryptm= 65, one calculatesc=6517mod3233=2790.{\displaystyle c=65^{17}{\bmod {3233}}=2790.}
To decryptc= 2790, one calculatesm=2790413mod3233=65.{\displaystyle m=2790^{413}{\bmod {3233}}=65.}
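These two computations can be checked directly with Python's built-in three-argument pow, which performs modular exponentiation:

n, e, d = 3233, 17, 413
m = 65
c = pow(m, e, n)           # 65**17 mod 3233
assert c == 2790
assert pow(c, d, n) == m   # 2790**413 mod 3233 recovers 65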
Both of these calculations can be computed efficiently using thesquare-and-multiply algorithmformodular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factorn= 3233(obtained from the freely available public key) back to the primespandq. The exponente, also from the public key, is then inverted to getd, thus acquiring the private key.
Practical implementations use theChinese remainder theoremto speed up the calculation using modulus of factors (modpqusing modpand modq).
The valuesdp,dqandqinv, which are part of the private key, are computed as follows:dp=dmod(p−1)=413mod(61−1)=53,dq=dmod(q−1)=413mod(53−1)=49,qinv=q−1modp=53−1mod61=38⇒(qinv×q)modp=38×53mod61=1.{\displaystyle {\begin{aligned}d_{p}&=d{\bmod {(p-1)}}=413{\bmod {(61-1)}}=53,\\d_{q}&=d{\bmod {(q-1)}}=413{\bmod {(53-1)}}=49,\\q_{\text{inv}}&=q^{-1}{\bmod {p}}=53^{-1}{\bmod {61}}=38\\&\Rightarrow (q_{\text{inv}}\times q){\bmod {p}}=38\times 53{\bmod {61}}=1.\end{aligned}}}
Here is howdp,dqandqinvare used for efficient decryption (encryption is efficient by choice of a suitabledandepair):m1=cdpmodp=279053mod61=4,m2=cdqmodq=279049mod53=12,h=(qinv×(m1−m2))modp=(38×−8)mod61=1,m=m2+h×q=12+1×53=65.{\displaystyle {\begin{aligned}m_{1}&=c^{d_{p}}{\bmod {p}}=2790^{53}{\bmod {61}}=4,\\m_{2}&=c^{d_{q}}{\bmod {q}}=2790^{49}{\bmod {53}}=12,\\h&=(q_{\text{inv}}\times (m_{1}-m_{2})){\bmod {p}}=(38\times -8){\bmod {61}}=1,\\m&=m_{2}+h\times q=12+1\times 53=65.\end{aligned}}}
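The same CRT steps can be verified with plain Python integers (a sketch, using the toy key from the example above):

p, q, c = 61, 53, 2790
d_p, d_q, q_inv = 53, 49, 38
m1 = pow(c, d_p, p)           # 4
m2 = pow(c, d_q, q)           # 12
h = (q_inv * (m1 - m2)) % p   # 1
m = m2 + h * q                # 65
print(m1, m2, h, m)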
SupposeAliceusesBob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used tosigna message.
Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces ahash valueof the message, raises it to the power ofd(modulon) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power ofe(modulon) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent.
This works because ofexponentiationrules:h=hash(m),{\displaystyle h=\operatorname {hash} (m),}(he)d=hed=hde=(hd)e≡h(modn).{\displaystyle (h^{e})^{d}=h^{ed}=h^{de}=(h^{d})^{e}\equiv h{\pmod {n}}.}
Thus the keys may be swapped without loss of generality; that is, a private key of a key pair may be used either to decrypt a message encrypted with the corresponding public key (providing confidentiality for the key holder), or to produce a signature on a message that anyone holding the public key can verify (providing authenticity).
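A minimal sketch of this sign-then-verify flow, using the toy key from the example above (textbook RSA with the hash simply reduced modulo n, which is fine for illustration only; real signatures use a padding scheme such as RSA-PSS and a modulus far larger than the hash):

import hashlib

n, e, d = 3233, 17, 413   # toy key from the worked example above

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)   # raise the hash to d, as when decrypting

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h   # raise the signature to e, as when encrypting

assert verify(b"attack at dawn", sign(b"attack at dawn"))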
The proof of the correctness of RSA is based onFermat's little theorem, stating thatap− 1≡ 1 (modp)for any integeraand primep, not dividinga.[note 1]
We want to show that(me)d≡m(modpq){\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}}for every integermwhenpandqare distinct prime numbers andeanddare positive integers satisfyinged≡ 1 (modλ(pq)).
Sinceλ(pq) =lcm(p− 1,q− 1)is, by construction, divisible by bothp− 1andq− 1, we can writeed−1=h(p−1)=k(q−1){\displaystyle ed-1=h(p-1)=k(q-1)}for some nonnegative integershandk.[note 2]
To check whether two numbers, such asmedandm, are congruentmodpq, it suffices (and in fact is equivalent) to check that they are congruentmodpandmodqseparately.[note 3]
To showmed≡m(modp), we consider two cases. Ifm≡ 0 (modp), thenmed≡ 0 ≡m(modp). Otherwisemis not divisible byp, so Fermat's little theorem givesmp− 1≡ 1 (modp), and thereforemed=med− 1·m= (mp− 1)h·m≡ 1h·m≡m(modp).
The verification thatmed≡m(modq)proceeds in a completely analogous way, withq− 1in place ofp− 1.
This completes the proof that, for any integerm, and integerse,dsuch thated≡ 1 (modλ(pq)),(me)d≡m(modpq).{\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}.}
Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead onEuler's theorem.
We want to show thatmed≡m(modn), wheren=pqis a product of two different prime numbers, andeanddare positive integers satisfyinged≡ 1 (modφ(n)). Sinceeanddare positive, we can writeed= 1 +hφ(n)for some non-negative integerh.Assumingthatmis relatively prime ton, we havemed=m1+hφ(n)=m(mφ(n))h≡m(1)h≡m(modn),{\displaystyle m^{ed}=m^{1+h\varphi (n)}=m(m^{\varphi (n)})^{h}\equiv m(1)^{h}\equiv m{\pmod {n}},}
where the second-last congruence follows fromEuler's theorem.
More generally, for anyeanddsatisfyinged≡ 1 (modλ(n)), the same conclusion follows fromCarmichael's generalization of Euler's theorem, which states thatmλ(n)≡ 1 (modn)for allmrelatively prime ton.
Whenmis not relatively prime ton, the argument just given is invalid. This is highly improbable (only a proportion of1/p+ 1/q− 1/(pq)numbers have this property), but even in this case, the desired congruence is still true. Eitherm≡ 0 (modp)orm≡ 0 (modq), and these cases can be treated using the previous proof.
There are a number of attacks against plain RSA as described below.
To avoid these problems, practical RSA implementations typically embed some form of structured, randomizedpaddinginto the valuembefore encrypting it. This padding ensures thatmdoes not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such asPKCS#1have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintextmwith some number of additional bits, the size of the un-padded messageMmust be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, atCrypto1998, Bleichenbacher showed that this version is vulnerable to a practicaladaptive chosen-ciphertext attack. Furthermore, atEurocrypt2000, Coron et al.[25]showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard includeOptimal Asymmetric Encryption Padding(OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS).
Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted (U.S. patent 6,266,771andU.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents.[original research?]Note that using different RSA key pairs for encryption and signing is potentially more secure.[26]
For efficiency, many popular crypto libraries (such asOpenSSL,Javaand.NET) use the following optimization, based on theChinese remainder theorem, for decryption and signing.[27][citation needed]The following values are precomputed and stored as part of the private key:dP=dmod (p− 1),dQ=dmod (q− 1), andqinv=q−1modp.
These values allow the recipient to compute the exponentiationm=cd(modpq)more efficiently as follows:m1=cdP(modp){\displaystyle m_{1}=c^{d_{P}}{\pmod {p}}},m2=cdQ(modq){\displaystyle m_{2}=c^{d_{Q}}{\pmod {q}}},h=qinv(m1−m2)(modp){\displaystyle h=q_{\text{inv}}(m_{1}-m_{2}){\pmod {p}}},[c]m=m2+hq{\displaystyle m=m_{2}+hq}.
This is more efficient than computingexponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
The security of the RSA cryptosystem is based on two mathematical problems: the problem offactoring large numbersand theRSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems arehard, i.e., no efficient algorithm exists for solving them. Providing security againstpartialdecryption may require the addition of a securepadding scheme.[28]
TheRSA problemis defined as the task of takingeth roots modulo a compositen: recovering a valuemsuch thatc≡me(modn), where(n,e)is an RSA public key, andcis an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulusn. With the ability to recover prime factors, an attacker can compute the secret exponentdfrom a public key(n,e), then decryptcusing the standard procedure. To accomplish this, an attacker factorsnintopandq, and computeslcm(p− 1,q− 1)that allows the determination ofdfrome. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; seeinteger factorizationfor a discussion of this problem.
The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[29]By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-coreAthlon64with a 1,900 MHz CPU). Just under 5 gigabytes of disk storage and about 2.5 gigabytes of RAM were required for the sieving process.
Rivest, Shamir, and Adleman noted[1]that Miller has shown that – assuming the truth of theextended Riemann hypothesis– findingdfromnandeis as hard as factoringnintopandq(up to a polynomial time difference).[30]However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.
As of 2020[update], the largest publicly known factoredRSA numberhad 829 bits (250 decimal digits,RSA-250).[31]Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003,RSA Securityestimated that 1024-bit keys were likely to become crackable by 2010.[32]As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[33]It is generally presumed that RSA is secure ifnis sufficiently large, outside of quantum computing.
Ifnis 300bitsor shorter, it can be factored in a few hours on apersonal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999, whenRSA-155was factored by using several hundred computers, and these are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011.[34]A theoretical hardware device namedTWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.[32]
In 1994,Peter Shorshowed that aquantum computer– if one could ever be practically created for the purpose – would be able to factor inpolynomial time, breaking RSA; seeShor's algorithm.
Finding the large primespandqis usually done by testing random numbers of the correct size with probabilisticprimality teststhat quickly eliminate virtually all of the nonprimes.
The numberspandqshould not be "too close", lest theFermat factorizationfornbe successful. Ifp−qis less than2n1/4(n=p⋅q, which even for "small" 1024-bit values ofnis3×1077), solving forpandqis trivial. Furthermore, if eitherp− 1orq− 1has only small prime factors,ncan be factored quickly byPollard'sp− 1 algorithm, and hence such values ofporqshould be discarded.
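A minimal sketch of Fermat factorisation shows why closely spaced primes are dangerous; with the toy modulus from the example above (whose primes are only 8 apart), it succeeds almost immediately:

from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:          # a^2 - n is a perfect square
            return a - b, a + b  # n = (a - b)(a + b)
        a += 1

print(fermat_factor(3233))       # (53, 61)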
It is important that the private exponentdbe large enough. Michael J. Wiener showed that ifpis betweenqand2q(which is quite typical) andd<n1/4/3, thendcan be computed efficiently fromnande.[35]
There is no known attack against small public exponents such ase= 3, provided that the proper padding is used.Coppersmith's attackhas many applications in attacking RSA specifically if the public exponenteis small and if the encrypted message is short and not padded.65537is a commonly used value fore; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponentsesmaller than 65537, but does not state a reason for this restriction.
In October 2017, a team of researchers fromMasaryk Universityannounced theROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library fromInfineonknown as RSALib. A large number ofsmart cardsandtrusted platform modules(TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.[36]
A cryptographically strongrandom number generator, which has been properly seeded with adequate entropy, must be used to generate the primespandq. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 byArjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.[37][38][self-published source?]
They exploited a weakness unique to cryptosystems based on integer factorization. Ifn=pqis one public key, andn′ =p′q′is another, then if by chancep=p′(butqis not equal toq'), then a simple computation ofgcd(n,n′) =pfactors bothnandn', totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to chooseqgivenp, instead of choosingpandqindependently.
Nadia Heningerwas part of a group that did a similar experiment. They used an idea ofDaniel J. Bernsteinto compute the GCD of each RSA keynagainst the product of all the other keysn' they had found (a 729-million-digit number), instead of computing eachgcd(n,n′)separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size.
Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise oratmospheric noisefrom a radio receiver tuned between stations should solve the problem.[39]
Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.
Kocherdescribed a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption keydquickly. This attack can also be applied against the RSA signature scheme. In 2003,BonehandBrumleydemonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from aSecure Sockets Layer(SSL)-enabled webserver).[40]This attack takes advantage of information leaked by theChinese remainder theoremoptimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known ascryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computingcd(modn), Alice first chooses a secret random valuerand computes(rec)d(modn). The result of this computation, after applyingEuler's theorem, isrcd(modn), and so the effect ofrcan be removed by multiplying by its inverse. A new value ofris chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
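A minimal sketch of blinding with the toy key from the example above (textbook RSA without padding; names are illustrative):

import secrets
from math import gcd

n, e, d = 3233, 17, 413

def blinded_decrypt(c: int) -> int:
    while True:
        r = secrets.randbelow(n - 2) + 2     # fresh random blinding factor per ciphertext
        if gcd(r, n) == 1:
            break
    blinded = (pow(r, e, n) * c) % n         # the value actually exponentiated is randomised
    m_blinded = pow(blinded, d, n)           # equals r * m mod n
    return (m_blinded * pow(r, -1, n)) % n   # strip r with its modular inverse (Python 3.8+)

assert blinded_decrypt(pow(65, e, n)) == 65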
In 1998,Daniel Bleichenbacherdescribed the first practicaladaptive chosen-ciphertext attackagainst RSA-encrypted messages using the PKCS #1 v1padding scheme(a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of theSecure Sockets Layerprotocol and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such asOptimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.
A variant of this attack, dubbed "BERserk", came back in 2014.[41][42]It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.
A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use abranch predictorto determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implementsimultaneous multithreading(SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.
Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis",[43]the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.
A power-fault attack on RSA implementations was described in 2010.[44]The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.
There are many details to keep in mind in order to implement RSA securely (strongPRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point the book Practical Cryptography With Go suggests avoiding RSA if possible.[45]
Some cryptography libraries that provide support for RSA include:
|
https://en.wikipedia.org/wiki/RSA_(cryptosystem)
|
In cryptography, theone-time pad(OTP) is anencryptiontechnique that cannot becracked. It requires the use of a single-usepre-shared keythat is larger than or equal to the size of the message being sent. In this technique, aplaintextis paired with a random secretkey(also referred to as aone-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad usingmodular addition.[1]
The resultingciphertextis impossible to decrypt or break if the following four conditions are met:[2][3]the key must be at least as long as the plaintext; the key must be truly random; the key must never be reused, in whole or in part; and the key must be kept completely secret by the communicating parties.
These requirements make the OTP the only known encryption system that is mathematically proven to be unbreakable under the principles of information theory.[4]
Digital versions of one-time pad ciphers have been used by nations for criticaldiplomaticandmilitary communication, but the problems of securekey distributionmake them impractical for many applications.
The concept was first described byFrank Millerin 1882,[5][6]and the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued toGilbert Vernamfor theXORoperation used for the encryption of a one-time pad.[7]One-time use came later, whenJoseph Mauborgnerecognized that if the key tape were totally random, thencryptanalysiswould be impossible.[8]To increase security, one-time pads were sometimes printed onto sheets of highly flammablenitrocellulose, so that they could easily be burned after use.
Frank Millerin 1882 was the first to describe the one-time pad system for securing telegraphy.[6][9]
The next one-time pad system was electrical. In 1917,Gilbert Vernam(ofAT&T Corporation) invented[10]and later patented in 1919 (U.S. patent 1,310,719) a cipher based onteleprintertechnology. Each character in a message was electrically combined with a character on apunched paper tapekey.Joseph Mauborgne(then acaptainin theU.S. Armyand later chief of theSignal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system.[11]
The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimizetelegraphcosts. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-likecodebook. For added security, secret numbers could be combined with (usually modular addition) each code group before transmission, with the secret numbers being changed periodically (this was calledsuperencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. Theserial numberof the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923.[11]
A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below.Leo Marksdescribes inventing such a system for the BritishSpecial Operations ExecutiveduringWorld War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance atBletchley Park.[12]
The final discovery was made by information theoristClaude Shannonin the 1940s who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949.[4]At the same time, Soviet information theoristVladimir Kotelnikovhad independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified.[13]
There also exists a quantum analogue of the one time pad, which can be used to exchangequantum statesalong a one-wayquantum channelwith perfect secrecy, which is sometimes used in quantum computing. It can be shown that a shared secret of at least 2n classical bits is required to exchange an n-qubit quantum state along a one-way quantum channel (by analogue with the result that a key of n bits is required to exchange an n bit message with perfect secrecy). A scheme proposed in 2000 achieves this bound. One way to implement this quantum one-time pad is by dividing the 2n bit key into n pairs of bits. To encrypt the state, for each pair of bits i in the key, one would apply an X gate to qubit i of the state if and only if the first bit of the pair is 1, and apply a Z gate to qubit i of the state if and only if the second bit of the pair is 1. Decryption involves applying this transformation again, since X and Z are their own inverses. This can be shown to be perfectly secret in a quantum setting.[14]
SupposeAlicewishes to send the messagehellotoBob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message".
The material on the selected sheet is thekeyfor this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, toassign each letter a numerical value, e.g.,ais 0,bis 1, and so on.)
In this example, the technique is to combine the key and the message usingmodular addition, not unlike theVigenère cipher. The numerical values of corresponding message and key letters are added together, modulo 26. So, if key material begins withXMCKLand the message ishello, then the coding would be done as follows:
If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A.
The ciphertext to be sent to Bob is thusEQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain theplaintext. Here the key issubtractedfrom the ciphertext, again using modular arithmetic:
Similar to the above, if a number is negative, then 26 is added to make the number zero or higher.
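The whole exchange can be reproduced with a short sketch of the mod-26 arithmetic described above (the function name is illustrative):

def otp(text, key, decrypt=False):
    out = []
    for t, k in zip(text.upper(), key.upper()):
        t, k = ord(t) - ord("A"), ord(k) - ord("A")
        v = (t - k) % 26 if decrypt else (t + k) % 26  # modular subtraction / addition
        out.append(chr(v + ord("A")))
    return "".join(out)

print(otp("hello", "XMCKL"))                 # EQNVZ
print(otp("EQNVZ", "XMCKL", decrypt=True))   # HELLO

Decoding the same ciphertext with a different key, otp("EQNVZ", "TQURI", decrypt=True), yields LATER, which illustrates the ambiguity discussed in the following paragraphs.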
Thus Bob recovers Alice's plaintext, the messagehello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. TheKGBoften issued itsagentsone-time pads printed on tiny sheets of flash paper, paper chemically converted tonitrocellulose, which burns almost instantly and leaves no ash.[15]
The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and somemental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). Theexclusive or(XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key.
To continue the example from above, suppose Eve intercepts Alice's ciphertext:EQNVZ. If Eve tried every possible key, she would find that the keyXMCKLwould produce the plaintexthello, but she would also find that the keyTQURIwould produce the plaintextlater, an equally plausible message:
In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext.[16]
If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some randomincorrectkey also producing two sensible plaintexts are very slim).
One-time pads are "information-theoretically secure" in that the encrypted message (i.e., theciphertext) provides no information about the original message to acryptanalyst(except the maximum possible length[note 1]of the message). This is a very strong notion of security first developed during WWII byClaude Shannonand proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in theBell System Technical Journalin 1949.[17]If properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.
Shannon proved, usinginformation theoreticconsiderations, that the one-time pad has a property he termedperfect secrecy; that is, the ciphertextCgives absolutely no additionalinformationabout theplaintext.[note 2]This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated intoanyplaintext of the same length, and all are equally likely. Thus, thea prioriprobability of a plaintext messageMis the same as thea posterioriprobability of a plaintext messageMgiven the corresponding ciphertext.
Conventionalsymmetric encryption algorithmsuse complex patterns ofsubstitutionandtranspositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or evenpartially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that arethought to be difficultto solve, such asinteger factorizationor thediscrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack.[note 3]
Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message. The parts of the plaintext that are known will revealonlythe parts of the key corresponding to them, and they correspond on astrictly one-to-one basis; a uniformly random key's bits will beindependent.
Quantum cryptographyandpost-quantum cryptographyinvolve studying the impact of quantum computers oninformation security.Quantum computershave been shown byPeter Shorand others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer.
One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker.
Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires truly random key material that is at least as long as all the traffic to be sent, used only once, and securely generated, distributed, stored, and destroyed.
One-time pads solve few current practical problems in cryptography. High-qualityciphersare widely available and their security is not currently considered a major worry.[18]Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller.[16]Additionally,public key cryptographyovercomes the problem of key distribution.
High-quality random numbers are difficult to generate. The random number generation functions in mostprogramming languagelibraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including/dev/randomand manyhardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuringradioactive emissions.[19]
In particular, one-time use is absolutely necessary. For example, ifp1{\displaystyle p_{1}}andp2{\displaystyle p_{2}}represent two distinct plaintext messages and they are each encrypted by a common keyk{\displaystyle k}, then the respective ciphertexts are given by:c1=p1⊕k{\displaystyle c_{1}=p_{1}\oplus k}andc2=p2⊕k{\displaystyle c_{2}=p_{2}\oplus k},
where⊕{\displaystyle \oplus }meansXOR. If an attacker were to have both ciphertextsc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}, then simply taking theXORofc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}yields theXORof the two plaintextsp1⊕p2{\displaystyle p_{1}\oplus p_{2}}. (This is because taking theXORof the common keyk{\displaystyle k}with itself yields a constant bitstream of zeros.)p1⊕p2{\displaystyle p_{1}\oplus p_{2}}is then the equivalent of a running key cipher.[citation needed]
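The cancellation can be seen in a few lines (an illustrative sketch; the pad bytes are arbitrary):

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k = bytes.fromhex("a1b2c3d4e5")     # the (wrongly reused) pad
p1, p2 = b"hello", b"later"
c1, c2 = xor(p1, k), xor(p2, k)
assert xor(c1, c2) == xor(p1, p2)   # the key cancels, leaving plaintext XOR plaintext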
If both plaintexts are in anatural language(e.g., English or Russian), each stands a very high chance of being recovered byheuristiccryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with theVenona project.[20]
Because the pad, like allshared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using a one-time pad, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely).[16]However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the messages' sizes equals the size of the pad.Quantum key distributionalso proposes a solution to this problem, assumingfault-tolerantquantum computers.
Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk.[2]The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such asthumb drives,DVD-Rsor personaldigital audio playerscan be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm2(0.0016 sq in) in size, leaves over 4megabitsof data on each particle.[citation needed]In addition, the risk of compromise during transit (for example, apickpocketswiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such asAES. Finally, the effort needed to manage one-time pad key materialscalesvery badly for large networks of communicants—the number of pads required goes up as thesquareof the number of users freely exchanging messages. For communication between only two persons, or astar networktopology, this is less of a problem.
The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent.[2]Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable toforensic recoverythan the transient plaintext it protects (because of possible data remanence).
As traditionally used, one-time pads provide nomessage authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is cancelled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different frommalleability[21]where the plaintext is not necessarily known. Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar toattacks on stream ciphers.[22]
Standard techniques to prevent this, such as the use of amessage authentication codecan be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable lengthpaddingandRussian copulation, but they all lack the perfect security the OTP itself has.Universal hashingprovides a way to authenticate messages up to an arbitrary security bound (i.e., for anyp> 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less thanp), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer.
Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time-pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information-theoretic security in one or more ways.
Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded inmobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion.
In quantum cryptography, the one-time pad is commonly used in association withquantum key distribution(QKD). QKD is typically associated with the one-time pad because it provides a way of distributing a long shared secret key securely and efficiently (assuming the existence of practicalquantum networkinghardware). A QKD algorithm uses properties of quantum mechanical systems to let two parties agree on a shared, uniformly random string. Algorithms for QKD, such asBB84, are also able to determine whether an adversarial party has been attempting to intercept key material, and allow for a shared secret key to be agreed upon with relatively few messages exchanged and relatively low computational overhead. At a high level, the schemes work by taking advantage of the destructive way quantum states are measured to exchange a secret and detect tampering. In the original BB84 paper, it was proven that the one-time pad, with keys distributed via QKD, is aperfectly secureencryption scheme.[25]However, this result depends on the QKD scheme being implemented correctly in practice. Attacks on real-world QKD systems exist. For instance, many systems do not send a single photon (or other object in the desired quantum state) per bit of the key because of practical limitations, and an attacker could intercept and measure some of the photons associated with a message, gaining information about the key (i.e. leaking information about the pad), while passing along unmeasured photons corresponding to the same bit of the key.[26]Combining QKD with a one-time pad can also loosen the requirements for key reuse. In 1982,BennettandBrassardshowed that if a QKD protocol does not detect that an adversary was trying to intercept an exchanged key, then the key can safely be reused while preserving perfect secrecy.[27]
The one-time pad is an example of post-quantum cryptography, because perfect secrecy is a definition of security that does not depend on the computational resources of the adversary. Consequently, an adversary with a quantum computer would still not be able to gain any more information about a message encrypted with a one time pad than an adversary with just a classical computer.
One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment.[28]TheWeimar RepublicDiplomatic Service began using the method in about 1920. The breaking of poorSovietcryptography by theBritish, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appear to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930.KGBspies are also known to have used pencil and paper one-time pads more recently. Examples include ColonelRudolf Abel, who was arrested and convicted inNew York Cityin the 1950s, and the 'Krogers' (i.e.,MorrisandLona Cohen), who were arrested and convicted of espionage in theUnited Kingdomin the early 1960s. Both were found with physical one-time pads in their possession.
A number of nations have used one-time pad systems for their sensitive traffic.Leo Marksreports that the BritishSpecial Operations Executiveused one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war.[12]A few British one-time tape cipher machines include theRockexandNoreen. The GermanStasiSprach Machine was also capable of using one-time tape, which East Germany, Russia, and even Cuba used to send encrypted messages to their agents.[29]
TheWorld War IIvoicescramblerSIGSALYwas also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used.[30]
ThehotlinebetweenMoscowandWashington, D.C., established in 1963 after the 1962Cuban Missile Crisis, usedteleprintersprotected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other.[31]
U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications.[32]
Starting in 1988, theAfrican National Congress(ANC) used disk-based one-time pads as part of asecure communicationsystem between ANC leaders outsideSouth Africaand in-country operatives as part of Operation Vula,[33]a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian flight attendant acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem.[34]
A related notion is theone-time code—a signal, used only once; e.g., "Alpha" for "mission completed", "Bravo" for "mission failed" or even "Torch" for "Allied invasion of French Northern Africa"[35]cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or sometraffic analysis. However, such strategies (though often used by real operatives, andbaseballcoaches)[36]are not a cryptographic one-time pad in any significant sense.
At least into the 1970s, the U.S.National Security Agency(NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what the NSA called "pro forma" systems, where "the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message." Examples included nuclear launch messages and radio direction finding reports (COMUS).[37]: pp. 16–18
General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece ofcarbon paperwith the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (SeeCommons:Category:NSA one-time padsfor illustrations.)
The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field".[37]: pp. 21–26
During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting againsttraffic analysis. In 1955, NSA produced some 1,660,000 rolls of one time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. By 1972, only 55,000 rolls were produced, as one-time tapes were replaced byrotor machinessuch as SIGTOT, and later by electronic devices based onshift registers.[37]: pp. 39–44The NSA describes one-time tape systems like5-UCOand SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher basedKW-26in 1957.[38]
While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis.
https://en.wikipedia.org/wiki/One-time_pad
Acommitment schemeis acryptographic primitivethat allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later.[1]Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes arebinding. Commitment schemes have important applications in a number ofcryptographic protocolsincluding secure coin flipping,zero-knowledge proofs, andsecure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is calledthe commitment. It is essential that the specific value chosen cannot be extracted from the message by the receiver at that time (this is called thehidingproperty). A simple reveal phase would consist of a single message,the opening, from the sender to the receiver, followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called thebindingproperty).
The concept of commitment schemes was perhaps first formalized byGilles Brassard,David Chaum, andClaude Crépeauin 1988,[2]as part of various zero-knowledge protocols forNP, based on various types of commitment schemes.[3][4]But the concept was used prior to that without being treated formally.[5][6]The notion of commitments appeared earliest in works byManuel Blum,[7]Shimon Even,[8]andAdi Shamiret al.[9]The terminology seems to have been originated by Blum,[6]although commitment schemes can be interchangeably calledbit commitment schemes—sometimes reserved for the special case where the committed value is abit. Prior to that, commitment via one-way hash functions had been considered, for example as part of theLamport signature, the original one-time one-bit signature scheme.
Suppose Alice and Bob want to resolve some dispute viacoin flipping. If they are physically in the same place, a typical procedure might be: Alice calls the coin flip, Bob flips the coin, and if Alice's call is correct she wins, otherwise Bob wins.
If Alice and Bob are not in the same place a problem arises. Once Alice has "called" the coin flip, Bob can stipulate the flip "results" to be whatever is most desirable for him. Similarly, if Alice doesn't announce her "call" to Bob, after Bob flips the coin and announces the result, Alice can report that she called whatever result is most desirable for her. Alice and Bob can use commitments in a procedure that will allow both to trust the outcome: Alice "calls" the coin flip but tells Bob only a commitment to her call; Bob flips the coin and reports the result; Alice then reveals her call, and Bob verifies that the revealed call matches the commitment; Alice wins if her revealed call matches the reported flip.
For Bob to be able to skew the results to his favor, he must be able to understand the call hidden in Alice's commitment. If the commitment scheme is a good one, Bob cannot skew the results. Similarly, Alice cannot affect the result if she cannot change the value she commits to.
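As an illustration only, the following Python sketch plays out the remote coin flip; the helper names and the use of SHA-256 over a random nonce as the commitment are assumptions of this sketch, not a prescribed construction.

import hashlib, os, secrets

def commit(value: str) -> tuple[bytes, bytes]:
    # The opening is a random 32-byte nonce; the commitment is H(nonce || value).
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + value.encode()).digest(), nonce

def verify(commitment: bytes, value: str, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + value.encode()).digest() == commitment

# Commit phase: Alice commits to her call and sends only the commitment to Bob.
alice_call = secrets.choice(["heads", "tails"])
c, opening = commit(alice_call)

# Bob flips the coin and announces the result.
bob_flip = secrets.choice(["heads", "tails"])

# Reveal phase: Alice opens the commitment; Bob checks it before accepting her call.
assert verify(c, alice_call, opening)
print("Alice wins" if alice_call == bob_flip else "Bob wins")

Because Bob sees only the digest before flipping, he learns nothing about Alice's call, and Alice cannot later open the digest to a different call without finding a hash collision.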
A real-life application of this problem exists, when people (often in media) commit to a decision or give an answer in a "sealed envelope", which is then opened later. "Let's find out if that's what the candidate answered", for example on a game show, can serve as a model of this system.
One particular motivating example is the use of commitment schemes inzero-knowledge proofs. Commitments are used in zero-knowledge proofs for two main purposes: first, to allow the prover to participate in "cut and choose" proofs where the verifier will be presented with a choice of what to learn, and the prover will reveal only what corresponds to the verifier's choice. Commitment schemes allow the prover to specify all the information in advance, and only reveal what should be revealed later in the proof.[10]Second, commitments are also used in zero-knowledge proofs by the verifier, who will often specify their choices ahead of time in a commitment. This allows zero-knowledge proofs to be composed in parallel without revealing additional information to the prover.[11]
TheLamport signaturescheme is adigital signaturesystem that relies on maintaining two sets of secretdata packets, publishingverifiable hashesof the data packets, and then selectively revealing partial secret data packets in a manner that conforms specifically to the data to be signed. In this way, the prior public commitment to the secret values becomes a critical part of the functioning of the system.
Because the Lamport signature system cannot be used more than once, a system to combine many Lamport key-sets under a single public value that can be tied to a person and verified by others was developed. This system uses trees ofhashesto compress many published Lamport-key-commitment sets into a single hash value that can be associated with the prospective author of later-verified data.
Another important application of commitments is inverifiable secret sharing, a critical building block ofsecure multiparty computation. In asecret sharingscheme, each of several parties receive "shares" of a value that is meant to be hidden from everyone. If enough parties get together, their shares can be used to reconstruct the secret, but even a malicious cabal of insufficient size should learn nothing. Secret sharing is at the root of many protocols forsecure computation: in order to securely compute a function of some shared input, the secret shares are manipulated instead. However, if shares are to be generated by malicious parties, it may be important that those shares can be checked for correctness. In a verifiable secret sharing scheme, the distribution of a secret is accompanied by commitments to the individual shares. The commitments reveal nothing that can help a dishonest cabal, but the shares allow each individual party to check to see if their shares are correct.[12]
Formal definitions of commitment schemes vary strongly in notation and in flavour. The first such flavour is whether the commitment scheme provides perfect or computational security with respect to the hiding or binding properties. Another such flavour is whether the commitment is interactive, i.e. whether both the commit phase and the reveal phase can be seen as being executed by acryptographic protocolor whether they are non-interactive, consisting of two algorithmsCommitandCheckReveal. In the latter caseCheckRevealcan often be seen as a derandomised version ofCommit, with the randomness used byCommitconstituting the opening information.
If the commitmentCto a valuexis computed asC:=Commit(x,open)withopenbeing the randomness used for computing the commitment, thenCheckReveal (C,x,open)reduces to simply verifying the equationC=Commit (x,open).
Using this notation and some knowledge aboutmathematical functionsandprobability theorywe formalise different versions of the binding and hiding properties of commitments. The two most important combinations of these properties are perfectly binding and computationally hiding commitment schemes and computationally binding and perfectly hiding commitment schemes. Note that no commitment scheme can be at the same time perfectly binding and perfectly hiding – a computationally unbounded adversary can simply generateCommit(x,open)for every value ofxandopenuntil finding a pair that outputsC, and in a perfectly binding scheme this uniquely identifiesx.
Letopenbe chosen from a set of size2k{\displaystyle 2^{k}}, i.e., it can be represented as akbit string, and letCommitk{\displaystyle {\text{Commit}}_{k}}be the corresponding commitment scheme. As the size ofkdetermines the security of the commitment scheme it is called thesecurity parameter.
Then for allnon-uniformprobabilistic polynomial time algorithmsthat outputx,x′{\displaystyle x,x'}andopen,open′{\displaystyle open,open'}of increasing lengthk, the probability thatx≠x′{\displaystyle x\neq x'}andCommitk(x,open)=Commitk(x′,open′){\displaystyle {\text{Commit}}_{k}(x,open)={\text{Commit}}_{k}(x',open')}is anegligible functionink.
This is a form ofasymptotic analysis. It is also possible to state the same requirement usingconcrete security: A commitment schemeCommitis(t,ϵ){\displaystyle (t,\epsilon )}secure, if for all algorithms that run in timetand outputx,x′,open,open′{\displaystyle x,x',open,open'}the probability thatx≠x′{\displaystyle x\neq x'}andCommit(x,open)=Commit(x′,open′){\displaystyle {\text{Commit}}(x,open)={\text{Commit}}(x',open')}is at mostϵ{\displaystyle \epsilon }.
LetUk{\displaystyle U_{k}}be the uniform distribution over the2k{\displaystyle 2^{k}}opening values for security parameterk. A commitment scheme is respectively perfect, statistical, or computational hiding, if for allx≠x′{\displaystyle x\neq x'}theprobability ensembles{Commitk(x,Uk)}k∈N{\displaystyle \{{\text{Commit}}_{k}(x,U_{k})\}_{k\in \mathbb {N} }}and{Commitk(x′,Uk)}k∈N{\displaystyle \{{\text{Commit}}_{k}(x',U_{k})\}_{k\in \mathbb {N} }}are equal,statistically close, orcomputationally indistinguishable.
It is impossible to realize commitment schemes in theuniversal composability(UC) framework. The reason is that UC commitment has to beextractable, as shown by Canetti and Fischlin[13]and explained below.
The ideal commitment functionality, denoted here byF, works roughly as follows. CommitterCsends valuemtoF, which
stores it and sends "receipt" to receiverR. Later,Csends "open" toF, which sendsmtoR.
Now, assume we have a protocolπthat realizes this functionality. Suppose that the committerCis corrupted. In the UC framework, that essentially means thatCis now controlled by the environment, which attempts to distinguish protocol execution from the ideal process. Consider an environment that chooses a messagemand then tellsCto act as prescribed byπ, as if it has committed tom. Note here that in order to realizeF, the receiver must, after receiving a commitment, output a message "receipt". After the environment sees this message, it tellsCto open the commitment.
The protocol is only secure if this scenario is indistinguishable from the ideal case, where the functionality interacts with a simulatorS. Here,Shas control ofC. In particular, wheneverRoutputs "receipt",Fhas to do likewise. The only way to do that is forSto tellCto send a value toF. However, note
that by this point,mis not known toS. Hence, when the commitment is opened during protocol execution, it is unlikely thatFwill open tom, unlessScan extractmfrom the messages it received from the environment beforeRoutputs the receipt.
However a protocol that is extractable in this sense cannot be statistically hiding. Suppose such a simulatorSexists. Now consider an
environment that, instead of corruptingC, corruptsRinstead. Additionally it runs a copy ofS. Messages received fromCare fed intoS, and replies fromSare forwarded toC.
The environment initially tellsCto commit to a messagem. At some point in the interaction,Swill commit to a valuem′. This message is handed toR, who outputsm′. Note that by assumption we havem' = mwith high probability. Now in the ideal process the simulator has to come up withm. But this is impossible, because at this point the commitment has not been opened yet, so the only messageRcan have received in the ideal process is a "receipt" message. We thus have a contradiction.
A commitment scheme can either be perfectly binding (it is impossible for Alice to alter her commitment after she has made it, even if she has unbounded computational resources); or perfectly concealing (it is impossible for Bob to find out the commitment without Alice revealing it, even if he has unbounded computational resources); or formulated as an instance-dependent commitment scheme, which is either hiding or binding depending on the solution to another problem.[14][15]A commitment scheme cannot be both perfectly hiding and perfectly binding at the same time.
Bit-commitment schemes are trivial to construct in therandom oraclemodel. Given ahash functionH with a 3k-bit output, to commit thek-bit messagem, Alice generates a randomk-bit stringRand sends Bob H(R||m). The probability that anyR′,m′exist withm′≠msuch that H(R′||m′) = H(R||m) is ≈ 2^−k, but to test any guess at the messagemBob will need to make 2^k(for an incorrect guess) or 2^(k−1)(on average, for a correct guess) queries to the random oracle.[16]Note that earlier schemes based on hash functions can essentially be thought of as schemes based on an idealization of these hash functions as a random oracle.
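A minimal sketch of this random-oracle construction, with SHA-256 standing in for the random oracle (an idealization; SHA-256 is a fixed function, not a true random oracle) and k = 128 chosen arbitrarily:

import hashlib, os

K_BYTES = 16  # k = 128 bits, an arbitrary illustrative choice

def commit(m: bytes) -> tuple[bytes, bytes]:
    r = os.urandom(K_BYTES)                   # random k-bit string R
    return hashlib.sha256(r + m).digest(), r  # commitment H(R || m), opening R

def check_reveal(c: bytes, m: bytes, r: bytes) -> bool:
    return hashlib.sha256(r + m).digest() == c

c, r = commit(b"attack at dawn")
assert check_reveal(c, b"attack at dawn", r)
assert not check_reveal(c, b"attack at dusk", r)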
One can create a bit-commitment scheme from anyone-way functionthat isinjective. The scheme relies on the fact that every one-way function can be modified (via theGoldreich-Levin theorem) to possess a computationallyhard-core predicate(while retaining the injective property).
Letfbe an injective one-way function, withha hard-core predicate. Then to commit to a bitbAlice picks a random inputxand sends the triple
to Bob, where⊕{\displaystyle \oplus }denotes XOR,i.e., bitwise addition modulo 2. To decommit, Alice simply sendsxto Bob. Bob verifies by computingf(x) and comparing to the committed value. This scheme is concealing because for Bob to recoverbhe must recoverh(x). Sincehis a computationally hard-core predicate, recoveringh(x) fromf(x) with probability greater than one-half is as hard as invertingf. Perfect binding follows from the fact thatfis injective and thusf(x) has exactly one preimage.
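A toy sketch of this idea, using modular exponentiation with a primitive root as a stand-in injective one-way function and the Goldreich–Levin inner-product bit as the hard-core predicate; the parameters are far too small to be secure, and the message format (f(x), r, h(x) XOR b) is one concrete choice for this sketch, not necessarily the exact triple referred to above.

import secrets

# 7 is commonly cited as a primitive root modulo the Mersenne prime 2**31 - 1, so
# f(x) = 7**x mod P is injective on 0 <= x < P - 1. Toy size only, not one-way in practice.
P = 2**31 - 1
G = 7

def f(x: int) -> int:
    return pow(G, x, P)

def h(x: int, r: int) -> int:
    # Goldreich-Levin style predicate: inner product of the bit vectors of x and r, mod 2.
    return bin(x & r).count("1") % 2

def commit(b: int) -> tuple[tuple[int, int, int], int]:
    x = secrets.randbelow(P - 1)
    r = secrets.randbelow(P)                 # randomness defining the hard-core predicate
    return (f(x), r, h(x, r) ^ b), x         # commitment message, opening x

def check_reveal(commitment: tuple[int, int, int], b: int, x: int) -> bool:
    y, r, masked = commitment
    return f(x) == y and (h(x, r) ^ b) == masked

c, opening = commit(1)
assert check_reveal(c, 1, opening)
assert not check_reveal(c, 0, opening)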
Note that since we do not know how to construct a one-way permutation from any one-way function, this section reduces the strength of the cryptographic assumption necessary to construct a bit-commitment protocol.
In 1991 Moni Naor showed how to create a bit-commitment scheme from acryptographically secure pseudorandom number generator.[17]The construction is as follows. IfGis a pseudo-random generator such thatGtakesnbits to 3nbits, then if Alice wants to commit to a bitb: Bob selects a random 3n-bit vectorRand sends it to Alice; Alice then selects a randomn-bit vectorYand computesG(Y); ifb= 0 she sendsG(Y) to Bob, and ifb= 1 she sendsG(Y)⊕Rto Bob.
To decommit Alice sendsYto Bob, who can then check whether he initially receivedG(Y) orG(Y)⊕{\displaystyle \oplus }R.
This scheme is statistically binding, meaning that even if Alice is computationally unbounded she cannot cheat with probability greater than 2^−n. For Alice to cheat, she would need to find aY', such thatG(Y') =G(Y)⊕{\displaystyle \oplus }R. If she could find such a value, she could decommit by sending the truth andY, or send the opposite answer andY'. However,G(Y) andG(Y') are only able to produce 2^npossible values each (that is, 2^2n pairs in total) whileRis picked out of 2^3nvalues. She does not pickR, so there is a 2^2n/2^3n= 2^−nprobability that aY'satisfying the equation required to cheat will exist.
The concealing property follows from a standard reduction, if Bob can tell whether Alice committed to a zero or one, he can also distinguish the output of the pseudo-random generatorGfrom true-random, which contradicts the cryptographic security ofG.
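A sketch of Naor's construction with SHAKE-128 standing in for the pseudorandom generator G (an assumption of this sketch; any PRG stretching n bits to 3n bits would do) and n = 128 bits:

import hashlib, secrets

N = 16  # n = 128 bits (illustrative); the PRG stretches n bytes of seed to 3n bytes

def prg(seed: bytes) -> int:
    # Stand-in pseudorandom generator G: n bytes -> 3n bytes, via SHAKE-128.
    return int.from_bytes(hashlib.shake_128(seed).digest(3 * N), "big")

# Commit phase.
R = secrets.randbits(8 * 3 * N)          # 1. Bob sends Alice a random 3n-bit string R.
b = 1                                    # 2. Alice's secret bit.
Y = secrets.token_bytes(N)               #    Alice picks a random seed Y ...
commitment = prg(Y) ^ (R if b else 0)    #    ... and sends G(Y) if b = 0, else G(Y) xor R.

# Reveal phase: Alice sends Y; Bob recomputes both possibilities.
def recovered_bit(msg: int, Y: bytes, R: int):
    if msg == prg(Y):
        return 0
    if msg == prg(Y) ^ R:
        return 1
    return None                          # invalid opening

assert recovered_bit(commitment, Y, R) == b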
Alice chooses a cyclicgroupof prime orderp, with generatorg.
Alice randomly picks a secret valuexfrom0top− 1 to commit to and calculatesc=gxand publishesc. Thediscrete logarithm problemdictates that fromc, it is computationally infeasible to computex, so under this assumption, Bob cannot computex. On the other hand, Alice cannot compute a valuex′≠x, such thatgx′=c, so the scheme is binding.
This scheme isn't perfectly concealing as someone could recover the committed valuexif he manages to solve thediscrete logarithm problem. In fact, this scheme isn't hiding at all with respect to the standard hiding game, where an adversary should be unable to guess which of two messages of his choosing was committed to, similar to theIND-CPAgame. One consequence of this is that if the space of possible values ofxis small, then an attacker could simply try them all and the commitment would not be hiding.
A better example of a perfectly binding commitment scheme is one where the commitment is the encryption ofxunder asemantically secure, public-key encryption scheme with perfect completeness, and the decommitment is the string of random bits used to encryptx. An example of an information-theoretically hiding commitment scheme is the Pedersen commitment scheme,[18]which is computationally binding under the discrete logarithm assumption.[19]Additionally to the scheme above, it uses another generatorhof the prime group and a random numberr. The commitment is setc=gxhr{\displaystyle c=g^{x}h^{r}}.[20]
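A toy Pedersen commitment in Python over the order-11 subgroup of the multiplicative group modulo 23; the parameters are chosen purely for illustration, and in practice the group is large and h is generated so that no one knows log_g(h).

import secrets

# Toy parameters (far too small to be secure): p = 2q + 1 with p and q prime.
p, q = 23, 11
g, h = 4, 9   # both generate the order-q subgroup; in practice log_g(h) must be unknown

def pedersen_commit(x: int) -> tuple[int, int]:
    r = secrets.randbelow(q)
    c = (pow(g, x, p) * pow(h, r, p)) % p
    return c, r                    # c is sent; (x, r) is kept as the opening

def pedersen_open(c: int, x: int, r: int) -> bool:
    return c == (pow(g, x, p) * pow(h, r, p)) % p

c, r = pedersen_commit(7)
assert pedersen_open(c, 7, r)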
These constructions are tightly related to and based on the algebraic properties of the underlying groups, and the notion originally seemed to be very much related to the algebra. However, it was shown that basing statistically binding commitment schemes on general unstructured assumptions is possible, via the notion of interactive hashing for commitments from general complexity assumptions (specifically and originally, based on any one-way permutation).[21]
Alice selectsN{\displaystyle N}such thatN=p⋅q{\displaystyle N=p\cdot q}, wherep{\displaystyle p}andq{\displaystyle q}are large secret prime numbers. Additionally, she selects a primee{\displaystyle e}such thate>N2{\displaystyle e>N^{2}}andgcd(e,ϕ(N2))=1{\displaystyle gcd(e,\phi (N^{2}))=1}. Alice then computes a public numbergm{\displaystyle g_{m}}as an element of maximum order in theZN2∗{\displaystyle \mathbb {Z} _{N^{2}}^{*}}group.[22]Finally, Alice commits to her secretm{\displaystyle m}by first generating a random numberr{\displaystyle r}fromZN2∗{\displaystyle \mathbb {Z} _{N^{2}}^{*}}and then by computingc=megmr{\displaystyle c=m^{e}g_{m}^{r}}.
The security of the above commitment relies on the hardness of the RSA problem and has perfect hiding and computational binding.[23]
The Pedersen commitment scheme introduces an interesting homomorphic property that allows performing addition between two commitments. More specifically, given two messagesm1{\displaystyle m_{1}}andm2{\displaystyle m_{2}}and randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}, respectively, it is possible to generate a new commitment such that:C(m1,r1)⋅C(m2,r2)=C(m1+m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}+m_{2},r_{1}+r_{2})}. Formally:
C(m1,r1)⋅C(m2,r2)=gm1hr1⋅gm2hr2=gm1+m2hr1+r2=C(m1+m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=g^{m_{1}}h^{r_{1}}\cdot g^{m_{2}}h^{r_{2}}=g^{m_{1}+m_{2}}h^{r_{1}+r_{2}}=C(m_{1}+m_{2},r_{1}+r_{2})}
To open the above Pedersen commitment to a new messagem1+m2{\displaystyle m_{1}+m_{2}}, the randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}have to be added.
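Continuing with the same toy parameters as in the Pedersen sketch above, the additive homomorphism can be checked directly; exponents may be reduced modulo the subgroup order q because g and h both have order q.

# Same toy parameters as above: p = 2q + 1, generators g and h of the order-q subgroup.
p, q = 23, 11
g, h = 4, 9

def commit(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 3, 5
m2, r2 = 4, 9
# Multiplying the commitments yields a commitment to m1 + m2 under randomness r1 + r2.
assert (commit(m1, r1) * commit(m2, r2)) % p == commit((m1 + m2) % q, (r1 + r2) % q)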
Similarly, the RSA-based commitment mentioned above has a homomorphic property with respect to the multiplication operation. Given two messagesm1{\displaystyle m_{1}}andm2{\displaystyle m_{2}}with randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}, respectively, one can compute:C(m1,r1)⋅C(m2,r2)=C(m1⋅m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}\cdot m_{2},r_{1}+r_{2})}. Formally:C(m1,r1)⋅C(m2,r2)=m1egmr1⋅m2egmr2=(m1⋅m2)egmr1+r2=C(m1⋅m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=m_{1}^{e}g_{m}^{r_{1}}\cdot m_{2}^{e}g_{m}^{r_{2}}=(m_{1}\cdot m_{2})^{e}g_{m}^{r_{1}+r_{2}}=C(m_{1}\cdot m_{2},r_{1}+r_{2})}.
To open the above commitment to a new messagem1⋅m2{\displaystyle m_{1}\cdot m_{2}}, the randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}have to be added. This newly generated commitment is distributed similarly to a new commitment tom1⋅m2{\displaystyle m_{1}\cdot m_{2}}.
Some commitment schemes permit a proof to be given of only a portion of the committed value. In these schemes, the secret valueX{\displaystyle X}is a vector of many individually separable values.
The commitmentC{\displaystyle C}is computed fromX{\displaystyle X}in the commit phase. Normally, in the reveal phase, the prover would reveal all ofX{\displaystyle X}and some additional proof data (such asR{\displaystyle R}insimple bit-commitment). Instead, the prover is able to reveal any single value from theX{\displaystyle X}vector, and create an efficient proof that it is the authentici{\displaystyle i}th element of the original vector that created the commitmentC{\displaystyle C}. The proof does not require any values ofX{\displaystyle X}other thanxi{\displaystyle x_{i}}to be revealed, and it is impossible to create valid proofs that reveal different values for any of thexi{\displaystyle x_{i}}than the true one.[24]
Vector hashing is a naive vector commitment partial reveal scheme based on bit-commitment. Valuesm1,m2,...mn{\displaystyle m_{1},m_{2},...m_{n}}are chosen randomly. Individual commitments are created by hashingy1=H(x1||m1),y2=H(x2||m2),...{\displaystyle y_{1}=H(x_{1}||m_{1}),y_{2}=H(x_{2}||m_{2}),...}. The overall commitment is computed as C = H(y1 || y2 || ... || yn), the hash of the concatenation of all of theyi.
In order to prove one element of the vectorX{\displaystyle X}, the prover reveals the valuesxi,mi, and everyyjforj≠i.
The verifier is able to computeyi{\displaystyle y_{i}}fromxi{\displaystyle x_{i}}andmi{\displaystyle m_{i}}, and then is able to verify that the hash of ally{\displaystyle y}values is the commitmentC{\displaystyle C}.
Unfortunately the proof isO(n){\displaystyle O(n)}in size and verification time. Alternately, ifC{\displaystyle C}is the set of ally{\displaystyle y}values, then the commitment isO(n){\displaystyle O(n)}in size, and the proof isO(1){\displaystyle O(1)}in size and verification time. Either way, the commitment or the proof scales withO(n){\displaystyle O(n)}which is not optimal.
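A sketch of this naive vector-hash scheme; the choice of SHA-256, the 16-byte blinding values, and the function names are illustrative assumptions.

import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def vector_commit(xs: list[bytes]):
    ms = [os.urandom(16) for _ in xs]            # per-element blinding values m_i
    ys = [H(x + m) for x, m in zip(xs, ms)]      # y_i = H(x_i || m_i)
    return H(b"".join(ys)), ys, ms               # C = H(y_1 || ... || y_n)

def prove(i: int, xs, ys, ms):
    # Reveal x_i: the proof is m_i plus every other y_j, so it is O(n) in size.
    return xs[i], ms[i], [y for j, y in enumerate(ys) if j != i]

def verify(C: bytes, i: int, x_i: bytes, m_i: bytes, other_ys: list[bytes]) -> bool:
    ys = other_ys[:i] + [H(x_i + m_i)] + other_ys[i:]
    return H(b"".join(ys)) == C

X = [b"a", b"b", b"c", b"d"]
C, ys, ms = vector_commit(X)
x_i, m_i, others = prove(2, X, ys, ms)
assert verify(C, 2, x_i, m_i, others)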
A common example of a practical partial reveal scheme is aMerkle tree, in which a binary hash tree is created of the elements ofX{\displaystyle X}. This scheme creates commitments that areO(1){\displaystyle O(1)}in size, and proofs that areO(log2n){\displaystyle O(\log _{2}{n})}in size and verification time. The root hash of the tree is the commitmentC{\displaystyle C}. To prove that a revealedxi{\displaystyle x_{i}}is part of the original tree, onlylog2n{\displaystyle \log _{2}{n}}hash values from the tree, one from each level, must be revealed as the proof. The verifier is able to follow the path from the claimed leaf node all the way up to the root, hashing in the sibling nodes at each level, and eventually arriving at a root node value that must equalC{\displaystyle C}.[25]
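A compact Merkle-tree commitment with logarithmic proofs, as a sketch; duplicating the last node on odd-sized levels is one common convention, not the only possible one.

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], i: int):
    # Sibling hashes from leaf to root; the flag records whether the sibling sits on the right.
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        proof.append((level[sib], sib > i))
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def merkle_verify(root: bytes, leaf: bytes, proof) -> bool:
    node = H(leaf)
    for sibling, sibling_is_right in proof:
        node = H(node + sibling) if sibling_is_right else H(sibling + node)
    return node == root

X = [b"x0", b"x1", b"x2", b"x3", b"x4"]
root = merkle_root(X)
proof = merkle_proof(X, 3)                     # reveal x_3 with a log-sized proof
assert merkle_verify(root, b"x3", proof)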
A Kate-Zaverucha-Goldberg commitment usespairing-based cryptographyto build a partial reveal scheme withO(1){\displaystyle O(1)}commitment sizes, proof sizes, and proof verification time. In other words, asn{\displaystyle n}, the number of values inX{\displaystyle X}, increases, the commitments and proofs do not get larger, and the proofs do not take any more effort to verify.
A KZG commitment requires a predetermined set of parameters to create apairing, and a trusted trapdoor element. For example, aTate pairingcan be used. Assume thatG1,G2{\displaystyle \mathbb {G} _{1},\mathbb {G} _{2}}are the additive groups, andGT{\displaystyle \mathbb {G} _{T}}is the multiplicative group of the pairing. In other words, the pairing is the mape:G1×G2→GT{\displaystyle e:\mathbb {G} _{1}\times \mathbb {G} _{2}\rightarrow \mathbb {G} _{T}}. Lett∈Fp{\displaystyle t\in \mathbb {F} _{p}}be the trapdoor element (ifp{\displaystyle p}is the prime order ofG1{\displaystyle \mathbb {G} _{1}}andG2{\displaystyle \mathbb {G} _{2}}), and letG{\displaystyle G}andH{\displaystyle H}be the generators ofG1{\displaystyle \mathbb {G} _{1}}andG2{\displaystyle \mathbb {G} _{2}}respectively. As part of the parameter setup, we assume thatG⋅ti{\displaystyle G\cdot t^{i}}andH⋅ti{\displaystyle H\cdot t^{i}}are known and shared values for arbitrarily many positive integer values ofi{\displaystyle i}, while the trapdoor valuet{\displaystyle t}itself is discarded and known to no one.
A KZG commitment reformulates the vector of values to be committed as a polynomial. First, we calculate a polynomial such thatp(i)=xi{\displaystyle p(i)=x_{i}}for all values ofxi{\displaystyle x_{i}}in our vector.Lagrange interpolationallows us to compute that polynomial, for example as p(x) = ∑_i x_i ∏_{j≠i} (x − j) / (i − j).
Under this formulation, the polynomial now encodes the vector, wherep(0)=x0,p(1)=x1,...{\displaystyle p(0)=x_{0},p(1)=x_{1},...}. Letp0,p1,...,pn−1{\displaystyle p_{0},p_{1},...,p_{n-1}}be the coefficients ofp{\displaystyle p}, such thatp(x)=∑i=0n−1pixi{\textstyle p(x)=\sum _{i=0}^{n-1}p_{i}x^{i}}. The commitment is calculated as C = ∑_i p_i ⋅ (G ⋅ t^i).
This is computed simply as adot productbetween the predetermined valuesG⋅ti{\displaystyle G\cdot t^{i}}and the polynomial coefficientspi{\displaystyle p_{i}}. SinceG1{\displaystyle \mathbb {G} _{1}}is an additive group with associativity and commutativity,C{\displaystyle C}is equal to simplyG⋅p(t){\displaystyle G\cdot p(t)}, since all the additions and multiplications withG{\displaystyle G}can be distributed out of the evaluation. Since the trapdoor valuet{\displaystyle t}is unknown, the commitmentC{\displaystyle C}is essentially the polynomial evaluated at a number known to no one, with the outcome obfuscated into an opaque element ofG1{\displaystyle \mathbb {G} _{1}}.
A KZG proof must demonstrate that the revealed data is the authentic value ofxi{\displaystyle x_{i}}whenC{\displaystyle C}was computed. Lety=xi{\displaystyle y=x_{i}}, the revealed value we must prove. Since the vector ofxi{\displaystyle x_{i}}was reformulated into a polynomial, we really need to prove that the polynomialp{\displaystyle p}, when evaluated ati{\displaystyle i}, takes on the valuey{\displaystyle y}. Simply, we just need to prove thatp(i)=y{\displaystyle p(i)=y}. We will do this by demonstrating that subtractingy{\displaystyle y}fromp{\displaystyle p}yields a root ati{\displaystyle i}. Define the polynomialq{\displaystyle q}as q(x) = (p(x) − y) / (x − i).
This polynomial is itself the proof thatp(i)=y{\displaystyle p(i)=y}, because ifq{\displaystyle q}exists, thenp(x)−y{\displaystyle p(x)-y}is divisible byx−i{\displaystyle x-i}, meaning it has a root ati{\displaystyle i}, sop(i)−y=0{\displaystyle p(i)-y=0}(or, in other words,p(i)=y{\displaystyle p(i)=y}). The KZG proof will demonstrate thatq{\displaystyle q}exists and has this property.
The prover computesq{\displaystyle q}through the above polynomial division, then calculates the KZG proof valueπ{\displaystyle \pi } = ∑_i q_i ⋅ (G ⋅ t^i), where theq_iare the coefficients ofq.
This is equal toG⋅q(t){\displaystyle G\cdot q(t)}, as above. In other words, the proof value is the polynomialq{\displaystyle q}again evaluated at the trapdoor valuet{\displaystyle t}, hidden in the generatorG{\displaystyle G}ofG1{\displaystyle \mathbb {G} _{1}}.
This computation is only possible if the above polynomials were evenly divisible, because in that case the quotientq{\displaystyle q}is a polynomial, not arational function. Due to the construction of the trapdoor, it is not possible to evaluate a rational function at the trapdoor value, only to evaluate a polynomial using linear combinations of the precomputed known constants ofG⋅ti{\displaystyle G\cdot t^{i}}. This is why it is impossible to create a proof for an incorrect value ofxi{\displaystyle x_{i}}.
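The algebra behind the commitment, the proof, and the check can be illustrated with plain integer arithmetic, leaving out the elliptic-curve groups and the pairing entirely; this toy therefore has no hiding and no security, and the trapdoor t is visible only because nothing is hidden here.

from fractions import Fraction

t = 37                 # the "trapdoor"; visible only because this is a plain-arithmetic toy
X = [5, 0, 9, 2]       # vector to commit to, encoded as p(0)=5, p(1)=0, p(2)=9, p(3)=2

def poly_mul_linear(poly, j):
    # Multiply a coefficient list (low to high degree) by (x - j).
    out = [Fraction(0)] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k + 1] += c
        out[k] -= j * c
    return out

def interpolate(values):
    # Lagrange interpolation: returns coefficients of p with p(i) = values[i].
    n = len(values)
    total = [Fraction(0)] * n
    for i, v in enumerate(values):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                basis = poly_mul_linear(basis, j)
                denom *= i - j
        for k, c in enumerate(basis):
            total[k] += Fraction(v) * c / denom
    return total

def poly_eval(poly, x):
    return sum(c * x ** k for k, c in enumerate(poly))

def poly_div_linear(poly, i):
    # Synthetic division of poly by (x - i); returns (quotient, remainder).
    d = len(poly) - 1
    quot = [Fraction(0)] * d
    quot[d - 1] = poly[d]
    for k in range(d - 2, -1, -1):
        quot[k] = poly[k + 1] + i * quot[k + 1]
    return quot, poly[0] + i * quot[0]

p = interpolate(X)
C = poly_eval(p, t)                       # stands in for the group element G * p(t)
i, y = 2, X[2]                            # prove that the committed vector has X[2] = 9
q, rem = poly_div_linear([p[0] - y] + p[1:], i)
assert rem == 0                           # (x - i) divides p(x) - y exactly because p(i) = y
proof = poly_eval(q, t)                   # stands in for G * q(t)
# The pairing check e(proof, H*(t - i)) = e(C - G*y, H) reduces to this identity at x = t:
assert proof * (t - i) == C - y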
To verify the proof, the bilinear map of thepairingis used to show that the proof valueπ{\displaystyle \pi }summarizes a real polynomialq{\displaystyle q}that demonstrates the desired property, which is thatp(x)−y{\displaystyle p(x)-y}was evenly divided byx−i{\displaystyle x-i}. The verification computation checks the equality e(π, H⋅t − H⋅i) = e(C − G⋅y, H),
wheree{\displaystyle e}is the bilinear map function as above.H⋅t{\displaystyle H\cdot t}is a precomputed constant,H⋅i{\displaystyle H\cdot i}is computed based oni{\displaystyle i}.
By rewriting the computation in the pairing groupGT{\displaystyle \mathbb {G} _{T}}, substituting inπ=q(t)⋅G{\displaystyle \pi =q(t)\cdot G}andC=p(t)⋅G{\displaystyle C=p(t)\cdot G}, and lettingτ(x)=e(G,H)x{\displaystyle \tau (x)=e(G,H)^{x}}be a helper function for lifting into the pairing group, the proof verification is more clear. The check becomes τ(q(t)⋅(t − i)) = τ(p(t) − y).
Assuming that the bilinear map is validly constructed, this demonstrates thatq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}, without the validator knowing whatp{\displaystyle p}orq{\displaystyle q}are. The validator can be assured of this because ifτ(q(t)⋅(t−i))=τ(p(t)−y){\displaystyle \tau (q(t)\cdot (t-i))=\tau (p(t)-y)}, then the polynomials evaluate to the same output at the trapdoor valuex=t{\displaystyle x=t}. This demonstrates the polynomials are identical, because, if the parameters were validly constructed, the trapdoor value is known to no one, meaning that engineering a polynomial to have a specific value at the trapdoor is impossible (according to theSchwartz–Zippel lemma). Ifq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}is now verified to be true, thenq{\displaystyle q}is verified to exist, thereforep(x)−y{\displaystyle p(x)-y}must be polynomial-divisible by(x−i){\displaystyle (x-i)}, sop(i)−y=0{\displaystyle p(i)-y=0}due to thefactor theorem. This proves that thei{\displaystyle i}th value of the committed vector must have equaledy{\displaystyle y}, since that is the output of evaluating the committed polynomial ati{\displaystyle i}.
The utility of the bilinear map pairing is to allow the multiplication ofq(x){\displaystyle q(x)}byx−i{\displaystyle x-i}to happen securely. These values truly lie inG1{\displaystyle \mathbb {G} _{1}}, where division is assumed to be computationally hard. For example,G1{\displaystyle \mathbb {G} _{1}}might be anelliptic curveover a finite field, as is common inelliptic-curve cryptography. Then, the division assumption is called theelliptic curve discrete logarithm problem[broken anchor], and this assumption is also what guards the trapdoor value from being computed, making it also a foundation of KZG commitments. In that case, we want to check ifq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}. This cannot be done without a pairing, because with values on the curve ofG⋅q(x){\displaystyle G\cdot q(x)}andG⋅(x−i){\displaystyle G\cdot (x-i)}, we cannot computeG⋅(q(x)(x−i)){\displaystyle G\cdot (q(x)(x-i))}. That would violate thecomputational Diffie–Hellman assumption, a foundational assumption inelliptic-curve cryptography. We instead use apairingto sidestep this problem.q(x){\displaystyle q(x)}is still multiplied byG{\displaystyle G}to getG⋅q(x){\displaystyle G\cdot q(x)}, but the other side of the multiplication is done in the paired groupG2{\displaystyle \mathbb {G} _{2}}, so,H⋅(t−i){\displaystyle H\cdot (t-i)}. We computee(G⋅q(t),H⋅(t−i)){\displaystyle e(G\cdot q(t),H\cdot (t-i))}, which, due to thebilinearityof the map, is equal toe(G,H)q(t)⋅(t−i){\displaystyle e(G,H)^{q(t)\cdot (t-i)}}. In this output groupGT{\displaystyle \mathbb {G} _{T}}we still have thediscrete logarithm problem, so even though we know that value ande(G,H){\displaystyle e(G,H)}, we cannot extract the exponentq(t)⋅(t−i){\displaystyle q(t)\cdot (t-i)}, preventing any contradiction with discrete logarithm earlier. This value can be compared toe(G⋅(p(t)−y),H)=e(G,H)p(t)−y{\displaystyle e(G\cdot (p(t)-y),H)=e(G,H)^{p(t)-y}}though, and ife(G,H)q(t)⋅(t−i)=e(G,H)p(t)−y{\displaystyle e(G,H)^{q(t)\cdot (t-i)}=e(G,H)^{p(t)-y}}we are able to conclude thatq(t)⋅(t−i)=p(t)−y{\displaystyle q(t)\cdot (t-i)=p(t)-y}, without ever knowing what the actual value oft{\displaystyle t}is, let aloneq(t)(t−i){\displaystyle q(t)(t-i)}.
Additionally, a KZG commitment can be extended to prove the values of any arbitraryk{\displaystyle k}values ofX{\displaystyle X}(not just one value), with the proof size remainingO(1){\displaystyle O(1)}, but the proof verification time scales withO(k){\displaystyle O(k)}. The proof is the same, but instead of subtracting a constanty{\displaystyle y}, we subtract a polynomial that causes multiple roots, at all the locations we want to prove, and instead of dividing byx−i{\displaystyle x-i}we divide by∏ix−i{\textstyle \prod _{i}x-i}for those same locations.[26]
It is an interesting question inquantum cryptographyifunconditionally securebit commitment protocols exist on the quantum level, that is, protocols which are (at least asymptotically) binding and concealing even if there are no restrictions on the computational resources. One could hope that there might be a way to exploit the intrinsic properties ofquantum mechanics, as in the protocols forunconditionally secure key distribution.
However, this is impossible, as Dominic Mayers showed in 1996 (see[27]for the original proof). Any such protocol can be reduced to a protocol where the system is in one of two pure states after the commitment phase, depending on the bit Alice wants to commit. If the protocol is unconditionally concealing, then Alice can unitarily transform these states into each other using the properties of theSchmidt decomposition, effectively defeating the binding property.
One subtle assumption of the proof is that the commit phase must be finished at some point in time. This leaves room for protocols that require a continuing information flow until the bit is unveiled or the protocol is cancelled, in which case it is not binding anymore.[28]More generally, Mayers' proof applies only to protocols that exploitquantum physicsbut notspecial relativity. Kent has shown that there exist unconditionally secure protocols for bit commitment that exploit the principle ofspecial relativitystating that information cannot travel faster than light.[29]
Physical unclonable functions(PUFs) rely on the use of a physical key with internal randomness, which is hard to clone or to emulate. Electronic, optical and other types of PUFs[30]have been discussed extensively in the literature, in connection with their potential cryptographic applications including commitment schemes.[31][32]
https://en.wikipedia.org/wiki/Commitment_scheme
Inmathematics, anelliptic curveis asmooth,projective,algebraic curveofgenusone, on which there is a specified pointO. An elliptic curve is defined over afieldKand describes points inK2, theCartesian productofKwith itself. If the field'scharacteristicis different from 2 and 3, then the curve can be described as aplane algebraic curvewhich consists of solutions(x,y)for: y² = x³ + ax + b
for some coefficientsaandbinK. The curve is required to benon-singular, which means that the curve has nocuspsorself-intersections. (This is equivalent to the condition4a3+ 27b2≠ 0, that is, beingsquare-freeinx.) It is always understood that the curve is really sitting in theprojective plane, with the pointObeing the uniquepoint at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When thecoefficient fieldhas characteristic 2 or 3, the above equation is not quite general enough to include all non-singularcubic curves; see§ Elliptic curves over a general fieldbelow.)
An elliptic curve is anabelian variety– that is, it has a group law defined algebraically, with respect to which it is anabelian group– andOserves as the identity element.
Ify2=P(x), wherePis any polynomial of degree three inxwith no repeated roots, the solution set is a nonsingular plane curve ofgenusone, an elliptic curve. IfPhas degree four and issquare-freethis equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of twoquadric surfacesembedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity.
Using the theory ofelliptic functions, it can be shown that elliptic curves defined over thecomplex numberscorrespond to embeddings of thetorusinto thecomplex projective plane. The torus is also anabelian group, and this correspondence is also agroup isomorphism.
Elliptic curves are especially important innumber theory, and constitute a major area of current research; for example, they were used inAndrew Wiles's proof of Fermat's Last Theorem. They also find applications inelliptic curve cryptography(ECC) andinteger factorization.
An elliptic curve isnotanellipsein the sense of a projective conic, which has genus zero: seeelliptic integralfor the origin of the term. However, there is a natural representation of real elliptic curves with shape invariantj≥ 1as ellipses in the hyperbolic planeH2{\displaystyle \mathbb {H} ^{2}}. Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses inH2{\displaystyle \mathbb {H} ^{2}}(generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves withj≤ 1, and any ellipse inH2{\displaystyle \mathbb {H} ^{2}}described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve.[1]
Topologically, a complex elliptic curve is atorus, while a complex ellipse is asphere.
Although the formal definition of an elliptic curve requires some background inalgebraic geometry, it is possible to describe some features of elliptic curves over thereal numbersusing only introductoryalgebraandgeometry.
In this context, an elliptic curve is aplane curvedefined by an equation of the form y² = x³ + ax + b
after a linear change of variables (aandbare real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form.
The definition of elliptic curve also requires that the curve benon-singular. Geometrically, this means that the graph has nocusps, self-intersections, orisolated points. Algebraically, this holds if and only if thediscriminant,Δ{\displaystyle \Delta }, is not equal to zero. Here the discriminant is Δ = −16(4a³ + 27b²).
The discriminant is zero whena=−3k2,b=2k3{\displaystyle a=-3k^{2},b=2k^{3}}.
(Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves.)[2]
The real graph of a non-singular curve hastwocomponents if its discriminant is positive, andonecomponent if it is negative. For example, in the graphs shown in figure to the right, the discriminant in the first case is 64, and in the second case is −368. Following the convention atConic section#Discriminant,ellipticcurves require that the discriminant is negative.
When working in theprojective plane, the equation inhomogeneous coordinatesbecomes Y²/Z² = X³/Z³ + aX/Z + b.
This equation is not defined on theline at infinity, but we can multiply byZ3{\displaystyle Z^{3}}to get one that is: ZY² = X³ + aXZ² + bZ³.
This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just positZ=0{\displaystyle Z=0}. This impliesX3=0{\displaystyle X^{3}=0}, which in afieldmeansX=0{\displaystyle X=0}.Y{\displaystyle Y}on the other hand can take any value, and thus all triplets(0,Y,0){\displaystyle (0,Y,0)}satisfy the equation. In projective geometry this set is simply the pointO=[0:1:0]{\displaystyle O=[0:1:0]}, which is thus the unique intersection of the curve with the line at infinity.
Since the curve is smooth, hencecontinuous, it can be shown that this point at infinity is the identity element of agroupstructure whose operation is geometrically described as follows:
Since the curve is symmetric about thexaxis, given any pointP, we can take−Pto be the point opposite it. We then have−O=O{\displaystyle -O=O}, asO{\displaystyle O}lies on theXZplane, so that−O{\displaystyle -O}is also the symmetrical ofO{\displaystyle O}about the origin, and thus represents the same projective point.
IfPandQare two points on the curve, then we can uniquely describe a third pointP+Qin the following way. First, draw the line that intersectsPandQ. This will generally intersect the cubic at a third point,R. We then takeP+Qto be−R, the point oppositeR.
This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points isO. Here, we defineP+O=P=O+P, makingOthe identity of the group. IfP=Q, we only have one point, thus we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect a second pointR, and we can take its opposite. IfPandQare opposites of each other, we defineP+Q=O. Lastly, ifPis aninflection point(a point where the concavity of the curve changes), we takeRto bePitself, andP+Pis simply the point opposite itself, i.e. itself.
LetKbe a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are inK) and denote the curve byE. Then theK-rational pointsofEare the points onEwhose coordinates all lie inK, including the point at infinity. The set ofK-rational points is denoted byE(K).E(K)is a group, because properties of polynomial equations show that ifPis inE(K), then−Pis also inE(K), and if two ofP,Q,Rare inE(K), then so is the third. Additionally, ifKis a subfield ofL, thenE(K)is asubgroupofE(L).
The above groups can be described algebraically as well as geometrically. Given the curvey2=x3+bx+cover the fieldK(whosecharacteristicwe assume to be neither 2 nor 3), and pointsP= (xP,yP)andQ= (xQ,yQ)on the curve, assume first thatxP≠xQ(case1). Lety=sx+dbe the equation of the line that intersectsPandQ, which has the following slope: s = (yP − yQ) / (xP − xQ).
The line equation and the curve equation intersect at the pointsxP,xQ, andxR, so the equations have identicalyvalues at these values:
(sx + d)² = x³ + bx + c,
which is equivalent to
x³ − s²x² + (b − 2sd)x + (c − d²) = 0.
SincexP,xQ, andxRare solutions, this equation has its roots at exactly the samexvalues as
(x − xP)(x − xQ)(x − xR) = x³ − (xP + xQ + xR)x² + (xPxQ + xPxR + xQxR)x − xPxQxR,
and because both equations are cubics, they must be the same polynomial up to a scalar. Thenequating the coefficientsofx²in both equations gives
−s² = −(xP + xQ + xR),
and solving for the unknownxR:
xR = s² − xP − xQ.
yRfollows from the line equation:
yR = yP + s(xR − xP),
and this is an element ofK, becausesis.
IfxP=xQ, then there are two options: ifyP= −yQ(case3), including the case whereyP=yQ= 0(case4), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across thexaxis.
IfyP=yQ≠ 0, thenQ=PandR= (xR,yR) = −(P+P) = −2P= −2Q(case2usingPasR). The slope is given by the tangent to the curve at (xP,yP): s = (3xP² + b) / (2yP).
A more general expression fors{\displaystyle s}that works in both case 1 and case 2 is s = (xP² + xP·xQ + xQ² + b) / (yP + yQ),
where equality to (yP − yQ) / (xP − xQ) relies onPandQobeyingy2=x3+bx+c.
For the curvey2=x3+ax2+bx+c(the general form of an elliptic curve withcharacteristic3), the formulas are similar, withs= (xP² + xP·xQ + xQ² + a·xP + a·xQ + b) / (yP + yQ)andxR= s² −a−xP−xQ.
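The case analysis and formulas above translate directly into code. Below is a sketch of the group law on y² = x³ + bx + c over a small prime field; the curve and prime are chosen purely for illustration.

# Toy curve y^2 = x^3 + b*x + c over F_p; None represents the point at infinity O.
p = 97
b, c = 2, 3

def on_curve(P):
    if P is None:
        return True
    x, y = P
    return (y * y - (x ** 3 + b * x + c)) % p == 0

def neg(P):
    if P is None:
        return None
    x, y = P
    return (x, (-y) % p)

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p     # tangent slope
    else:
        s = (y1 - y2) * pow(x1 - x2, -1, p) % p            # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p                          # negate the third intersection
    return (x3, y3)

# Find a couple of points by brute force and exercise the group law.
points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y))]
P, Q = points[0], points[4]
assert on_curve(add(P, Q)) and on_curve(add(P, P))
assert add(P, neg(P)) is None                              # inverses
assert add(add(P, Q), neg(Q)) == P                         # (P + Q) - Q = P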
For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identityO. In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a pointP,−Pis defined as the unique third point on the line passing throughOandP. Then, for anyPandQ,P+Qis defined as−RwhereRis the unique third point on the line containingPandQ.
For an example of the group law over a non-Weierstrass curve, seeHessian curves.
A curveEdefined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied toE. The explicit formulae show that the sum of two pointsPandQwith rational coordinates has again rational coordinates, since the line joiningPandQhas rational coefficients. This way, one shows that the set of rational points ofEforms a subgroup of the group of real points ofE.
This section is concerned with pointsP= (x,y) ofEsuch thatxis an integer.
For example, the equationy2=x3+ 17 has eight integral solutions withy> 0:[3][4]
As another example,Ljunggren's equation, a curve whose Weierstrass form isy2=x3− 2x, has only four solutions withy≥ 0 :[5]
Rational points can be constructed by the method of tangents and secants detailedabove, starting with afinitenumber of rational points. More precisely[6]theMordell–Weil theoremstates that the groupE(Q) is afinitely generated(abelian) group. By thefundamental theorem of finitely generated abelian groupsit is therefore a finite direct sum of copies ofZand finite cyclic groups.
The proof of the theorem[7]involves two parts. The first part shows that for any integerm> 1, thequotient groupE(Q)/mE(Q) is finite (this is the weak Mordell–Weil theorem). The second part introduces aheight functionhon the rational pointsE(Q), defined byh(P0) = 0 andh(P) = log max(|p|, |q|)ifP(unequal to the point at infinityP0) has asabscissathe rational numberx=p/q(withcoprimepandq). This height functionhhas the property thath(mP) grows roughly like the square ofm. Moreover, only finitely many rational points with height smaller than any constant exist onE.
The proof of the theorem is thus a variant of the method ofinfinite descent[8]and relies on the repeated application ofEuclidean divisionsonE: letP∈E(Q) be a rational point on the curve, writingPas the sum 2P1+Q1whereQ1is a fixed representative ofPinE(Q)/2E(Q), the height ofP1is about1/4of that ofP(more generally, replacing 2 by anym> 1, and1/4by1/m2). Redoing the same withP1, that is to sayP1= 2P2+Q2, thenP2= 2P3+Q3, etc. finally expressesPas an integral linear combination of pointsQiand of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height functionPis thus expressed as an integral linear combination of a finite number of fixed points.
The theorem however doesn't provide a method to determine any representatives ofE(Q)/mE(Q).
TherankofE(Q), that is the number of copies ofZinE(Q) or, equivalently, the number of independent points of infinite order, is called therankofE. TheBirch and Swinnerton-Dyer conjectureis concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is
It has rank 20, found byNoam Elkiesand Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion".[9]
As for the groups constituting thetorsion subgroupofE(Q), the following is known:[10]the torsion subgroup ofE(Q) is one of the 15 following groups (a theoremdue toBarry Mazur):Z/NZforN= 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 12, orZ/2Z×Z/2NZwithN= 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups overQhave the same torsion groups belong to a parametrized family.[11]
TheBirch and Swinnerton-Dyer conjecture(BSD) is one of theMillennium problemsof theClay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question.
At the analytic side, an important ingredient is a function of a complex variable,L, theHasse–Weil zeta functionofEoverQ. This function is a variant of theRiemann zeta functionandDirichlet L-functions. It is defined as anEuler product, with one factor for everyprime numberp.
For a curveEoverQgiven by a minimal equation of the form y² + a₁xy + a₃y = x³ + a₂x² + a₄x + a₆
with integral coefficientsai{\displaystyle a_{i}}, reducing the coefficientsmodulopdefines an elliptic curve over thefinite fieldFp(except for a finite number of primesp, where the reduced curve has asingularityand thus fails to be elliptic, in which caseEis said to be ofbad reductionatp).
The zeta function of an elliptic curve over a finite fieldFpis, in some sense, agenerating functionassembling the information of the number of points ofEwith values in the finitefield extensionsFpnofFp. It is given by[12] Z(E(Fp), T) = exp(∑_{n≥1} |E(F_{p^n})| T^n / n).
The interior sum of the exponential resembles the development of thelogarithmand, in fact, the so-defined zeta function is arational functioninT: Z(E(Fp), T) = (1 − a_p T + pT²) / ((1 − T)(1 − pT)),
where the 'trace of Frobenius' term[13]ap{\displaystyle a_{p}}is defined to be the difference between the 'expected' numberp+1{\displaystyle p+1}and the number of points on the elliptic curveE{\displaystyle E}overFp{\displaystyle \mathbb {F} _{p}}, viz. a_p = p + 1 − |E(Fp)|,
or equivalently, |E(Fp)| = p + 1 − a_p.
We may define the same quantities and functions over an arbitrary finite field of characteristicp{\displaystyle p}, withq=pn{\displaystyle q=p^{n}}replacingp{\displaystyle p}everywhere.
TheL-functionofEoverQis then defined by collecting this information together, for all primesp. It is defined by L(E, s) = ∏_{p ∤ N} (1 − a_p p^{−s} + p^{1−2s})^{−1} ⋅ ∏_{p | N} (1 − a_p p^{−s})^{−1},
whereNis theconductorofE, i.e. the product of primes with bad reduction(Δ(Emodp)=0{\displaystyle (\Delta (E\mod p)=0}),[14]in which caseapis defined differently from the method above: see Silverman (1986) below.
For exampleE:y2=x3+14x+19{\displaystyle E:y^{2}=x^{3}+14x+19}has bad reduction at 17, becauseEmod17:y2=x3−3x+2{\displaystyle E\mod 17:y^{2}=x^{3}-3x+2}hasΔ=0{\displaystyle \Delta =0}.
This productconvergesfor Re(s) > 3/2 only. Hasse's conjecture affirms that theL-function admits ananalytic continuationto the whole complex plane and satisfies afunctional equationrelating, for anys,L(E,s) toL(E, 2 −s). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve overQis amodular curve, which implies that itsL-function is theL-function of amodular formwhose analytic continuation is known. One can therefore speak about the values ofL(E,s) at any complex numbers.
Ats= 1 (the conductor product can be discarded as it is finite), theL-function becomes L(E, 1) = ∏_{p ∤ N} (1 − a_p p^{−1} + p^{−1})^{−1} = ∏_{p ∤ N} p / (p + 1 − a_p).
TheBirch and Swinnerton-Dyer conjecturerelates the arithmetic of the curve to the behaviour of thisL-function ats= 1. It affirms that the vanishing order of theL-function ats= 1 equals the rank ofEand predicts the leading term of the Laurent series ofL(E,s) at that point in terms of several quantities attached to the elliptic curve.
Much like theRiemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two:
LetK=Fqbe thefinite fieldwithqelements andEan elliptic curve defined overK. While the precisenumber of rational points of an elliptic curveEoverKis in general difficult to compute,Hasse's theorem on elliptic curvesgives the following inequality: | |E(K)| − (q + 1) | ≤ 2√q.
In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; seelocal zeta functionandétale cohomologyfor example.
The set of pointsE(Fq) is a finite abelian group. It is always cyclic or the product of two cyclic groups. For example,[17]the curve defined by
overF71has 72 points (71affine pointsincluding (0,0) and onepoint at infinity) over this field, whose group structure is given byZ/2Z×Z/36Z. The number of points on a specific curve can be computed withSchoof's algorithm.
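For very small fields the point count can also be obtained by brute force (Schoof's algorithm is what makes counting feasible at cryptographic sizes). The curve below is an arbitrary illustrative choice, not necessarily the F₇₁ example referred to above.

import math

def count_points(a: int, b: int, p: int) -> int:
    # Brute-force count of points on y^2 = x^3 + a*x + b over F_p, plus the point at infinity.
    sqrt_count = {}
    for y in range(p):
        sqrt_count[y * y % p] = sqrt_count.get(y * y % p, 0) + 1
    count = 1                                   # the point at infinity
    for x in range(p):
        count += sqrt_count.get((x ** 3 + a * x + b) % p, 0)
    return count

n = count_points(-1, 1, 71)                     # an arbitrary non-singular curve over F_71
print(n)
assert abs(n - (71 + 1)) <= 2 * math.sqrt(71)   # Hasse's bound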
Studying the curve over thefield extensionsofFqis facilitated by the introduction of the local zeta function ofEoverFq, defined by a generating series (also see above) Z(E(Fq), T) = exp(∑_{n≥1} |E(Kn)| T^n / n),
where the fieldKnis the (unique up to isomorphism) extension ofK=Fqof degreen(that is,Kn=Fqn{\displaystyle K_{n}=F_{q^{n}}}).
The zeta function is a rational function inT. To see this, consider the integera{\displaystyle a}such that
There is a complex numberα{\displaystyle \alpha }such that
whereα¯{\displaystyle {\bar {\alpha }}}is thecomplex conjugate, and so we have
We chooseα{\displaystyle \alpha }so that itsabsolute valueisq{\displaystyle {\sqrt {q}}}, that isα=q12eiθ,α¯=q12e−iθ{\displaystyle \alpha =q^{\frac {1}{2}}e^{i\theta },{\bar {\alpha }}=q^{\frac {1}{2}}e^{-i\theta }}, and thatcosθ=a2q{\displaystyle \cos \theta ={\frac {a}{2{\sqrt {q}}}}}. Note that|a|≤2q{\displaystyle |a|\leq 2{\sqrt {q}}}.
α{\displaystyle \alpha }can then be used in the local zeta function, since its powers describe the behaviour ofan{\displaystyle a_{n}}asngrows, in that
Using theTaylor series for the natural logarithm,
Then(1−αT)(1−α¯T)=1−aT+qT2{\displaystyle (1-\alpha T)(1-{\bar {\alpha }}T)=1-aT+qT^{2}}, so finally
For example,[18]the zeta function ofE:y2+y=x3over the fieldF2is given by
which follows from:
asq=2{\displaystyle q=2}, then|E|=21+1=3=1−a+2{\displaystyle |E|=2^{1}+1=3=1-a+2}, soa=0{\displaystyle a=0}.
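Concretely, once a = a1 and q are known, the point counts over the extension fields can be obtained without any further counting: writing an = αn + ᾱn, the relation (1 − αT)(1 − ᾱT) = 1 − aT + qT2 above gives the recurrence an = a·an−1 − q·an−2 with a0 = 2, and |E(Fqn)| = qn + 1 − an. A small sketch (Python, written for illustration):

```python
# Sketch: point counts over extension fields from a = a_1 and q alone,
# using a_n = a*a_{n-1} - q*a_{n-2} with a_0 = 2, and |E(F_{q^n})| = q^n + 1 - a_n.

def extension_counts(a: int, q: int, nmax: int):
    counts = []
    a_prev, a_cur = 2, a            # a_0, a_1
    for n in range(1, nmax + 1):
        counts.append(q**n + 1 - a_cur)
        a_prev, a_cur = a_cur, a * a_cur - q * a_prev
    return counts

# For E: y^2 + y = x^3 over F_2 the text gives a = 0 and q = 2:
print(extension_counts(0, 2, 4))    # [3, 9, 9, 9]
```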
Thefunctional equationis
As we are only interested in the behaviour ofan{\displaystyle a_{n}}, we can use a reduced zeta function
and so
which leads directly to the local L-functions
TheSato–Tate conjectureis a statement about how the error term2q{\displaystyle 2{\sqrt {q}}}in Hasse's theorem varies with the different primesq, if an elliptic curve E overQis reduced modulo q. It says that the error terms are equidistributed; this was proven (for almost all such curves) in 2006 through the results of Taylor, Harris and Shepherd-Barron.[19]
Elliptic curves over finite fields are notably applied incryptographyand for thefactorizationof large integers. These algorithms often make use of the group structure on the points ofE. Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields,F*q, can thus be applied to the group of points on an elliptic curve. For example, thediscrete logarithmis such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosingq(and thus the group of units inFq). Also, the group structure of elliptic curves is generally more complicated.
Elliptic curves can be defined over anyfieldK; the formal definition of an elliptic curve is a non-singular projective algebraic curve overKwithgenus1 and endowed with a distinguished point defined overK.
If thecharacteristicofKis neither 2 nor 3, then every elliptic curve overKcan be written in the form
after a linear change of variables. Herepandqare elements ofKsuch that the right hand side polynomialx3−px−qdoes not have any double roots. If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form
for arbitrary constantsb2,b4,b6such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is
provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables.
One typically takes the curve to be the set of all points (x,y) which satisfy the above equation and such that bothxandyare elements of thealgebraic closureofK. Points of the curve whose coordinates both belong toKare calledK-rational points.
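For a quick check that an equation y2 = x3 − px − q over a field of characteristic neither 2 nor 3 really defines an elliptic curve, it suffices to verify that the cubic on the right has no double root, i.e. that 4p3 − 27q2 is nonzero. A minimal sketch (exact rational arithmetic; the function name is illustrative):

```python
from fractions import Fraction

def is_elliptic(p, q) -> bool:
    """True if y^2 = x^3 - p*x - q is non-singular, i.e. x^3 - p*x - q has
    no double root; over characteristic != 2, 3 this means 4p^3 - 27q^2 != 0."""
    return 4 * Fraction(p) ** 3 - 27 * Fraction(q) ** 2 != 0

print(is_elliptic(1, 0))   # True:  y^2 = x^3 - x
print(is_elliptic(3, 2))   # False: x^3 - 3x - 2 = (x+1)^2 (x-2) has a double root
```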
Many of the preceding results remain valid when the field of definition ofEis anumber fieldK, that is to say, a finitefield extensionofQ. In particular, the groupE(K)ofK-rational points of an elliptic curveEdefined overKis finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due toLoïc Merelshows that for a given integerd, there are (up toisomorphism) only finitely many groups that can occur as the torsion groups ofE(K) for an elliptic curve defined over a number fieldKofdegreed. More precisely,[20]there is a numberB(d) such that for any elliptic curveEdefined over a number fieldKof degreed, any torsion point ofE(K) is oforderless thanB(d). The theorem is effective: ford> 1, if a torsion point is of orderp, withpprime, then
As for the integral points, Siegel's theorem generalizes to the following: LetEbe an elliptic curve defined over a number fieldK,xandythe Weierstrass coordinates. Then there are only finitely many points ofE(K)whosex-coordinate is in thering of integersOK.
The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation.
The formulation of elliptic curves as the embedding of atorusin thecomplex projective planefollows naturally from a curious property ofWeierstrass's elliptic functions. These functions and their first derivative are related by the formula
Here,g2andg3are constants;℘(z)is theWeierstrass elliptic functionand℘′(z)its derivative. It should be clear that this relation is in the form of an elliptic curve (over thecomplex numbers). The Weierstrass functions are doubly periodic; that is, they areperiodicwith respect to alatticeΛ; in essence, the Weierstrass functions are naturally defined on a torusT=C/Λ. This torus may be embedded in the complex projective plane by means of the map
This map is agroup isomorphismof the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism ofRiemann surfacesfrom the torus to the cubic curve, so topologically, an elliptic curve is a torus. If the latticeΛis related by multiplication by a non-zero complex numbercto a latticecΛ, then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by thej-invariant.
The isomorphism classes can be understood in a simpler way as well. The constantsg2andg3, called themodular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is thealgebraic closureof the reals. So, the elliptic curve may be written as
One finds that
and
withj-invariantj(τ);λ(τ)is sometimes called themodular lambda function. For example, letτ= 2i; thenλ(2i) = (−1 +√2)⁴, which implies thatg′2,g′3, and thereforeg′2³− 27g′3²in the formula above are allalgebraic numbersifτlies in animaginary quadratic field. In fact, this yields the integerj(2i) = 66³= 287496.
In contrast, themodular discriminant
is generally atranscendental number. In particular, the value of theDedekind eta functionη(2i)is
Note that theuniformization theoremimplies that everycompactRiemann surface of genus one can be represented as a torus. This also allows an easy understanding of thetorsion pointson an elliptic curve: if the latticeΛis spanned by the fundamental periodsω1andω2, then then-torsion points are the (equivalence classes of) points of the form
for integersaandbin the range0 ≤a,b<n.
If
is an elliptic curve over the complex numbers and
then a pair of fundamental periods ofEcan be calculated very rapidly by
M(w,z)is thearithmetic–geometric meanofwandz. At each step of the arithmetic–geometric mean iteration, the signs ofznarising from the ambiguity of geometric mean iterations are chosen such that|wn−zn| ≤ |wn+zn|wherewnandzndenote the individual arithmetic mean and geometric mean iterations ofwandz, respectively. When|wn−zn| = |wn+zn|, there is an additional condition thatIm(zn/wn)> 0.[21]
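The arithmetic–geometric mean itself is easy to compute; the delicate part of the period computation is the choice of square-root signs for complex arguments described above. The sketch below shows only the real, positive case (a hypothetical helper for illustration, not the full complex-valued algorithm):

```python
import math

def agm(w: float, z: float, tol: float = 1e-15) -> float:
    """Arithmetic-geometric mean M(w, z) for positive real w, z."""
    while abs(w - z) > tol * max(abs(w), abs(z)):
        w, z = (w + z) / 2, math.sqrt(w * z)
    return (w + z) / 2

# Classical check: 1 / M(1, sqrt(2)) is Gauss's constant ~ 0.8346268416740731
print(1 / agm(1.0, math.sqrt(2.0)))
```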
Over the complex numbers, every elliptic curve has nineinflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of theHesse configuration.
Given anisogeny
of elliptic curves of degreen{\displaystyle n}, thedual isogenyis an isogeny
of the same degree such that
Here[n]{\displaystyle [n]}denotes the multiplication-by-n{\displaystyle n}isogenye↦ne{\displaystyle e\mapsto ne}which has degreen2.{\displaystyle n^{2}.}
Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition
whereDiv0{\displaystyle \operatorname {Div} ^{0}}is the group ofdivisorsof degree 0. To do this, we need mapsE→Div0(E){\displaystyle E\to \operatorname {Div} ^{0}(E)}given byP→P−O{\displaystyle P\to P-O}whereO{\displaystyle O}is the neutral point ofE{\displaystyle E}, andDiv0(E)→E{\displaystyle \operatorname {Div} ^{0}(E)\to E}given by sending a formal sum∑nPP{\displaystyle \sum n_{P}P}to the point obtained by evaluating that sum with the group law onE{\displaystyle E}.
To see thatf∘f^=[n]{\displaystyle f\circ {\hat {f}}=[n]}, note that the original isogenyf{\displaystyle f}can be written as a composite
and that sincef{\displaystyle f}isfiniteof degreen{\displaystyle n},f∗f∗{\displaystyle f_{*}f^{*}}is multiplication byn{\displaystyle n}onDiv0(E′).{\displaystyle \operatorname {Div} ^{0}(E').}
Alternatively, we can use the smallerPicard groupPic0{\displaystyle \operatorname {Pic} ^{0}}, aquotientofDiv0.{\displaystyle \operatorname {Div} ^{0}.}The mapE→Div0(E){\displaystyle E\to \operatorname {Div} ^{0}(E)}descends to anisomorphism,E→Pic0(E).{\displaystyle E\to \operatorname {Pic} ^{0}(E).}The dual isogeny is
Note that the relationf∘f^=[n]{\displaystyle f\circ {\hat {f}}=[n]}also implies the conjugate relationf^∘f=[n].{\displaystyle {\hat {f}}\circ f=[n].}Indeed, letϕ=f^∘f.{\displaystyle \phi ={\hat {f}}\circ f.}Thenϕ∘f^=f^∘[n]=[n]∘f^.{\displaystyle \phi \circ {\hat {f}}={\hat {f}}\circ [n]=[n]\circ {\hat {f}}.}Butf^{\displaystyle {\hat {f}}}issurjective, so we must haveϕ=[n].{\displaystyle \phi =[n].}
Elliptic curves over finite fields are used in somecryptographicapplications as well as forinteger factorization. Typically, the general idea in these applications is that a knownalgorithmwhich makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. For more see also:
Serge Lang, in the introduction to the book cited below, stated that "It is possible to write endlessly on elliptic curves. (This is not a threat.)" The following short list is thus at best a guide to the vast expository literature available on the theoretical, algorithmic, and cryptographic aspects of elliptic curves.
This article incorporates material from Isogeny onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Elliptic_curve
|
Elliptic-curve cryptography(ECC) is an approach topublic-key cryptographybased on thealgebraic structureofelliptic curvesoverfinite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation inGalois fields, such as theRSA cryptosystemandElGamal cryptosystem.[1]
Elliptic curves are applicable forkey agreement,digital signatures,pseudo-random generatorsand other tasks. Indirectly, they can be used forencryptionby combining the key agreement with asymmetric encryptionscheme. They are also used in severalinteger factorizationalgorithmsthat have applications in cryptography, such asLenstra elliptic-curve factorization.
The use of elliptic curves in cryptography was suggested independently byNeal Koblitz[2]andVictor S. Miller[3]in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4[4]has ten recommended finite fields:
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.[5]
At theRSA Conference2005, theNational Security Agency(NSA) announcedSuite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information.[1]National Institute of Standards and Technology(NIST) has endorsed elliptic curve cryptography in itsSuite Bset of recommended algorithms, specificallyelliptic-curve Diffie–Hellman(ECDH) for key exchange andElliptic Curve Digital Signature Algorithm(ECDSA) for digital signature. The NSA allows their use for protecting information classified up totop secretwith 384-bit keys.[6]
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as theWeilandTate pairings, have been introduced. Schemes based on these primitives provide efficientidentity-based encryptionas well as pairing-based signatures,signcryption,key agreement, andproxy re-encryption.
Elliptic curve cryptography is used successfully in numerous popular protocols, such asTransport Layer SecurityandBitcoin.
In 2013,The New York Timesstated thatDual Elliptic Curve Deterministic Random Bit Generation(or Dual_EC_DRBG) had been included as a NIST national standard due to the influence ofNSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve.[7]RSA Securityin September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG.[8][9]In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves,[10]suggesting a return to encryption based on non-elliptic-curve groups.
Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns aboutquantum computingattacks on ECC.[11][12]
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However,RSA Laboratories[13]andDaniel J. Bernstein[14]have argued that theUS governmentelliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents.
For the purposes of this article, anelliptic curveis aplane curveover afinite field(rather than the real numbers) which consists of the points satisfying the equation
along with a distinguishedpoint at infinity, denoted ∞. The coordinates here are to be chosen from a fixedfinite fieldofcharacteristicnot equal to 2 or 3, or the curve equation would be somewhat more complicated.
This set of points, together with thegroup operation of elliptic curves, is anabelian group, with the point at infinity as an identity element. The structure of the group is inherited from thedivisor groupof the underlyingalgebraic variety:
Public-key cryptographyis based on theintractabilityof certain mathematicalproblems. Early public-key systems, such asRSA's 1983 patent, based their security on the assumption that it is difficult tofactora large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding thediscrete logarithmof a random elliptic curve element with respect to a publicly known base point is infeasible (thecomputational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute apoint multiplicationand the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smallerkey size, reducing storage and transmission requirements.[1]For example, a 256-bit elliptic curve public key should providecomparable securityto a 3072-bit RSA public key.
Severaldiscrete logarithm-based protocols have been adapted to elliptic curves, replacing the group(Zp)×{\displaystyle (\mathbb {Z} _{p})^{\times }}with an elliptic curve:
Some common implementation considerations include:
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, thedomain parametersof the scheme. The size of the field used is typically either prime (and denoted as p) or is a power of two (2m{\displaystyle 2^{m}}); the latter case is calledthe binary case, and this case necessitates the choice of an auxiliary curve denoted byf. Thus the field is defined bypin the prime case and the pair ofmandfin the binary case. The elliptic curve is defined by the constantsaandbused in its defining equation. Finally, the cyclic subgroup is defined by itsgenerator(a.k.a.base point)G. For cryptographic application, theorderofG, that is the smallest positive numbernsuch thatnG=O{\displaystyle nG={\mathcal {O}}}(thepoint at infinityof the curve, and theidentity element), is normally prime. Sincenis the size of a subgroup ofE(Fp){\displaystyle E(\mathbb {F} _{p})}it follows fromLagrange's theoremthat the numberh=1n|E(Fp)|{\displaystyle h={\frac {1}{n}}|E(\mathbb {F} _{p})|}is an integer. In cryptographic applications, this numberh, called thecofactor, must be small (h≤4{\displaystyle h\leq 4}) and, preferably,h=1{\displaystyle h=1}. To summarize: in the prime case, the domain parameters are(p,a,b,G,n,h){\displaystyle (p,a,b,G,n,h)}; in the binary case, they are(m,f,a,b,G,n,h){\displaystyle (m,f,a,b,G,n,h)}.
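To make the roles of these parameters concrete, the sketch below implements the affine group law and double-and-add point multiplication on a deliberately tiny toy curve (y2 = x3 + x + 1 over F23 with base point G = (3, 10)); the curve, point and function names are chosen for illustration only and are nothing like a cryptographic size.

```python
# Toy illustration of point addition and double-and-add scalar multiplication
# on y^2 = x^3 + a*x + b over F_p.  Real domain parameters use ~256-bit primes.

p, a, b = 23, 1, 1           # toy curve y^2 = x^3 + x + 1 over F_23
G = (3, 10)                  # a point on the curve: 10^2 = 8 = 3^3 + 3 + 1 (mod 23)
O = None                     # the point at infinity / identity element

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def mul(k, P):
    """Double-and-add computation of k*P."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

print(add(G, G))    # (7, 12), i.e. 2G
print(mul(9, G))    # 9G, computed with about log2(9) doublings
```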
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parametersmustbe validated before use.
The generation of domain parameters is not usually done by each participant because this involves computingthe number of points on a curvewhich is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the uniqueobject identifierdefined in the standard documents:
SECG test vectors are also available.[17]NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name.
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with an appropriate (i.e., nearly prime) number of points:
Several classes of curves are weak and should be avoided:
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step,Pollard's rho, etc.), needO(n){\displaystyle O({\sqrt {n}})}steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve overFq{\displaystyle \mathbb {F} _{q}}, whereq≈2256{\displaystyle q\approx 2^{256}}. This can be contrasted with finite-field cryptography (e.g.,DSA) which requires[27]3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g.,RSA) which requires a 3072-bit value ofn, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
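The square-root behaviour is easy to see in code. The following sketch is illustrative only: it uses the multiplicative group mod p rather than an elliptic curve group, and tiny numbers, but it is the baby-step giant-step method, which finds a discrete logarithm in roughly √n group operations and √n table entries instead of up to n operations for exhaustive search.

```python
import math

def bsgs(g, h, p):
    """Find x with g^x = h (mod p) in O(sqrt(p)) time and memory
    (baby-step giant-step); p is assumed prime and g a generator."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]           # x = i*m + j
        gamma = gamma * giant % p
    return None

print(bsgs(5, 8, 23))   # 6, since 5^6 = 15625 = 679*23 + 8
```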
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200PlayStation 3game consoles and could have been finished in 3.5 months using this cluster when running continuously.[28]The binary field case was broken in April 2004 using 2600 computers over 17 months.[29]
A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA.[30]
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications inFq{\displaystyle \mathbb {F} _{q}}but also aninversionoperation. Theinversion(for givenx∈Fq{\displaystyle x\in \mathbb {F} _{q}}findy∈Fq{\displaystyle y\in \mathbb {F} _{q}}such thatxy=1{\displaystyle xy=1}) is one to two orders of magnitude slower[31]than multiplication. However, points on a curve can be represented in different coordinate systems which do not require aninversionoperation to add two points. Several such systems were proposed: in theprojectivesystem each point is represented by three coordinates(X,Y,Z){\displaystyle (X,Y,Z)}using the following relation:x=XZ{\displaystyle x={\frac {X}{Z}}},y=YZ{\displaystyle y={\frac {Y}{Z}}}; in theJacobian systema point is also represented with three coordinates(X,Y,Z){\displaystyle (X,Y,Z)}, but a different relation is used:x=XZ2{\displaystyle x={\frac {X}{Z^{2}}}},y=YZ3{\displaystyle y={\frac {Y}{Z^{3}}}}; in theLópez–Dahab systemthe relation isx=XZ{\displaystyle x={\frac {X}{Z}}},y=YZ2{\displaystyle y={\frac {Y}{Z^{2}}}}; in themodified Jacobiansystem the same relations are used but four coordinates are stored and used for calculations(X,Y,Z,aZ4){\displaystyle (X,Y,Z,aZ^{4})}; and in theChudnovsky Jacobiansystem five coordinates are used(X,Y,Z,Z2,Z3){\displaystyle (X,Y,Z,Z^{2},Z^{3})}. Note that there may be different naming conventions, for example,IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.[32]
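The coordinate systems above differ only in how the affine point is recovered. A minimal sketch of the Jacobian convention (x = X/Z², y = Y/Z³), with the single expensive inversion deferred to the end of a computation; the modulus and helper names are illustrative:

```python
p = 23  # toy modulus for illustration; any odd prime works the same way

def to_jacobian(P):
    x, y = P
    return (x, y, 1)                        # Z = 1

def to_affine(P):
    X, Y, Z = P
    z_inv = pow(Z, -1, p)                   # the one inversion
    return (X * z_inv * z_inv % p,          # x = X / Z^2
            Y * z_inv * z_inv * z_inv % p)  # y = Y / Z^3

print(to_affine(to_jacobian((3, 10))))      # (3, 10) round-trips
```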
Reduction modulop(which is needed for addition and multiplication) can be executed much faster if the primepis apseudo-Mersenne prime, that isp≈2d{\displaystyle p\approx 2^{d}}; for example,p=2521−1{\displaystyle p=2^{521}-1}orp=2256−232−29−28−27−26−24−1.{\displaystyle p=2^{256}-2^{32}-2^{9}-2^{8}-2^{7}-2^{6}-2^{4}-1.}Compared toBarrett reduction, there can be an order of magnitude speed-up.[33]The speed-up here is a practical rather than theoretical one, and derives from the fact that reduction modulo numbers close to a power of two can be performed efficiently by computers operating on binary numbers withbitwise operations.
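For the Mersenne prime p = 2^521 − 1 mentioned above (the prime used by the NIST P-521 curve) the reduction is particularly simple: since 2^521 ≡ 1 (mod p), splitting x into its high and low 521-bit parts and adding them is already almost a full reduction. A sketch, illustrative only; production code would be written to run in constant time:

```python
P521 = 2**521 - 1   # a Mersenne prime

def reduce_p521(x: int) -> int:
    """Reduce 0 <= x < p^2 modulo p = 2^521 - 1 without any division."""
    r = (x & P521) + (x >> 521)   # valid because 2^521 = 1 (mod p)
    if r >= P521:
        r -= P521
    return r

x = (P521 - 5) * (P521 - 7)       # a product of two field elements
assert reduce_p521(x) == x % P521
```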
The curves overFp{\displaystyle \mathbb {F} _{p}}with pseudo-Mersennepare recommended by NIST. Yet another advantage of the NIST curves is that they usea= −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast.[34]
Unlike most otherDLPsystems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P=Q) and general addition (P≠Q) depending on the coordinate system used. Consequently, it is important to counteractside-channel attacks(e.g., timing orsimple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods[35](note that this does not increase computation time). Alternatively one can use anEdwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation.[36]Another concern for ECC-systems is the danger offault attacks, especially when running onsmart cards.[37]
Cryptographic experts have expressed concerns that theNational Security Agencyhas inserted akleptographicbackdoor into at least one elliptic curve-based pseudo random generator.[38]Internal memos leaked by former NSA contractorEdward Snowdensuggest that the NSA put a backdoor in theDual EC DRBGstandard.[39]One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.[40]
The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.[41]
Shor's algorithmcan be used to break elliptic curve cryptography by computing discrete logarithms on a hypotheticalquantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330qubitsand 126 billionToffoli gates.[42]For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security).[43]In comparison, using Shor's algorithm to break theRSAalgorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.[44]
Supersingular Isogeny Diffie–Hellman Key Exchangeclaimed to provide apost-quantumsecure form of elliptic curve cryptography by usingisogeniesto implementDiffie–Hellmankey exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems.[45]However, new classical attacks undermined the security of this protocol.[46]
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant toquantumattacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."[11]
When ECC is used invirtual machines, an attacker may use an invalid curve to get a complete PDH private key.[47]
Alternative representations of elliptic curves include:
|
https://en.wikipedia.org/wiki/Elliptic_curve_cryptography
|
Diffie–Hellman(DH)key exchange[nb 1]is a mathematicalmethodof securely generating a symmetriccryptographic keyover a public channel and was one of the firstpublic-key protocolsas conceived byRalph Merkleand named afterWhitfield DiffieandMartin Hellman.[1][2]DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. Published in 1976 by Diffie and Hellman, this is the earliest publicly known work that proposed the idea of a private key and a corresponding public key.
Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trustedcourier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish ashared secretkey over aninsecure channel. This key can then be used to encrypt subsequent communications using asymmetric-keycipher.
Diffie–Hellman is used to secure a variety ofInternetservices. However, research published in October 2015 suggests that the parameters in use for many DH Internet applications at that time are not strong enough to prevent compromise by very well-funded attackers, such as the security services of some countries.[3]
The scheme was published by Whitfield Diffie and Martin Hellman in 1976,[2]but in 1997 it was revealed thatJames H. Ellis,[4]Clifford Cocks, andMalcolm J. WilliamsonofGCHQ, the British signals intelligence agency, had previously shown in 1969[5]how public-key cryptography could be achieved.[6]
Although Diffie–Hellman key exchange itself is a non-authenticatedkey-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provideforward secrecyinTransport Layer Security'sephemeralmodes (referred to as EDH or DHE depending on thecipher suite).
The method was followed shortly afterwards byRSA, an implementation of public-key cryptography using asymmetric algorithms.
Expired US patent 4200770[7]from 1977 describes the nowpublic-domainalgorithm. It credits Hellman, Diffie, and Merkle as inventors.
In 2006, Hellman suggested the algorithm be calledDiffie–Hellman–Merkle key exchangein recognition ofRalph Merkle's contribution to the invention ofpublic-key cryptography(Hellman, 2006), writing:
The system ... has since become known as Diffie–Hellman key exchange. While that system was first described in a paper by Diffie and me, it is a public key distribution system, a concept developed by Merkle, and hence should be called 'Diffie–Hellman–Merkle key exchange' if names are to be associated with it. I hope this small pulpit might help in that endeavor to recognize Merkle's equal contribution to the invention of public key cryptography.[8]
Diffie–Hellman key exchange establishes a shared secret between two parties that can be used for secret communication for exchanging data over a public network. An analogy illustrates the concept of public key exchange by using colors instead of very large numbers:
The process begins by having the two parties,Alice and Bob, publicly agree on an arbitrary starting color that does not need to be kept secret. In this example, the color is yellow. Each person also selects a secret color that they keep to themselves – in this case, red and cyan. The crucial part of the process is that Alice and Bob each mix their own secret color together with their mutually shared color, resulting in orange-tan and light-blue mixtures respectively, and then publicly exchange the two mixed colors. Finally, each of them mixes the color they received from the partner with their own private color. The result is a final color mixture (yellow-brown in this case) that is identical to their partner's final color mixture.
If a third party listened to the exchange, they would only know the common color (yellow) and the first mixed colors (orange-tan and light-blue), but it would be very hard for them to find out the final secret color (yellow-brown). Bringing the analogy back to areal-lifeexchange using large numbers rather than colors, this determination is computationally expensive. It is impossible to compute in a practical amount of time even for modernsupercomputers.
The simplest and the original implementation,[2]later formalized asFinite Field Diffie–Hellmanin RFC 7919,[9]of the protocol uses themultiplicative group of integers modulop, wherepisprime, andgis aprimitive root modulop. To guard against potential vulnerabilities, it is recommended to use prime numbers of at least 2048 bits in length. This increases the difficulty for an adversary attempting to compute the discrete logarithm and compromise the shared secret. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 top− 1. Here is an example of the protocol, with non-secret values inblue, and secret values inred.
Both Alice and Bob have arrived at the same values because under modp,
More specifically,
Onlyaandbare kept secret. All the other values –p,g,gamodp, andgbmodp– are sent in the clear. The strength of the scheme comes from the fact thatgabmodp=gbamodptake extremely long times to compute by any known algorithm just from the knowledge ofp,g,gamodp, andgbmodp. Such a function that is easy to compute but hard to invert is called aone-way function. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel.
Of course, much larger values ofa,b, andpwould be needed to make this example secure, since there are only 23 possible results ofnmod 23. However, ifpis a prime of at least 600 digits, then even the fastest modern computers using the fastest known algorithm cannot findagiven onlyg,pandgamodp. Such a problem is called thediscrete logarithm problem.[3]The computation ofgamodpis known asmodular exponentiationand can be done efficiently even for large numbers.
Note thatgneed not be large at all, and in practice is usually a small integer (like 2, 3, ...).
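A toy run of the exchange with deliberately small numbers (p = 23 and g = 5, consistent with the small example referred to above; real deployments use primes of 2048 bits or more and secrets drawn from a cryptographic random number generator) can be written in a few lines:

```python
# Toy Diffie-Hellman with textbook-sized numbers; illustrative only.
p, g = 23, 5
a = 6                      # Alice's secret
b = 15                     # Bob's secret

A = pow(g, a, p)           # Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)           # Bob   sends B = g^b mod p  -> 19

s_alice = pow(B, a, p)     # Alice computes B^a mod p
s_bob   = pow(A, b, p)     # Bob   computes A^b mod p
assert s_alice == s_bob == 2   # the shared secret g^(ab) mod p
```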
The chart below depicts who knows what, again with non-secret values inblue, and secret values inred. HereEveis aneavesdropper– she watches what is sent between Alice and Bob, but she does not alter the contents of their communications.
Nowsis the shared secret key and it is known to both Alice and Bob, butnotto Eve. Note that it is not helpful for Eve to computeAB, which equalsga+bmodp.
Note: It should be difficult for Alice to solve for Bob's private key or for Bob to solve for Alice's private key. If it is not difficult for Alice to solve for Bob's private key (or vice versa), then an eavesdropper,Eve, may simply substitute her own private / public key pair, plug Bob's public key into her private key, produce a fake shared secret key, and solve for Bob's private key (and use that to solve for the shared secret key).Evemay attempt to choose a public / private key pair that will make it easy for her to solve for Bob's private key.
Here is a more general description of the protocol:[10]
Both Alice and Bob are now in possession of the group elementgab=gba, which can serve as the shared secret key. The groupGsatisfies the requisite condition forsecure communicationas long as there is no efficient algorithm for determininggabgiveng,ga, andgb.
For example, theelliptic curve Diffie–Hellmanprotocol is a variant that represents an element of G as a point on an elliptic curve instead of as an integer modulo n. Variants usinghyperelliptic curveshave also been proposed. Thesupersingular isogeny key exchangeis a Diffie–Hellman variant that was designed to be secure againstquantum computers, but it was broken in July 2022.[11]
The keys used can be either ephemeral or static (long-term), or even mixed, so-called semi-static DH. These variants have different properties and hence different use cases. An overview of many variants, with some discussion, can for example be found in NIST SP 800-56A.[12]A basic list:
It is possible to use ephemeral and static keys in one key agreement to provide more security, as for example shown in NIST SP 800-56A; it is also possible to combine them in a single DH key exchange, which is then called triple DH (3-DH).
A kind of triple DH was proposed by Simon Blake-Wilson, Don Johnson, and Alfred Menezes in 1997,[13]which was improved by C. Kudla and K. G. Paterson in 2005[14]and shown to be secure.
The long term secret keys of Alice and Bob are denoted byaandbrespectively, with public keysAandB, as well as the ephemeral key pairs (x,X) and (y,Y). The protocol is then:
The long term public keys need to be transferred somehow. That can be done beforehand in a separate, trusted channel, or the public keys can be encrypted using some partial key agreement to preserve anonymity. For more of such details as well as other improvements likeside channel protectionor explicitkey confirmation, as well as early messages and additional password authentication, see e.g. US patent "Advanced modular handshake for key agreement and optional authentication".[15]
X3DH was initially proposed as part of theDouble Ratchet Algorithmused in theSignal Protocol. The protocol offers forward secrecy and cryptographic deniability. It operates on an elliptic curve.[16]
The protocol uses five public keys. Alice has an identity key IKAand an ephemeral key EKA. Bob has an identity key IKB, a signed prekey SPKB, and a one-time prekey OPKB.[16]Bob first publishes his three keys to a server, which Alice downloads and verifies the signature on. Alice then initiates the exchange to Bob.[16]The OPK is optional.[16]
Diffie–Hellman key agreement is not limited to negotiating a key shared by only two participants. Any number of users can take part in an agreement by performing iterations of the agreement protocol and exchanging intermediate data (which does not itself need to be kept secret). For example, Alice, Bob, and Carol could participate in a Diffie–Hellman agreement as follows, with all operations taken to be modulop:
An eavesdropper has been able to seegamodp,gbmodp,gcmodp,gabmodp,gacmodp, andgbcmodp, but cannot use any combination of these to efficiently reproducegabcmodp.
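The three-party agreement described above is a direct extension of the two-party case: each participant raises the intermediate value it receives to its own secret exponent. A toy sketch with the same small illustrative parameters:

```python
p, g = 23, 5
a, b, c = 6, 15, 13                  # Alice's, Bob's and Carol's secrets

# Round 1: g^a, g^b, g^c are published.
ga, gb, gc = (pow(g, x, p) for x in (a, b, c))

# Round 2: each party exponentiates another party's round-1 value.
gab = pow(ga, b, p)                  # Bob   publishes g^(ab) mod p
gbc = pow(gb, c, p)                  # Carol publishes g^(bc) mod p
gca = pow(gc, a, p)                  # Alice publishes g^(ca) mod p

# Each participant finishes with their own secret exponent:
k_carol = pow(gab, c, p)
k_alice = pow(gbc, a, p)
k_bob   = pow(gca, b, p)
assert k_alice == k_bob == k_carol   # all equal g^(abc) mod p
```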
To extend this mechanism to larger groups, two basic principles must be followed:
These principles leave open various options for choosing in which order participants contribute to keys. The simplest and most obvious solution is to arrange theNparticipants in a circle and haveNkeys rotate around the circle, until eventually every key has been contributed to by allNparticipants (ending with its owner) and each participant has contributed toNkeys (ending with their own). However, this requires that every participant performNmodular exponentiations.
By choosing a more desirable order, and relying on the fact that keys can be duplicated, it is possible to reduce the number of modular exponentiations performed by each participant tolog2(N) + 1using adivide-and-conquer-styleapproach, given here for eight participants:
Once this operation has been completed all participants will possess the secretgabcdefgh, but each participant will have performed only four modular exponentiations, rather than the eight implied by a simple circular arrangement.
The protocol is considered secure against eavesdroppers ifGandgare chosen properly. In particular, the order of the group G must be large, particularly if the same group is used for large amounts of traffic. The eavesdropper has to solve theDiffie–Hellman problemto obtaingab. This is currently considered difficult for groups whose order is large enough. An efficient algorithm to solve thediscrete logarithm problemwould make it easy to computeaorband solve the Diffie–Hellman problem, making this and many other public key cryptosystems insecure. Fields of small characteristic may be less secure.[17]
TheorderofGshould have a large prime factor to prevent use of thePohlig–Hellman algorithmto obtainaorb. For this reason, aSophie Germain primeqis sometimes used to calculatep= 2q+ 1, called asafe prime, since the order ofGis then only divisible by 2 andq. Sometimesgis chosen to generate the orderqsubgroup ofG, rather thanG, so that theLegendre symbolofganever reveals the low order bit ofa. A protocol using such a choice is for exampleIKEv2.[18]
The generatorgis often a small integer such as 2. Because of therandom self-reducibilityof the discrete logarithm problem a smallgis equally secure as any other generator of the same group.
If Alice and Bob userandom number generatorswhose outputs are not completely random and can be predicted to some extent, then it is much easier to eavesdrop.
In the original description, the Diffie–Hellman exchange by itself does not provideauthenticationof the communicating parties and can be vulnerable to aman-in-the-middle attack.
Mallory (an active attacker executing the man-in-the-middle attack) may establish two distinct key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing her to decrypt, then re-encrypt, the messages passed between them. Note that Mallory must be in the middle from the beginning and continuing to be so, actively decrypting and re-encrypting messages every time Alice and Bob communicate. If she arrives after the keys have been generated and the encrypted conversation between Alice and Bob has already begun, the attack cannot succeed. If she is ever absent, her previous presence is then revealed to Alice and Bob. They will know that all of their private conversations had been intercepted and decoded by someone in the channel. In most cases it will not help them get Mallory's private key, even if she used the same key for both exchanges.
A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie–Hellman, such asSTS protocol, may be used instead to avoid these types of attacks.
ACVEreleased in 2021 (CVE-2002-20001) disclosed adenial-of-service attack(DoS) against protocol variants that use ephemeral keys, called the D(HE)at attack.[19]The attack exploits the fact that the Diffie–Hellman key exchange allows attackers to send arbitrary numbers that are actually not public keys, triggering expensive modular exponentiation calculations on the victim's side. Further CVEs disclosed that Diffie–Hellman key exchange implementations may use long private exponents (CVE-2022-40735), which arguably makes modular exponentiation calculations unnecessarily expensive,[20]or may unnecessarily check the peer's public key (CVE-2024-41996), which has a similar resource requirement to key calculation with a long exponent.[21]An attacker can exploit both vulnerabilities together.
Thenumber field sievealgorithm, which is generally the most effective in solving thediscrete logarithm problem, consists of four computational steps. The first three steps only depend on the order of the group G, not on the specific number whose finite log is desired.[22]It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less.[3]Byprecomputingthe first three steps of the number field sieve for the most common groups, an attacker need only carry out the last step, which is much less computationally expensive than the first three steps, to obtain a specific logarithm. TheLogjamattack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so calledexport grade. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs.[3]
As estimated by the authors behind the Logjam attack, the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would cost on the order of $100 million, well within the budget of a large nationalintelligence agencysuch as the U.S.National Security Agency(NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims inleaked NSA documentsthat NSA is able to break much of current cryptography.[3]
To avoid these vulnerabilities, the Logjam authors recommend use ofelliptic curve cryptography, for which no similar attack is known. Failing that, they recommend that the order,p, of the Diffie–Hellman group should be at least 2048 bits. They estimate that the pre-computation required for a 2048-bit prime is 109times more difficult than for 1024-bit primes.[3]
Quantum computerscan break public-key cryptographic schemes, such as RSA, finite-field DH and elliptic-curve DH key-exchange protocols, usingShor's algorithmfor solving thefactoring problem, thediscrete logarithm problem, and the period-finding problem. Apost-quantum variant of Diffie-Hellman algorithmwas proposed in 2023; it relies on a combination of the quantum-resistant CRYSTALS-Kyber protocol and the older elliptic-curveX25519protocol. A quantum Diffie-Hellman key-exchange protocol that relies on aquantum one-way function, and whose security rests on fundamental principles of quantum mechanics, has also been proposed in the literature.[23]
Public key encryption schemes based on the Diffie–Hellman key exchange have been proposed. The first such scheme is theElGamal encryption. A more modern variant is theIntegrated Encryption Scheme.
Protocols that achieveforward secrecygenerate new key pairs for eachsessionand discard them at the end of the session. The Diffie–Hellman key exchange is a frequent choice for such protocols, because of its fast key generation.
When Alice and Bob share a password, they may use apassword-authenticated key agreement(PK) form of Diffie–Hellman to prevent man-in-the-middle attacks. One simple scheme is to compare thehashofsconcatenated with the password calculated independently on both ends of the channel. A feature of these schemes is that an attacker can only test one specific password on each iteration with the other party, and so the system provides good security with relatively weak passwords. This approach is described inITU-TRecommendationX.1035, which is used by theG.hnhome networking standard.
An example of such a protocol is theSecure Remote Password protocol.
It is also possible to use Diffie–Hellman as part of apublic key infrastructure, allowing Bob to encrypt a message so that only Alice will be able to decrypt it, with no prior communication between them other than Bob having trusted knowledge of Alice's public key. Alice's public key is(gamodp,g,p){\displaystyle (g^{a}{\bmod {p}},g,p)}. To send her a message, Bob chooses a randomband then sends Alicegbmodp{\displaystyle g^{b}{\bmod {p}}}(unencrypted) together with the message encrypted with symmetric key(ga)bmodp{\displaystyle (g^{a})^{b}{\bmod {p}}}. Only Alice can determine the symmetric key and hence decrypt the message because only she hasa(the private key). A pre-shared public key also prevents man-in-the-middle attacks.
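A rough sketch of that ElGamal-style use of a static public key: Bob derives a symmetric key from (g^a)^b with a hash and sends g^b alongside the symmetrically encrypted message. The one-time-pad-style XOR below stands in for a proper key-derivation function and symmetric cipher, and all names and parameters are illustrative.

```python
import hashlib, secrets

p, g = 23, 5                      # toy group; real use needs a large safe prime
a = 6                             # Alice's long-term private key
A = pow(g, a, p)                  # Alice's public key (g^a mod p, g, p)

def encrypt_to_alice(message: bytes):
    b = secrets.randbelow(p - 2) + 1              # Bob's ephemeral secret
    shared = pow(A, b, p)                         # (g^a)^b mod p
    key = hashlib.sha256(str(shared).encode()).digest()
    ciphertext = bytes(m ^ k for m, k in zip(message, key))  # toy cipher
    return pow(g, b, p), ciphertext               # send (g^b mod p, ciphertext)

def decrypt(gb: int, ciphertext: bytes) -> bytes:
    shared = pow(gb, a, p)                        # only Alice knows a
    key = hashlib.sha256(str(shared).encode()).digest()
    return bytes(c ^ k for c, k in zip(ciphertext, key))

gb, ct = encrypt_to_alice(b"hi")
assert decrypt(gb, ct) == b"hi"
```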
In practice, Diffie–Hellman is not used in this way, withRSAbeing the dominant public key algorithm. This is largely for historical and commercial reasons, namely thatRSA Securitycreated acertificate authorityfor key signing that becameVerisign. Diffie–Hellman, as elaborated above, cannot directly be used to sign certificates. However, theElGamalandDSAsignature algorithms are mathematically related to it, as well asMQV,STSand theIKEcomponent of theIPsecprotocol suite for securingInternet Protocolcommunications.
|
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
|
TheRabin cryptosystemis a family ofpublic-key encryptionschemes
based on atrapdoor functionwhose security, like that ofRSA, is related to the difficulty ofinteger factorization.[1][2]
The Rabin trapdoor function has the advantage that inverting it has beenmathematicallyproven to be as hard as factoring integers, while there is no such proof known for the RSA trapdoor function.
It has the disadvantage that each output of the Rabin function can be generated by any of four possible inputs; if each output is a ciphertext, extra complexity is required on decryption to identify which of the four possible inputs was the true plaintext.
Naive attempts to work around this often either enable a chosen-ciphertext attack to recover the secret key or, by encoding redundancy in the plaintext space, invalidate the proof of security relative to factoring.[1]
Public-key encryption schemes based on the Rabin trapdoor function are used mainly for examples in textbooks.
In contrast, RSA is the basis of standard public-key encryption schemes such asRSAES-PKCS1-v1_5andRSAES-OAEPthat are used widely in practice.
The Rabin trapdoor function was first published as part of theRabin signaturescheme in 1978 byMichael O. Rabin.[3][4][5]The Rabin signature scheme was the firstdigital signaturescheme where forging a signature could be proven to be as hard as factoring.
The trapdoor function was later repurposed in textbooks as an example of apublic-key encryptionscheme,[6][7][1]which came to be known as the Rabin cryptosystem even though Rabin never published it as an encryption scheme.
Like all asymmetric cryptosystems, the Rabin system uses a key pair: apublic keyfor encryption and aprivate keyfor decryption. The public key is published for anyone to use, while the private key remains known only to the recipient of the message.
The keys for the Rabin cryptosystem are generated as follows:
Thenn{\displaystyle n}is the public key and the pair(p,q){\displaystyle (p,q)}is the private key.
A messageM{\displaystyle M}can be encrypted by first converting it to a numberm<n{\displaystyle m<n}using a reversible mapping, then computingc=m2modn{\displaystyle c=m^{2}{\bmod {n}}}. The ciphertext isc{\displaystyle c}.
The messagem{\displaystyle m}can be recovered from the ciphertextc{\displaystyle c}by taking its square root modulon{\displaystyle n}as follows.
One of these four values is the original plaintextm{\displaystyle m}, although which of the four is the correct one cannot be determined without additional information.
We can show that the formulas in step 1 above actually produce the square roots ofc{\displaystyle c}as follows. For the first formula, we want to prove thatmp2≡cmodp{\displaystyle m_{p}^{2}\equiv c{\bmod {p}}}. Sincep≡3mod4,{\displaystyle p\equiv 3{\bmod {4}},}the exponent14(p+1){\textstyle {\frac {1}{4}}(p+1)}is an integer. The proof is trivial ifc≡0modp{\displaystyle c\equiv 0{\bmod {p}}}, so we may assume thatp{\displaystyle p}does not dividec{\displaystyle c}. Note thatc≡m2modpq{\displaystyle c\equiv m^{2}{\bmod {pq}}}implies thatc≡m2modp{\displaystyle c\equiv m^{2}{\bmod {p}}}, so c is aquadratic residuemodulop{\displaystyle p}. Then
The last step is justified byEuler's criterion.
As an example, takep=7{\displaystyle p=7}andq=11{\displaystyle q=11}, thenn=77{\displaystyle n=77}. Takem=20{\displaystyle m=20}as our plaintext. The ciphertext is thusc=m2modn=400mod77=15{\displaystyle c=m^{2}{\bmod {n}}=400{\bmod {77}}=15}.
Decryption proceeds as follows:
and we see thatr3{\displaystyle r_{3}}is the desired plaintext. Note that all four candidates are square roots of 15 mod 77. That is, for each candidate,ri2mod77=15{\displaystyle r_{i}^{2}{\bmod {77}}=15}, so eachri{\displaystyle r_{i}}encrypts to the same value, 15.
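The worked example can be reproduced mechanically. The sketch below (Python, with illustrative variable names) computes the square roots modulo p and q via the exponent (p + 1)/4 and combines them with the Chinese remainder theorem, yielding the four candidates 13, 20, 57 and 64.

```python
# Rabin decryption for the worked example: p = 7, q = 11, n = 77, c = 15.
p, q = 7, 11
n = p * q
c = 15

mp = pow(c, (p + 1) // 4, p)          # square root of c mod p  -> 1
mq = pow(c, (q + 1) // 4, q)          # square root of c mod q  -> 9

# Chinese remainder theorem: combine (+-mp mod p) with (+-mq mod q).
yp = pow(p, -1, q)                    # p^{-1} mod q
yq = pow(q, -1, p)                    # q^{-1} mod p

r1 = (yp * p * mq + yq * q * mp) % n
r2 = n - r1
r3 = (yp * p * mq - yq * q * mp) % n
r4 = n - r3

roots = sorted({r1, r2, r3, r4})
print(roots)                          # [13, 20, 57, 64]; each squares to 15 mod 77
assert all(r * r % n == c for r in roots)
```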
Decrypting produces three false results in addition to the correct one, so that the correct result must be guessed. This is the major disadvantage of the Rabin cryptosystem and one of the factors which have prevented it from finding widespread practical use.
If the plaintext is intended to represent a text message, guessing is not difficult; however, if the plaintext is intended to represent a numerical value, this issue becomes a problem that must be resolved by some kind of disambiguation scheme. It is possible to choose plaintexts with special structures, or to addpadding, to eliminate this problem. A way of removing the ambiguity of inversion was suggested by Blum and Williams: the two primes used are restricted to primes congruent to 3 modulo 4 and the domain of the squaring is restricted to the set of quadratic residues. These restrictions make the squaring function into atrapdoorpermutation, eliminating the ambiguity.[8]
For encryption, a square modulonmust be calculated. This is more efficient thanRSA, which requires the calculation of at least a cube.
For decryption, theChinese remainder theoremis applied, along with twomodular exponentiations. Here the efficiency is comparable to RSA.
It has been proven that any algorithm which finds one of the possible plaintexts for every Rabin-encrypted ciphertext can be used to factor the modulusn{\displaystyle n}. Thus, Rabin decryption for random plaintext is at least as hard as the integer factorization problem, something that has not been proven for RSA. It is generally believed that there is no polynomial-time algorithm for factoring, which implies that there is no efficient algorithm for decrypting a random Rabin-encrypted value without the private key(p,q){\displaystyle (p,q)}.
The Rabin cryptosystem does not provideindistinguishabilityagainstchosen plaintextattacks since the process of encryption is deterministic. An adversary, given a ciphertext and a candidate message, can easily determine whether or not the ciphertext encodes the candidate message (by simply checking whether encrypting the candidate message yields the given ciphertext).
The Rabin cryptosystem is insecure against achosen ciphertext attack(even when challenge messages are chosen uniformly at random from the message space).[6]: 214By adding redundancies, for example, the repetition of the last 64 bits, the system can be made to produce a single root. This thwarts this specific chosen-ciphertext attack, since the decryption algorithm then only produces the root that the attacker already knows. If this technique is applied, the proof of the equivalence with the factorization problem fails, so it is uncertain as of 2004 if this variant is secure. TheHandbook of Applied Cryptographyby Menezes, Oorschot and Vanstone considers this equivalence probable, however, as long as the finding of the roots remains a two-part process (1. rootsmodp{\displaystyle {\bmod {p}}}andmodq{\displaystyle {\bmod {q}}}and 2. application of the Chinese remainder theorem).
|
https://en.wikipedia.org/wiki/Rabin_cryptosystem
|
Intheoretical computer scienceand mathematics,computational complexity theoryfocuses on classifyingcomputational problemsaccording to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as analgorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematicalmodels of computationto study these problems and quantifying theircomputational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used incommunication complexity), the number ofgatesin a circuit (used incircuit complexity) and the number of processors (used inparallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. TheP versus NP problem, one of the sevenMillennium Prize Problems,[1]is part of the field of computational complexity.
Closely related fields intheoretical computer scienceareanalysis of algorithmsandcomputability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.
Acomputational problemcan be viewed as an infinite collection ofinstancestogether with a set (possibly empty) ofsolutionsfor every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem ofprimality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, theinstanceis a particular input to the problem, and thesolutionis the output corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of thetravelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites inMilanwhose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is astringover analphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings arebitstrings. As in a real-worldcomputer, mathematical objects other than bitstrings must be suitably encoded. For example,integerscan be represented inbinary notation, andgraphscan be encoded directly via theiradjacency matrices, or by encoding theiradjacency listsin binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problemsare one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is eitheryesorno(alternatively, 1 or 0). A decision problem can be viewed as aformal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of analgorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answeryes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrarygraph. The problem consists in deciding whether the given graph isconnectedor not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
Afunction problemis a computational problem where a single output (of atotal function) is expected for every input, but the output is more complex than that of adecision problem—that is, the output is not just yes or no. Notable examples include thetraveling salesman problemand theinteger factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples(a,b,c){\displaystyle (a,b,c)}such that the relationa×b=c{\displaystyle a\times b=c}holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
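The recasting can be made concrete with a minimal Python sketch (the function name below is illustrative): the decision version of multiplication is simply a membership test for the set of triples described above.

def in_multiplication_language(a: int, b: int, c: int) -> bool:
    # Membership test for the set of triples (a, b, c) with a * b = c.
    return a * b == c

# The instance (3, 7, 21) is a member of the language; (3, 7, 20) is not.
print(in_multiplication_language(3, 7, 21))  # True
print(in_multiplication_language(3, 7, 20))  # False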
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with2n{\displaystyle 2n}vertices compared to the time taken for a graph withn{\displaystyle n}vertices?
If the input size isn{\displaystyle n}, the time taken can be expressed as a function ofn{\displaystyle n}. Since the time taken on different inputs of the same size can be different, the worst-case time complexityT(n){\displaystyle T(n)}is defined to be the maximum time taken over all inputs of sizen{\displaystyle n}. IfT(n){\displaystyle T(n)}is a polynomial inn{\displaystyle n}, then the algorithm is said to be apolynomial timealgorithm.Cobham's thesisargues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of theChurch–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as aRAM machine,Conway's Game of Life,cellular automata,lambda calculusor any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such asdeterministic Turing machines,probabilistic Turing machines,non-deterministic Turing machines,quantum Turing machines,symmetric Turing machinesandalternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are calledrandomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model; it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, seenon-deterministic algorithm.
Many machine models different from the standardmulti-tape Turing machineshave been proposed in the literature, for examplerandom-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[2]What all these models have in common is that the machines operatedeterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so thatnon-deterministic timeis a very important resource in analyzing computational problems.
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as thedeterministic Turing machineis used. The time required by a deterministic Turing machineM{\displaystyle M}on inputx{\displaystyle x}is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machineM{\displaystyle M}is said to operate within timef(n){\displaystyle f(n)}if the time required byM{\displaystyle M}on each input of lengthn{\displaystyle n}is at mostf(n){\displaystyle f(n)}. A decision problemA{\displaystyle A}can be solved in timef(n){\displaystyle f(n)}if there exists a Turing machine operating in timef(n){\displaystyle f(n)}that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within timef(n){\displaystyle f(n)}on a deterministic Turing machine is then denoted byDTIME(f(n){\displaystyle f(n)}).
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, anycomplexity measurecan be viewed as a computational resource. Complexity measures are very generally defined by theBlum complexity axioms. Other complexity measures used in complexity theory includecommunication complexity,circuit complexity, anddecision tree complexity.
The complexity of an algorithm is often expressed usingbig O notation.
Thebest, worst and average casecomplexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of sizen{\displaystyle n}may be faster to solve than others, we define the following complexities: the best-case complexity is the minimum time needed over all inputs of size n; the average-case complexity is the expected time over all inputs of size n, with respect to some distribution on the inputs; the amortized complexity is the average time per operation over a worst-case sequence of operations; and the worst-case complexity is the maximum time needed over all inputs of size n.
The order from cheap to costly is: Best, average (ofdiscrete uniform distribution), amortized, worst.
For example, the deterministic sorting algorithmquicksortaddresses the problem of sorting a list of integers. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes timeO(n2{\displaystyle n^{2}}). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting isO(nlogn){\displaystyle O(n\log n)}. The best case occurs when each pivoting divides the list in half, also needingO(nlogn){\displaystyle O(n\log n)}time.
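A minimal Python sketch of this behaviour is given below, with the first element as the (purely illustrative) pivot choice; on an already sorted list every pivot is then the smallest remaining value, which is exactly the quadratic worst case described above.

def quicksort(xs):
    # Sort a list; the pivot is simply the first element.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]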
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field ofanalysis of algorithms. To show an upper boundT(n){\displaystyle T(n)}on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at mostT(n){\displaystyle T(n)}. However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound ofT(n){\displaystyle T(n)}for a problem requires showing that no algorithm can have time complexity lower thanT(n){\displaystyle T(n)}.
Upper and lower bounds are usually stated using thebig O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, ifT(n)=7n2+15n+40{\displaystyle T(n)=7n^{2}+15n+40}, in big O notation one would writeT(n)∈O(n2){\displaystyle T(n)\in O(n^{2})}.
Acomplexity classis a set of problems of related complexity. Simpler complexity classes are defined by the following factors: the type of computational problem (most commonly, decision problems), the model of computation (most commonly, the deterministic Turing machine), and the resource that is being bounded together with the bound itself (for example, polynomial time or logarithmic space).
Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: the set of decision problems solvable by a deterministic Turing machine within time f(n) (this complexity class is known as DTIME(f(n))).
But bounding the computation time above by some concrete functionf(n){\displaystyle f(n)}often yields complexity classes that depend on the chosen machine model. For instance, the language{xx∣xis any binary string}{\displaystyle \{xx\mid x{\text{ is any binary string}}\}}can be solved inlinear timeon a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time,Cobham-Edmonds thesisstates that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity classP, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems isFP.
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
Logarithmic-space classes do not account for the space required to represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE bySavitch's theorem.
Other important complexity classes includeBPP,ZPPandRP, which are defined usingprobabilistic Turing machines;ACandNC, which are defined using Boolean circuits; andBQPandQMA, which are defined using quantum Turing machines.#Pis an important complexity class of counting problems (not decision problems). Classes likeIPandAMare defined usingInteractive proof systems.ALLis the class of all decision problems.
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n{\displaystyle n}) is contained in DTIME(n2{\displaystyle n^{2}}), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, thetime hierarchy theoremstates thatDTIME(o(f(n)))⊊DTIME(f(n)⋅log(f(n))){\displaystyle {\mathsf {DTIME}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DTIME}}{\big (}f(n)\cdot \log(f(n)){\big )}}.
Thespace hierarchy theoremstates thatDSPACE(o(f(n)))⊊DSPACE(f(n)){\displaystyle {\mathsf {DSPACE}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DSPACE}}{\big (}f(n){\big )}}.
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problemX{\displaystyle X}can be solved using an algorithm forY{\displaystyle Y},X{\displaystyle X}is no more difficult thanY{\displaystyle Y}, and we say thatX{\displaystyle X}reducestoY{\displaystyle Y}. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such aspolynomial-time reductionsorlog-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
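A minimal Python sketch of this reduction, where multiply stands in for an arbitrary multiplication algorithm and the names are illustrative:

def multiply(a: int, b: int) -> int:
    # Placeholder for any algorithm that multiplies two integers.
    return a * b

def square(a: int) -> int:
    # The reduction: squaring is answered by one call to the
    # multiplication algorithm with the same value on both inputs.
    return multiply(a, a)

print(square(12))  # 144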
This motivates the concept of a problem being hard for a complexity class. A problemX{\displaystyle X}ishardfor a class of problemsC{\displaystyle C}if every problem inC{\displaystyle C}can be reduced toX{\displaystyle X}. Thus no problem inC{\displaystyle C}is harder thanX{\displaystyle X}, since an algorithm forX{\displaystyle X}allows us to solve any problem inC{\displaystyle C}. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set ofNP-hardproblems.
If a problemX{\displaystyle X}is inC{\displaystyle C}and hard forC{\displaystyle C}, thenX{\displaystyle X}is said to becompleteforC{\displaystyle C}. This means thatX{\displaystyle X}is the hardest problem inC{\displaystyle C}. (Since many problems could be equally hard, one might say thatX{\displaystyle X}is one of the hardest problems inC{\displaystyle C}.) Thus the class ofNP-completeproblems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem,Π2{\displaystyle \Pi _{2}}, to another problem,Π1{\displaystyle \Pi _{1}}, would indicate that there is no known polynomial-time solution forΠ1{\displaystyle \Pi _{1}}. This is because a polynomial-time solution toΠ1{\displaystyle \Pi _{1}}would yield a polynomial-time solution toΠ2{\displaystyle \Pi _{2}}. Similarly, because all NP problems can be reduced to the set, finding anNP-completeproblem that can be solved in polynomial time would mean that P = NP.[3]
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called theCobham–Edmonds thesis. The complexity classNP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as theBoolean satisfiability problem, theHamiltonian path problemand thevertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[3]If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types ofinteger programmingproblems inoperations research, many problems inlogistics,protein structure predictioninbiology,[5]and the ability to find formal proofs ofpure mathematicstheorems.[6]The P versus NP problem is one of theMillennium Prize Problemsproposed by theClay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[7]
It was shown by Ladner that ifP≠NP{\displaystyle {\textsf {P}}\neq {\textsf {NP}}}then there exist problems inNP{\displaystyle {\textsf {NP}}}that are neither inP{\displaystyle {\textsf {P}}}norNP{\displaystyle {\textsf {NP}}}-complete.[4]Such problems are calledNP-intermediateproblems. Thegraph isomorphism problem, thediscrete logarithm problemand theinteger factorization problemare examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be inP{\displaystyle {\textsf {P}}}or to beNP{\displaystyle {\textsf {NP}}}-complete.
Thegraph isomorphism problemis the computational problem of determining whether two finitegraphsareisomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is inP{\displaystyle {\textsf {P}}},NP{\displaystyle {\textsf {NP}}}-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[8]If graph isomorphism is NP-complete, thepolynomial time hierarchycollapses to its second level.[9]Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due toLászló BabaiandEugene Lukshas run timeO(2nlogn){\displaystyle O(2^{\sqrt {n\log n}})}for graphs withn{\displaystyle n}vertices, although some recent work by Babai offers some potentially new perspectives on this.[10]
Theinteger factorization problemis the computational problem of determining theprime factorizationof a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less thank{\displaystyle k}. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as theRSAalgorithm. The integer factorization problem is inNP{\displaystyle {\textsf {NP}}}and inco-NP{\displaystyle {\textsf {co-NP}}}(and even in UP and co-UP[11]). If the problem isNP{\displaystyle {\textsf {NP}}}-complete, the polynomial time hierarchy will collapse to its first level (i.e.,NP{\displaystyle {\textsf {NP}}}will equalco-NP{\displaystyle {\textsf {co-NP}}}). The best known algorithm for integer factorization is thegeneral number field sieve, which takes timeO(e(6493)(logn)3(loglogn)23){\displaystyle O(e^{\left({\sqrt[{3}]{\frac {64}{9}}}\right){\sqrt[{3}]{(\log n)}}{\sqrt[{3}]{(\log \log n)^{2}}}})}[12]to factor an odd integern{\displaystyle n}. However, the best knownquantum algorithmfor this problem,Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
Many known complexity classes are suspected to be unequal, but this has not been proved. For instanceP⊆NP⊆PP⊆PSPACE{\displaystyle {\textsf {P}}\subseteq {\textsf {NP}}\subseteq {\textsf {PP}}\subseteq {\textsf {PSPACE}}}, but it is possible thatP=PSPACE{\displaystyle {\textsf {P}}={\textsf {PSPACE}}}. IfP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, thenP{\displaystyle {\textsf {P}}}is not equal toPSPACE{\displaystyle {\textsf {PSPACE}}}either. Since there are many known complexity classes betweenP{\displaystyle {\textsf {P}}}andPSPACE{\displaystyle {\textsf {PSPACE}}}, such asRP{\displaystyle {\textsf {RP}}},BPP{\displaystyle {\textsf {BPP}}},PP{\displaystyle {\textsf {PP}}},BQP{\displaystyle {\textsf {BQP}}},MA{\displaystyle {\textsf {MA}}},PH{\displaystyle {\textsf {PH}}}, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines,co-NP{\displaystyle {\textsf {co-NP}}}is the class containing thecomplementproblems (i.e. problems with theyes/noanswers reversed) ofNP{\displaystyle {\textsf {NP}}}problems. It is believed[13]thatNP{\displaystyle {\textsf {NP}}}is not equal toco-NP{\displaystyle {\textsf {co-NP}}}; however, it has not yet been proven. It is clear that if these two complexity classes are not equal thenP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, sinceP=co-P{\displaystyle {\textsf {P}}={\textsf {co-P}}}. Thus ifP=NP{\displaystyle P=NP}we would haveco-P=co-NP{\displaystyle {\textsf {co-P}}={\textsf {co-NP}}}whenceNP=P=co-P=co-NP{\displaystyle {\textsf {NP}}={\textsf {P}}={\textsf {co-P}}={\textsf {co-NP}}}.
Similarly, it is not known ifL{\displaystyle {\textsf {L}}}(the set of all problems that can be solved in logarithmic space) is strictly contained inP{\displaystyle {\textsf {P}}}or equal toP{\displaystyle {\textsf {P}}}. Again, there are many complexity classes between the two, such asNL{\displaystyle {\textsf {NL}}}andNC{\displaystyle {\textsf {NC}}}, and it is not known if they are distinct or equal classes.
It is suspected thatP{\displaystyle {\textsf {P}}}andBPP{\displaystyle {\textsf {BPP}}}are equal. However, it is currently open ifBPP=NEXP{\displaystyle {\textsf {BPP}}={\textsf {NEXP}}}.
A problem that can theoretically be solved, but requires impractically large resources (e.g., time) to do so, is known as anintractable problem.[14]Conversely, a problem that can be solved in practice is called atractable problem, literally "a problem that can be handled". The terminfeasible(literally "cannot be done") is sometimes used interchangeably withintractable,[15]though this risks confusion with afeasible solutioninmathematical optimization.[16]
Tractable problems are frequently identified with problems that have polynomial-time solutions (P{\displaystyle {\textsf {P}}},PTIME{\displaystyle {\textsf {PTIME}}}); this is known as theCobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that areEXPTIME-hard. IfNP{\displaystyle {\textsf {NP}}}is not the same asP{\displaystyle {\textsf {P}}}, thenNP-hardproblems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for problems of practical size; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not inP{\displaystyle {\textsf {P}}}does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem inPresburger arithmetichas been shown not to be inP{\displaystyle {\textsf {P}}}, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-completeknapsack problemover a wide range of sizes in less than quadratic time, andSAT solversroutinely handle large instances of the NP-completeBoolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes2n{\displaystyle 2^{n}}operations before halting. For smalln{\displaystyle n}, say 100, and assuming for the sake of example that the computer does1012{\displaystyle 10^{12}}operations each second, the program would run for about4×1010{\displaystyle 4\times 10^{10}}years, which is the same order of magnitude as theage of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes1.0001n{\displaystyle 1.0001^{n}}operations is practical untiln{\displaystyle n}gets relatively large.
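The figure quoted above can be checked with a short back-of-the-envelope computation, for example in Python:

operations = 2 ** 100                  # total operations performed
ops_per_second = 10 ** 12              # assumed machine speed
seconds_per_year = 60 * 60 * 24 * 365

years = operations / ops_per_second / seconds_per_year
print(f"{years:.1e} years")            # roughly 4.0e+10 years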
Similarly, a polynomial time algorithm is not always practical. If its running time is, say,n15{\displaystyle n^{15}}, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice evenn3{\displaystyle n^{3}}orn2{\displaystyle n^{2}}algorithms are often impractical on realistic sizes of problems.
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied innumerical analysis. One approach to complexity theory of numerical analysis[17]isinformation based complexity.
Continuous complexity theory can also refer to complexity theory of the use ofanalog computation, which uses continuousdynamical systemsanddifferential equations.[18]Control theorycan be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[19]
An early example of algorithm complexity analysis is the running time analysis of theEuclidean algorithmdone byGabriel Laméin 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines byAlan Turingin 1936, which turned out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" byJuris HartmanisandRichard E. Stearns, which laid out the definitions oftime complexityandspace complexity, and proved the hierarchy theorems.[20]In addition, in 1965Edmondssuggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size.[21]
Earlier papers studying problems solvable by Turing machines with specific bounded resources include[20]John Myhill's definition oflinear bounded automata(Myhill 1960),Raymond Smullyan's study of rudimentary sets (1961), as well asHisao Yamada's paper[22]on real-time computations (1962). Somewhat earlier,Boris Trakhtenbrot(1956), a pioneer in the field from the USSR, studied another specific complexity measure.[23]As he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited fromswitching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".[24]
In 1967,Manuel Blumformulated a set ofaxioms(now known asBlum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-calledspeed-up theorem. The field began to flourish in 1971 whenStephen CookandLeonid Levinprovedthe existence of practically relevant problems that areNP-complete. In 1972,Richard Karptook this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diversecombinatorialandgraph theoreticalproblems, each infamous for its computational intractability, are NP-complete.[25]
https://en.wikipedia.org/wiki/Complexity_of_algorithms
Incryptography, theElGamal encryption systemis anasymmetric key encryption algorithmforpublic-key cryptographywhich is based on the Diffie–Hellman key exchange. It was described byTaher Elgamalin 1985.[1]ElGamal encryption is used in the freeGNU Privacy Guardsoftware, recent versions ofPGP, and othercryptosystems. TheDigital Signature Algorithm(DSA) is a variant of theElGamal signature scheme, which should not be confused with ElGamal encryption.
ElGamal encryption can be defined over anycyclic groupG{\displaystyle G}, such as themultiplicative group of integers modulon, which is cyclic if and only ifnis 1, 2, 4,pkor 2pk, wherepis an odd prime andk> 0. Its security depends upon the difficulty of thedecisional Diffie–Hellman probleminG{\displaystyle G}.
The algorithm can be described as first performing a Diffie–Hellman key exchange to establish a shared secrets{\displaystyle s}, then using this as aone-time padfor encrypting the message. ElGamal encryption is performed in three phases: the key generation, the encryption, and the decryption. The first is purely key exchange, whereas the latter two mix key exchange computations with message computations.
The first party, Alice, generates a key pair as follows: she fixes a cyclic group G of order q with generator g, chooses an integer x uniformly at random from {1, …, q − 1}, and computes h := g^x. The public key is (G, q, g, h) and the private key is x, which is kept secret.
A second party, Bob, encrypts a messageM{\displaystyle M}to Alice under her public key(G,q,g,h){\displaystyle (G,q,g,h)}as follows: he maps the message M to an element m of G, chooses an integer y uniformly at random from {1, …, q − 1}, and computes c1 := g^y, the shared secret s := h^y, and c2 := m · s. The ciphertext sent to Alice is (c1, c2).
Note that if one knows both the ciphertext(c1,c2){\displaystyle (c_{1},c_{2})}and the plaintextm{\displaystyle m}, one can easily find the shared secrets{\displaystyle s}, sincec2⋅m−1=s{\displaystyle c_{2}\cdot m^{-1}=s}. Therefore, a newy{\displaystyle y}and hence a news{\displaystyle s}is generated for every message to improve security. For this reason,y{\displaystyle y}is also called anephemeral key.
Alice decrypts a ciphertext(c1,c2){\displaystyle (c_{1},c_{2})}with her private keyx{\displaystyle x}as follows: she computes the shared secret s := c1^x, recovers m := c2 · s^(−1), and finally maps m back to the message M.
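The three phases can be sketched in a few lines of Python over the multiplicative group of integers modulo a prime; the tiny parameters below are purely illustrative and offer no real security.

import secrets

p = 467                                  # small prime, for illustration only
g = 2                                    # group generator

def keygen():
    x = secrets.randbelow(p - 2) + 1     # private key x
    h = pow(g, x, p)                     # h = g^x
    return x, (p, g, h)                  # private key, public key

def encrypt(public_key, m):
    p_, g_, h = public_key
    y = secrets.randbelow(p_ - 2) + 1    # fresh ephemeral key y per message
    c1 = pow(g_, y, p_)                  # c1 = g^y
    s = pow(h, y, p_)                    # shared secret s = h^y
    c2 = (m * s) % p_                    # c2 = m * s
    return c1, c2

def decrypt(private_key, public_key, ciphertext):
    p_, _, _ = public_key
    c1, c2 = ciphertext
    s = pow(c1, private_key, p_)         # recompute s = c1^x
    return (c2 * pow(s, -1, p_)) % p_    # m = c2 * s^(-1)

x, pub = keygen()
assert decrypt(x, pub, encrypt(pub, 42)) == 42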
Like most public key systems, the ElGamal cryptosystem is usually used as part of ahybrid cryptosystem, where the message itself is encrypted using a symmetric cryptosystem, and ElGamal is then used to encrypt only the symmetric key. This is because asymmetric cryptosystems like ElGamal are usually slower than symmetric ones for the samelevel of security, so it is faster to encrypt the message, which can be arbitrarily large, with a symmetric cipher, and then use ElGamal only to encrypt the symmetric key, which usually is quite small compared to the size of the message.
The security of the ElGamal scheme depends on the properties of the underlying groupG{\displaystyle G}as well as any padding scheme used on the messages. If thecomputational Diffie–Hellman assumption(CDH) holds in the underlying cyclic groupG{\displaystyle G}, then the encryption function isone-way.[2]
If thedecisional Diffie–Hellman assumption(DDH) holds inG{\displaystyle G}, then
ElGamal achievessemantic security.[2][3]Semantic security is not implied by the computational Diffie–Hellman assumption alone. SeeDecisional Diffie–Hellman assumptionfor a discussion of groups where the assumption is believed to hold.
ElGamal encryption is unconditionallymalleable, and therefore is not secure underchosen ciphertext attack. For example, given an encryption(c1,c2){\displaystyle (c_{1},c_{2})}of some (possibly unknown) messagem{\displaystyle m}, one can easily construct a valid encryption(c1,2c2){\displaystyle (c_{1},2c_{2})}of the message2m{\displaystyle 2m}.
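Reusing the names from the illustrative sketch above, the malleability can be demonstrated directly: doubling c2 yields a valid encryption of 2m without any knowledge of the private key.

c1, c2 = encrypt(pub, 21)                # ciphertext for m = 21
forged = (c1, (2 * c2) % p)              # tamper with c2 only
assert decrypt(x, pub, forged) == 42     # decrypts to 2 * 21 = 42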
To achieve chosen-ciphertext security, the scheme must be further modified, or an appropriate padding scheme must be used. Depending on the modification, the DDH assumption may or may not be necessary.
Other schemes related to ElGamal which achieve security against chosen ciphertext attacks have also been proposed. TheCramer–Shoup cryptosystemis secure under chosen ciphertext attack assuming DDH holds forG{\displaystyle G}. Its proof does not use therandom oracle model. Another proposed scheme isDHIES,[4]whose proof requires an assumption that is stronger than the DDH assumption.
ElGamal encryption isprobabilistic, meaning that a singleplaintextcan be encrypted to many possible ciphertexts, with the consequence that a general ElGamal encryption produces a 1:2 expansion in size from plaintext to ciphertext.
Encryption under ElGamal requires twoexponentiations; however, these exponentiations are independent of the message and can be computed ahead of time if needed. Decryption requires one exponentiation and one computation of a group inverse, which can, however, be easily combined into just one exponentiation.
https://en.wikipedia.org/wiki/ElGamal_encryption
Incryptography, ablock cipheris adeterministic algorithmthat operates on fixed-length groups ofbits, calledblocks. Block ciphers are the elementarybuilding blocksof manycryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated viaencryption.
A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude ofmodes of operationhave been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality andauthenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such asuniversal hash functionsandpseudorandom number generators.
A block cipher consists of two pairedalgorithms, one for encryption,E, and the other for decryption,D.[1]Both algorithms accept two inputs: an input block of sizenbits and akeyof sizekbits; and both yield ann-bit output block. The decryption algorithmDis defined to be theinverse functionof encryption, i.e.,D=E−1. More formally,[2][3]a block cipher is specified by an encryption function
which takes as input a keyK, of bit lengthk(called thekey size), and a bit stringP, of lengthn(called theblock size), and returns a stringCofnbits.Pis called theplaintext, andCis termed theciphertext. For eachK, the functionEK(P) is required to be an invertible mapping on{0,1}n. The inverse forEis defined as a function
taking a keyKand a ciphertextCto return a plaintext valueP, such that
For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plain text.[4]
For each keyK,EKis apermutation(abijectivemapping) over the set of input blocks. Each key selects one permutation from the set of(2n)!{\displaystyle (2^{n})!}possible permutations.[5]
The modern design of block ciphers is based on the concept of an iteratedproduct cipher. In his seminal 1949 publication,Communication Theory of Secrecy Systems,Claude Shannonanalyzed product ciphers and suggested them as a means of effectively improving security by combining simple operations such assubstitutionsandpermutations.[6]Iterated product ciphers carry out encryption in multiplerounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers named aFeistel networkafterHorst Feistelis notably implemented in theDEScipher.[7]Many other realizations of block ciphers, such as theAES, are classified assubstitution–permutation networks.[8]
The root of allcryptographicblock formats used within thePayment Card Industry Data Security Standard(PCI DSS) andAmerican National Standards Institute(ANSI) standards lies with theAtalla Key Block(AKB), which was a key innovation of theAtalla Box, the firsthardware security module(HSM). It was developed in 1972 byMohamed M. Atalla, founder ofAtalla Corporation(nowUtimaco Atalla), and released in 1973. The AKB was a key block, which is required to securely interchangesymmetric keysorPINswith other actors in thebanking industry. This secure interchange is performed using the AKB format.[9]The Atalla Box protected over 90% of allATMnetworks in operation as of 1998,[10]and Atalla products still secure the majority of the world's ATM transactions as of 2014.[11]
The publication of the DES cipher by the United States National Bureau of Standards (subsequently the U.S.National Institute of Standards and Technology, NIST) in 1977 was fundamental in the public understanding of modern block cipher design. It also influenced the academic development ofcryptanalytic attacks. Bothdifferentialandlinear cryptanalysisarose out of studies on DES design. As of 2016[update], there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust againstbrute-force attacks.
Most block cipher algorithms are classified asiterated block cipherswhich means that they transform fixed-size blocks ofplaintextinto identically sized blocks ofciphertext, via the repeated application of an invertible transformation known as theround function, with each iteration referred to as around.[12]
Usually, the round functionRtakes differentround keysKias a second input, which are derived from the original key:[13]M_i = R_{K_i}(M_{i−1}) for i = 1, …, r,
whereM0{\displaystyle M_{0}}is the plaintext andMr{\displaystyle M_{r}}the ciphertext, withrbeing the number of rounds.
Frequently,key whiteningis used in addition to this. At the beginning and the end, the data is modified with key material (often withXOR):
Given one of the standard iterated block cipher design schemes, it is fairly easy to construct a block cipher that is cryptographically secure, simply by using a large number of rounds. However, this will make the cipher inefficient. Thus, efficiency is the most important additional design criterion for professional ciphers. Further, a good block cipher is designed to avoid side-channel attacks, such as branch prediction and input-dependent memory accesses that might leak secret data via the cache state or the execution time. In addition, the cipher should be concise, for small hardware and software implementations.
One important type of iterated block cipher known as asubstitution–permutation network(SPN)takes a block of the plaintext and the key as inputs and applies several alternating rounds consisting of asubstitution stagefollowed by apermutation stage—to produce each block of ciphertext output.[14]The non-linear substitution stage mixes the key bits with those of the plaintext, creating Shannon'sconfusion. The linear permutation stage then dissipates redundancies, creatingdiffusion.[15][16]
Asubstitution box(S-box)substitutes a small block of input bits with another block of output bits. This substitution must beone-to-one, to ensure invertibility (hence decryption). A secure S-box will have the property that changing one input bit will change about half of the output bits on average, exhibiting what is known as theavalanche effect—i.e. it has the property that each output bit will depend on every input bit.[17]
Apermutation box(P-box)is apermutationof all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.[18]
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typicallyXOR.[citation needed]
Decryptionis done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).[19]
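A toy substitution–permutation round in Python may make the structure concrete; the 4-bit S-box and the bit permutation below are illustrative choices rather than those of any standardized cipher, and the result is of course not a secure cipher.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]                 # 4-bit S-box
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]   # bit permutation

def substitute(block):
    # Apply the S-box to each of the four nibbles of a 16-bit block.
    out = 0
    for i in range(4):
        nibble = (block >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def permute(block):
    # Move bit i of the block to position PERM[i].
    out = 0
    for i in range(16):
        out |= ((block >> i) & 1) << PERM[i]
    return out

def spn_encrypt(block, round_keys):
    for k in round_keys:
        block ^= k                    # key mixing
        block = substitute(block)     # confusion
        block = permute(block)        # diffusion
    return block

print(hex(spn_encrypt(0x1234, [0x0F0F, 0x5A5A, 0x3C3C])))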
In aFeistel cipher, the block of plain text to be encrypted is split into two equal-sized halves. The round function is applied to one half, using a subkey, and then the output is XORed with the other half. The two halves are then swapped.[20]
LetF{\displaystyle {\rm {F}}}be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively.
Then the basic operation is as follows:[20]
Split the plaintext block into two equal pieces, (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}})
For each roundi=0,1,…,n{\displaystyle i=0,1,\dots ,n}, compute L_{i+1} = R_i and R_{i+1} = L_i ⊕ F(R_i, K_i).
Then the ciphertext is(Rn+1,Ln+1){\displaystyle (R_{n+1},L_{n+1})}.
The decryption of a ciphertext(Rn+1,Ln+1){\displaystyle (R_{n+1},L_{n+1})}is accomplished by computing fori=n,n−1,…,0{\displaystyle i=n,n-1,\ldots ,0}the values R_i = L_{i+1} and L_i = R_{i+1} ⊕ F(L_{i+1}, K_i).
Then(L0,R0){\displaystyle (L_{0},R_{0})}is the plaintext again.
One advantage of the Feistel model compared to asubstitution–permutation networkis that the round functionF{\displaystyle {\rm {F}}}does not have to be invertible.[21]
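A minimal Python sketch on a pair of 16-bit halves illustrates both points: the round function below is an arbitrary, non-invertible mixing step chosen purely for illustration (and the sketch omits the final-swap convention used in the formulas above), yet decryption simply replays the round keys in reverse order.

def f(half, key):
    # An arbitrary, non-invertible round function on 16-bit values.
    return ((half * 0x9E37 + key) ^ (half >> 3)) & 0xFFFF

def feistel_encrypt(left, right, round_keys):
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys):
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
ct = feistel_encrypt(0x1234, 0xABCD, keys)
assert feistel_decrypt(*ct, keys) == (0x1234, 0xABCD)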
The Lai–Massey scheme offers security properties similar to those of theFeistel structure. It also shares the advantage that the round functionF{\displaystyle \mathrm {F} }does not have to be invertible. Another similarity is that it also splits the input block into two equal pieces. However, the round function is applied to the difference between the two, and the result is then added to both half blocks.
LetF{\displaystyle \mathrm {F} }be the round function andH{\displaystyle \mathrm {H} }a half-round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces, (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}})
For each roundi=0,1,…,n{\displaystyle i=0,1,\dots ,n}, compute (L_{i+1}′, R_{i+1}′) = H(L_i′ + T_i, R_i′ + T_i),
whereTi=F(Li′−Ri′,Ki){\displaystyle T_{i}=\mathrm {F} (L_{i}'-R_{i}',K_{i})}and(L0′,R0′)=H(L0,R0){\displaystyle (L_{0}',R_{0}')=\mathrm {H} (L_{0},R_{0})}
Then the ciphertext is(Ln+1,Rn+1)=(Ln+1′,Rn+1′){\displaystyle (L_{n+1},R_{n+1})=(L_{n+1}',R_{n+1}')}.
The decryption of a ciphertext(Ln+1,Rn+1){\displaystyle (L_{n+1},R_{n+1})}is accomplished by computing fori=n,n−1,…,0{\displaystyle i=n,n-1,\ldots ,0}the values (L_i′, R_i′) = H^(−1)(L_{i+1}′ − T_i, R_{i+1}′ − T_i),
whereTi=F(Li+1′−Ri+1′,Ki){\displaystyle T_{i}=\mathrm {F} (L_{i+1}'-R_{i+1}',K_{i})}and(Ln+1′,Rn+1′)=H−1(Ln+1,Rn+1){\displaystyle (L_{n+1}',R_{n+1}')=\mathrm {H} ^{-1}(L_{n+1},R_{n+1})}
Then(L0,R0)=(L0′,R0′){\displaystyle (L_{0},R_{0})=(L_{0}',R_{0}')}is the plaintext again.
Many modern block ciphers and hashes areARXalgorithms—their round function involves only three operations: (A) modular addition, (R)rotationwith fixed rotation amounts, and (X)XOR. Examples includeChaCha20,Speck,XXTEA, andBLAKE. Many authors draw an ARX network, a kind ofdata flow diagram, to illustrate such a round function.[22]
These ARX operations are popular because they are relatively fast and cheap in hardware and software, their implementation can be made extremely simple, and also because they run in constant time, and therefore are immune totiming attacks. Therotational cryptanalysistechnique attempts to attack such round functions.
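As a concrete example of an ARX round function, the ChaCha quarter-round uses only 32-bit modular addition, rotation by fixed amounts, and XOR; the sketch below follows the rotation amounts used by ChaCha (16, 12, 8, 7).

MASK = 0xFFFFFFFF

def rotl32(x, n):
    # Rotate a 32-bit word left by a fixed amount n.
    return ((x << n) | (x >> (32 - n))) & MASK

def chacha_quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

print([hex(w) for w in chacha_quarter_round(0x11111111, 0x01020304,
                                            0x9B8D6F43, 0x01234567)])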
Other operations often used in block ciphers include data-dependent rotations as inRC5andRC6, asubstitution boximplemented as alookup tableas inData Encryption StandardandAdvanced Encryption Standard, apermutation box, and multiplication as inIDEA.
A block cipher by itself allows encryption only of a single data block of the cipher's block length. For a variable-length message, the data must first be partitioned into separate cipher blocks. In the simplest case, known aselectronic codebook(ECB) mode, a message is first split into separate blocks of the cipher's block size (possibly extending the last block withpaddingbits), and then each block is encrypted and decrypted independently. However, such a naive method is generally insecure because equal plaintext blocks will always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output.[23]
To overcome this limitation, several so-calledblock cipher modes of operationhave been designed[24][25]and specified in national recommendations such as NIST 800-38A[26]andBSITR-02102[27]and international standards such asISO/IEC 10116.[28]The general concept is to userandomizationof the plaintext data based on an additional input value, frequently called aninitialization vector, to create what is termedprobabilistic encryption.[29]In the popularcipher block chaining(CBC) mode, for encryption to besecurethe initialization vector passed along with the plaintext message must be a random orpseudo-randomvalue, which is added in anexclusive-ormanner to the first plaintext block before it is encrypted. The resultant ciphertext block is then used as the new initialization vector for the next plaintext block. In thecipher feedback(CFB) mode, which emulates aself-synchronizing stream cipher, the initialization vector is first encrypted and then added to the plaintext block. Theoutput feedback(OFB) mode repeatedly encrypts the initialization vector to create akey streamfor the emulation of asynchronous stream cipher. The newercounter(CTR) mode similarly creates a key stream, but has the advantage of only needing unique and not (pseudo-)random values as initialization vectors; the needed randomness is derived internally by using the initialization vector as a block counter and encrypting this counter for each block.[26]
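A minimal Python sketch of CBC-mode encryption over an arbitrary block cipher follows; block_encrypt is a stand-in for any secure block cipher with a 16-byte block, and the plaintext is assumed to be already padded to a whole number of blocks.

import secrets

BLOCK_SIZE = 16  # bytes, illustrative

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(block_encrypt, key, plaintext):
    iv = secrets.token_bytes(BLOCK_SIZE)          # fresh random IV per message
    previous, blocks = iv, [iv]                   # the IV travels with the ciphertext
    for i in range(0, len(plaintext), BLOCK_SIZE):
        chained = xor_bytes(plaintext[i:i + BLOCK_SIZE], previous)
        previous = block_encrypt(key, chained)    # encrypt the chained block
        blocks.append(previous)
    return b"".join(blocks)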
From asecurity-theoreticpoint of view, modes of operation must provide what is known assemantic security.[30]Informally, it means that given some ciphertext under an unknown key one cannot practically derive any information from the ciphertext (other than the length of the message) over what one would have known without seeing the ciphertext. It has been shown that all of the modes discussed above, with the exception of the ECB mode, provide this property under so-calledchosen plaintext attacks.
Some modes such as the CBC mode only operate on complete plaintext blocks. Simply extending the last block of a message with zero bits is insufficient since it does not allow a receiver to easily distinguish messages that differ only in the number of padding bits. More importantly, such a simple solution gives rise to very efficientpadding oracle attacks.[31]A suitablepadding schemeis therefore needed to extend the last plaintext block to the cipher's block size. While many popular schemes described in standards and in the literature have been shown to be vulnerable to padding oracle attacks,[31][32]a solution that adds a one-bit and then extends the last block with zero-bits, standardized as "padding method 2" in ISO/IEC 9797-1,[33]has been proven secure against these attacks.[32]
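For byte-aligned messages, this padding method amounts to appending a single 0x80 byte (a one bit followed by seven zero bits) and then zero bytes up to the block boundary; a minimal sketch, assuming a 16-byte block size:

BLOCK_SIZE = 16  # bytes, illustrative

def pad(message):
    padded = message + b"\x80"                    # the single one bit
    return padded + b"\x00" * (-len(padded) % BLOCK_SIZE)

def unpad(padded):
    return padded.rstrip(b"\x00")[:-1]            # drop zero bytes, then the 0x80 marker

assert unpad(pad(b"attack at dawn")) == b"attack at dawn"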
When a block cipher is used in a mode of operation, the amount of data that can safely be encrypted under a single key is limited by the block size: by the birthday bound, the probability of a ciphertext-block collision, and hence of information leakage, grows quadratically with the number of blocks processed, and this needs to be taken into account when selecting a block size. There is a trade-off, though, as large block sizes can result in the algorithm becoming inefficient to operate.[34]Earlier block ciphers such as theDEShave typically selected a 64-bit block size, while newer designs such as theAESsupport block sizes of 128 bits or more, with some ciphers supporting a range of different block sizes.[35]
Linear cryptanalysisis a form of cryptanalysis based on findingaffineapproximations to the action of acipher. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other beingdifferential cryptanalysis.[36]
The discovery is attributed toMitsuru Matsui, who first applied the technique to theFEALcipher (Matsui and Yamagishi, 1992).[37]
Integral cryptanalysisis a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences between pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.[citation needed]
In addition to linear and differential cryptanalysis, there is a growing catalog of attacks:truncated differential cryptanalysis, partial differential cryptanalysis,integral cryptanalysis, which encompasses square and integral attacks,slide attacks,boomerang attacks, theXSL attack,impossible differential cryptanalysis, and algebraic attacks. For a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks.[38]
When a block cipher is used in a givenmode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ECB (discussed above) emphatically lacks this property: regardless of how secure the underlying block cipher is, ECB mode can easily be attacked. On the other hand, CBC mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. Note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to "be secure". This section describes two common notions for what properties a block cipher should have. Each corresponds to a mathematical model that can be used to prove properties of higher-level algorithms, such as CBC.
This general approach to cryptography – proving higher-level algorithms (such as CBC) are secure under explicitly stated assumptions regarding their components (such as a block cipher) – is known asprovable security.
Informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher (equipped with a random key) and a random permutation.
To be a bit more precise, letEbe ann-bit block cipher. We imagine the following game:
1. The person running the game flips a coin. If it lands heads, he chooses a random key K and defines the function f = E_K; if it lands tails, he chooses a random permutation f on the set of n-bit blocks.
2. The attacker chooses input blocks X and is told the corresponding values f(X); it may repeat this step up to q times.
3. The attacker guesses how the coin landed. It wins if its guess is correct.
The attacker, which we can model as an algorithm, is called anadversary. The functionf(which the adversary was able to query) is called anoracle.
Note that an adversary can trivially ensure a 50% chance of winning simply by guessing at random (or even by, for example, always guessing "heads"). Therefore, letPE(A) denote the probability that adversaryAwins this game againstE, and define theadvantageofAas 2(PE(A) − 1/2). It follows that ifAguesses randomly, its advantage will be 0; on the other hand, ifAalways wins, then its advantage is 1. The block cipherEis apseudo-random permutation(PRP) if no adversary has an advantage significantly greater than 0, given specified restrictions onqand the adversary's running time. If in Step 2 above adversaries have the option of learningf−1(X) instead off(X) (but still have only small advantages) thenEis astrongPRP (SPRP). An adversary isnon-adaptiveif it chooses allqvalues forXbefore the game begins (that is, it does not use any information gleaned from previous queries to choose eachXas it goes).
These definitions have proven useful for analyzing various modes of operation. For example, one can define a similar game for measuring the security of a block cipher-based encryption algorithm, and then try to show (through areduction argument) that the probability of an adversary winning this new game is not much more thanPE(A) for someA. (The reduction typically provides limits onqand the running time ofA.) Equivalently, ifPE(A) is small for all relevantA, then no attacker has a significant probability of winning the new game. This formalizes the idea that the higher-level algorithm inherits the block cipher's security.
Block ciphers may be evaluated according to multiple criteria in practice. Common factors include:[39][40]
Luciferis generally considered to be the first civilian block cipher, developed atIBMin the 1970s based on work done byHorst Feistel. A revised version of the algorithm was adopted as a U.S. governmentFederal Information Processing Standard: FIPS PUB 46Data Encryption Standard(DES).[42]It was chosen by the U.S. National Bureau of Standards (NBS) after a public invitation for submissions and some internal changes byNBS(and, potentially, theNSA). DES was publicly released in 1976 and has been widely used.[citation needed]
DES was designed to, among other things, resist a certain cryptanalytic attack known to the NSA and rediscovered by IBM, though unknown publicly until rediscovered again and published byEli BihamandAdi Shamirin the late 1980s. The technique is calleddifferential cryptanalysisand remains one of the few general attacks against block ciphers;linear cryptanalysisis another but may have been unknown even to the NSA, prior to its publication byMitsuru Matsui. DES prompted a large amount of other work and publications in cryptography andcryptanalysisin the open community and it inspired many new cipher designs.[citation needed]
DES has a block size of 64 bits and akey sizeof 56 bits. 64-bit blocks became common in block cipher designs after DES. Key length depended on several factors, including government regulation. Many observers[who?]in the 1970s commented that the 56-bit key length used for DES was too short. As time went on, its inadequacy became apparent, especially after aspecial-purpose machine designed to break DESwas demonstrated in 1998 by theElectronic Frontier Foundation. An extension to DES,Triple DES, triple-encrypts each block with either two independent keys (112-bit key and 80-bit security) or three independent keys (168-bit key and 112-bit security). It was widely adopted as a replacement. As of 2011, the three-key version is still considered secure, though theNational Institute of Standards and Technology(NIST) standards no longer permit the use of the two-key version in new applications, due to its 80-bit security level.[43]
TheInternational Data Encryption Algorithm(IDEA) is a block cipher designed byJames MasseyofETH ZurichandXuejia Lai; it was first described in 1991, as an intended replacement for DES.
IDEA operates on 64-bitblocksusing a 128-bit key and consists of a series of eight identical transformations (around) and an output transformation (thehalf-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from differentgroups–modularaddition and multiplication, and bitwiseexclusive or(XOR)– which are algebraically "incompatible" in some sense.
The designers analysed IDEA to measure its strength againstdifferential cryptanalysisand concluded that it is immune under certain assumptions. No successfullinearor algebraic attacks have been reported. As of 2012[update], the best attack which applies to all keys can break a full 8.5-round IDEA using a narrow-bicliques attack about four times faster than brute force.
RC5 is a block cipher designed byRonald Rivestin 1994 which, unlike many other ciphers, has a variable block size (32, 64, or 128 bits), key size (0 to 2040 bits), and a number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key, and 12 rounds.
A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number ofmodularadditions and XORs. The general structure of the algorithm is aFeistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentiallyone-way functionwith the binary expansions of botheand thegolden ratioas sources of "nothing up my sleeve numbers". The tantalizing simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts.
12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts.[44] 18–20 rounds are suggested as sufficient protection.
The Rijndael cipher, developed by Belgian cryptographers Joan Daemen and Vincent Rijmen, was one of the competing designs to replace DES. It won the 5-year public competition to become the AES (Advanced Encryption Standard).
Adopted by NIST in 2001, AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical maximum. AES operates on a 4×4column-major ordermatrix of bytes, termed thestate(versions of Rijndael with a larger block size have additional columns in the state).
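The column-major layout of the AES state can be illustrated with a few lines of Python; the snippet below only shows where the 16 bytes of a block land in the 4×4 state and is not an AES implementation.

    # Sketch: placing a 16-byte block into the 4x4 column-major AES state.
    block = bytes(range(16))          # bytes b0..b15

    # state[row][col] = block[row + 4*col], i.e. bytes fill the state column by column.
    state = [[block[r + 4 * c] for c in range(4)] for r in range(4)]

    for row in state:
        print(row)
    # [0, 4, 8, 12]
    # [1, 5, 9, 13]
    # [2, 6, 10, 14]
    # [3, 7, 11, 15]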
Blowfishis a block cipher, designed in 1993 byBruce Schneierand included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variablekey lengthfrom 1 bit up to 448 bits.[45]It is a 16-roundFeistel cipherand uses large key-dependentS-boxes. Notable features of the design include the key-dependentS-boxesand a highly complexkey schedule.
It was designed as a general-purpose algorithm, intended as an alternative to the aging DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered bypatents, or were commercial/government secrets. Schneier has stated that "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in thepublic domain, and can be freely used by anyone." The same applies toTwofish, a successor algorithm from Schneier.
M. Liskov, R. Rivest, and D. Wagner have described a generalized version of block ciphers called "tweakable" block ciphers.[46]A tweakable block cipher accepts a second input called thetweakalong with its usual plaintext or ciphertext input. The tweak, along with the key, selects the permutation computed by the cipher. If changing tweaks is sufficiently lightweight (compared with a usually fairly expensive key setup operation), then some interesting new operation modes become possible. Thedisk encryption theoryarticle describes some of these modes.
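The sketch below illustrates the general idea in a simplified, LRW/XEX-flavoured way: a mask derived from the tweak whitens the block before and after an ordinary block cipher call. This is a toy illustration only, not the construction from the paper and not a vetted design; it assumes the third-party cryptography package for AES, and the key, tweak and helper names are invented for the example.

    # Toy sketch of the tweakable-block-cipher idea: the tweak selects a mask that is
    # applied before and after an ordinary block cipher (in the spirit of LRW/XEX-style
    # constructions, but simplified and NOT a vetted design).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def _ecb_encrypt(key, block):
        # Encrypt a single 16-byte block with raw AES.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def tweakable_encrypt(key, tweak_key, tweak, plaintext):
        # Derive a 16-byte mask from the tweak, then whiten the block with it.
        mask = _ecb_encrypt(tweak_key, tweak)
        masked = bytes(p ^ m for p, m in zip(plaintext, mask))
        return bytes(c ^ m for c, m in zip(_ecb_encrypt(key, masked), mask))

    key, tweak_key = os.urandom(16), os.urandom(16)
    tweak = (42).to_bytes(16, "big")          # e.g. a disk-sector number as the tweak
    ct = tweakable_encrypt(key, tweak_key, tweak, b"exactly 16 bytes")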
Block ciphers traditionally work over a binary alphabet. That is, both the input and the output are binary strings, consisting of n zeroes and ones. In some situations, however, one may wish to have a block cipher that works over some other alphabet; for example, encrypting 16-digit credit card numbers in such a way that the ciphertext is also a 16-digit number might facilitate adding an encryption layer to legacy software. This is an example of format-preserving encryption. More generally, format-preserving encryption requires a keyed permutation on some finite language. This makes format-preserving encryption schemes a natural generalization of (tweakable) block ciphers. In contrast, traditional encryption schemes, such as CBC, are not permutations because the same plaintext can encrypt to multiple different ciphertexts, even when using a fixed key.
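One simple way to obtain such a keyed permutation on a small domain is "cycle walking": apply a permutation on a slightly larger set repeatedly until the result falls back into the domain. The sketch below does this for 16-digit numbers using a toy hash-based Feistel permutation on 54-bit integers; the Feistel construction here is a stand-in chosen for brevity and is not a secure or standardized format-preserving scheme (real designs use approaches such as FF1/FF3).

    # Sketch of format-preserving encryption by "cycle walking": a toy keyed permutation
    # on 54-bit integers (2**54 > 10**16) is applied repeatedly until the result is again
    # a 16-digit number. The tiny Feistel permutation below is only a stand-in for a real
    # block cipher; it is NOT secure.
    import hashlib

    HALF = 27                                   # two 27-bit halves -> 54-bit block
    MASK = (1 << HALF) - 1

    def _round(key, half, i):
        # Toy round function derived from a hash of the key, round index and half-block.
        data = key + bytes([i]) + half.to_bytes(4, "big")
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big") & MASK

    def feistel_permute(key, x, rounds=8):
        # An invertible (Feistel) permutation on [0, 2**54).
        left, right = x >> HALF, x & MASK
        for i in range(rounds):
            left, right = right, left ^ _round(key, right, i)
        return (left << HALF) | right

    def encrypt_16_digits(key, n):
        assert 0 <= n < 10**16
        c = feistel_permute(key, n)
        while c >= 10**16:                      # walk the cycle until back in range
            c = feistel_permute(key, c)
        return c

    print(encrypt_16_digits(b"demo key", 4111111111111111))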
Block ciphers can be used to build other cryptographic primitives, such as those below. For these other primitives to be cryptographically secure, care has to be taken to build them the right way.
Just as block ciphers can be used to build hash functions (SHA-1 and SHA-2, for instance, are based on block ciphers that are also used independently as SHACAL), hash functions can be used to build block ciphers. Examples of such block ciphers are BEAR and LION.
|
https://en.wikipedia.org/wiki/Block_cipher#Modes_of_operation
|
Astream cipheris asymmetric keycipherwhere plaintext digits are combined with apseudorandomcipher digit stream (keystream). In a stream cipher, eachplaintextdigitis encrypted one at a time with the corresponding digit of the keystream, to give a digit of theciphertextstream. Since encryption of each digit is dependent on the current state of the cipher, it is also known asstate cipher. In practice, a digit is typically abitand the combining operation is anexclusive-or(XOR).
The pseudorandom keystream is typically generated serially from a randomseed valueusing digitalshift registers. Theseed valueserves as thecryptographic keyfor decrypting the ciphertext stream. Stream ciphers represent a different approach to symmetric encryption fromblock ciphers. Block ciphers operate on large blocks of digits with a fixed, unvarying transformation. This distinction is not always clear-cut: in somemodes of operation, a block cipher primitive is used in such a way that it acts effectively as a stream cipher. Stream ciphers typically execute at a higher speed than block ciphers and have lower hardware complexity. However, stream ciphers can be susceptible to security breaches (seestream cipher attacks); for example, when the same starting state (seed) is used twice.
Stream ciphers can be viewed as approximating the action of a proven unbreakable cipher, theone-time pad(OTP). A one-time pad uses akeystreamof completelyrandomdigits. The keystream is combined with the plaintext digits one at a time to form the ciphertext. This system was proven to be secure byClaude E. Shannonin 1949.[1]However, the keystream must be generated completely at random with at least the same length as the plaintext and cannot be used more than once. This makes the system cumbersome to implement in many practical applications, and as a result the one-time pad has not been widely used, except for the most critical applications. Key generation, distribution and management are critical for those applications.
A stream cipher makes use of a much smaller and more convenient key such as 128 bits. Based on this key, it generates a pseudorandom keystream which can be combined with the plaintext digits in a similar fashion to the one-time pad. However, this comes at a cost. The keystream is now pseudorandom and so is not truly random. The proof of security associated with the one-time pad no longer holds. It is quite possible for a stream cipher to be completely insecure.[citation needed]
A stream cipher generates successive elements of the keystream based on an internal state. This state is updated in essentially two ways: if the state changes independently of the plaintext orciphertextmessages, the cipher is classified as asynchronousstream cipher. By contrast,self-synchronisingstream ciphers update their state based on previous plaintext or ciphertext digits. A system that incorporates the plaintext into the key is also known as anautokey cipheror autoclave cipher.
In asynchronous stream ciphera stream of pseudorandom digits is generated independently of the plaintext and ciphertext messages, and then combined with the plaintext (to encrypt) or the ciphertext (to decrypt). In the most common form, binary digits are used (bits), and the keystream is combined with the plaintext using theexclusive oroperation (XOR). This is termed abinary additive stream cipher.
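A minimal sketch of that combining step is shown below: keystream bytes are XORed with plaintext bytes, and decryption is the identical operation. ChaCha20 from the third-party cryptography package stands in here for whatever keystream generator a real design would use; the key and nonce are generated only for the example.

    # Minimal sketch of a binary additive stream cipher: the keystream is XORed with the
    # plaintext byte by byte. ChaCha20 (via the "cryptography" package) stands in for any
    # pseudorandom keystream generator.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    key, nonce = os.urandom(32), os.urandom(16)
    plaintext = b"attack at dawn"

    # Obtain keystream bytes by "encrypting" zeros, then combine with XOR.
    ks_gen = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
    keystream = ks_gen.update(bytes(len(plaintext)))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

    # Decryption is the same operation with the same keystream.
    recovered = bytes(c ^ k for c, k in zip(ciphertext, keystream))
    assert recovered == plaintext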
In a synchronous stream cipher, the sender and receiver must be exactly in step for decryption to be successful. If digits are added or removed from the message during transmission, synchronisation is lost. To restore synchronisation, various offsets can be tried systematically to obtain the correct decryption. Another approach is to tag the ciphertext with markers at regular points in the output.
If, however, a digit is corrupted in transmission, rather than added or lost, only a single digit in the plaintext is affected and the error does not propagate to other parts of the message. This property is useful when the transmission error rate is high; however, it makes it less likely the error would be detected without further mechanisms. Moreover, because of this property, synchronous stream ciphers are very susceptible toactive attacks: if an attacker can change a digit in the ciphertext, they might be able to make predictable changes to the corresponding plaintext bit; for example, flipping a bit in the ciphertext causes the same bit to be flipped in the plaintext.
Another approach uses several of the previousNciphertext digits to compute the keystream. Such schemes are known asself-synchronizing stream ciphers,asynchronous stream ciphersorciphertext autokey(CTAK). The idea of self-synchronization was patented in 1946 and has the advantage that the receiver will automatically synchronise with the keystream generator after receivingNciphertext digits, making it easier to recover if digits are dropped or added to the message stream. Single-digit errors are limited in their effect, affecting only up toNplaintext digits.
An example of a self-synchronising stream cipher is a block cipher incipher feedback(CFB)mode.
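A brief sketch of that arrangement, assuming the third-party cryptography package and AES as the underlying block cipher:

    # Sketch: a block cipher (AES) used in cipher feedback (CFB) mode behaves as a
    # self-synchronising stream cipher; uses the "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)
    message = b"stream-like encryption of arbitrary-length data"

    enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    ciphertext = enc.update(message) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
    assert dec.update(ciphertext) + dec.finalize() == message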
Binary stream ciphers are often constructed usinglinear-feedback shift registers(LFSRs) because they can be easily implemented in hardware and can be readily analysed mathematically. The use of LFSRs on their own, however, is insufficient to provide good security. Various schemes have been proposed to increase the security of LFSRs.
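A minimal Fibonacci LFSR can be written in a few lines of Python; the 16-bit width, tap positions and seed below are illustrative choices (the taps correspond to a maximal-length feedback polynomial), not a recommendation.

    # Minimal sketch of a Fibonacci LFSR. The tap positions (16, 14, 13, 11) correspond
    # to a maximal-length 16-bit feedback polynomial; the seed plays the role of the key.
    def lfsr_bits(seed, taps=(16, 14, 13, 11), width=16):
        state = seed & ((1 << width) - 1)
        assert state != 0, "the all-zero state never changes"
        while True:
            out = state & 1                       # output the low bit
            fb = 0
            for t in taps:                        # XOR of the tapped bits
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (width - 1))
            yield out

    gen = lfsr_bits(0xACE1)
    print([next(gen) for _ in range(16)])         # first 16 keystream bits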
Because LFSRs are inherently linear, one technique for removing the linearity is to feed the outputs of several parallel LFSRs into a non-linearBoolean functionto form acombination generator. Various properties of such acombining functionare critical for ensuring the security of the resultant scheme, for example, in order to avoidcorrelation attacks.
Normally LFSRs are stepped regularly. One approach to introducing non-linearity is to have the LFSR clocked irregularly, controlled by the output of a second LFSR. Such generators include thestop-and-go generator, thealternating step generatorand theshrinking generator.
Analternating step generatorcomprises three LFSRs, which we will call LFSR0, LFSR1 and LFSR2 for convenience. The output of one of the registers decides which of the other two is to be used; for instance, if LFSR2 outputs a 0, LFSR0 is clocked, and if it outputs a 1, LFSR1 is clocked instead. The output is the exclusive OR of the last bit produced by LFSR0 and LFSR1. The initial state of the three LFSRs is the key.
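A self-contained sketch of this generator follows; the register widths, feedback taps and seeds are small illustrative choices so the behaviour is easy to trace, not parameters anyone would deploy.

    # Sketch of the alternating step generator described above: LFSR2 decides which of
    # LFSR0/LFSR1 is clocked, and each output bit is the XOR of their latest bits.
    def make_lfsr(seed, taps, width):
        state = seed
        def step():
            nonlocal state
            out = state & 1
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (width - 1))
            return out
        return step

    lfsr0 = make_lfsr(0b10110, (5, 3), 5)        # illustrative 5-bit register
    lfsr1 = make_lfsr(0b1011011, (7, 6), 7)      # illustrative 7-bit register
    lfsr2 = make_lfsr(0b101101101, (9, 5), 9)    # controls which register is clocked

    out0 = out1 = 0
    keystream = []
    for _ in range(16):
        if lfsr2() == 0:
            out0 = lfsr0()                       # clock LFSR0
        else:
            out1 = lfsr1()                       # clock LFSR1
        keystream.append(out0 ^ out1)            # XOR of the last bits produced
    print(keystream)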
The stop-and-go generator (Beth and Piper, 1984) consists of two LFSRs. One LFSR is clocked if the output of a second is a 1, otherwise it repeats its previous output. This output is then (in some versions) combined with the output of a third LFSR clocked at a regular rate.
Theshrinking generatortakes a different approach. Two LFSRs are used, both clocked regularly. If the output of the first LFSR is 1, the output of the second LFSR becomes the output of the generator. If the first LFSR outputs 0, however, the output of the second is discarded, and no bit is output by the generator. This mechanism suffers from timing attacks on the second generator, since the speed of the output is variable in a manner that depends on the second generator's state. This can be alleviated by buffering the output.
Another approach to improving the security of an LFSR is to pass the entire state of a single LFSR into a non-linearfiltering function.
Instead of a linear driving device, one may use a nonlinear update function. For example, Klimov and Shamir proposed triangular functions (T-functions) with a single cycle on n-bit words.
For a stream cipher to be secure, its keystream must have a largeperiod, and it must be impossible torecover the cipher's keyor internal state from the keystream. Cryptographers also demand that the keystream be free of even subtle biases that would let attackersdistinguisha stream from random noise, and free of detectable relationships between keystreams that correspond torelated keysor relatedcryptographic nonces. That should be true for all keys (there should be noweak keys), even if the attacker canknoworchoosesomeplaintextorciphertext.
As with other attacks in cryptography, stream cipher attacks can be certificational; that is, they are not necessarily practical ways to break the cipher, but they indicate that the cipher might have other weaknesses.
Securely using a secure synchronous stream cipher requires that one never reuse the same keystream twice. That generally means a different nonce or key must be supplied to each invocation of the cipher. Application designers must also recognize that most stream ciphers provide not authenticity but privacy: encrypted messages may still have been modified in transit.
Short periods for stream ciphers have been a practical concern. For example, 64-bit block ciphers like DES can be used to generate a keystream in output feedback (OFB) mode. However, when not using full feedback, the resulting stream has a period of around 2^32 blocks on average; for many applications, the period is far too low. For example, if encryption is being performed at a rate of 8 megabytes per second, a stream of period 2^32 blocks will repeat after about an hour.
Some applications using the stream cipherRC4are attackable because of weaknesses in RC4's key setup routine; new applications should either avoid RC4 or make sure all keys are unique and ideallyunrelated(such as generated by a well-seededCSPRNGor acryptographic hash function) and that the first bytes of the keystream are discarded.
The elements of stream ciphers are often much simpler to understand than block ciphers and are thus less likely to hide any accidental or malicious weaknesses.
Stream ciphers are often used for their speed and simplicity of implementation in hardware, and in applications where plaintext comes in quantities of unknowable length like a securewirelessconnection. If ablock cipher(not operating in a stream cipher mode) were to be used in this type of application, the designer would need to choose either transmission efficiency or implementation complexity, since block ciphers cannot directly work on blocks shorter than their block size. For example, if a 128-bit block cipher received separate 32-bit bursts of plaintext, three quarters of the data transmitted would bepadding. Block ciphers must be used inciphertext stealingorresidual block terminationmode to avoid padding, while stream ciphers eliminate this issue by naturally operating on the smallest unit that can be transmitted (usually bytes).
Another advantage of stream ciphers in military cryptography is that the cipher stream can be generated in a separate box that is subject to strict security measures and fed to other devices such as a radio set, which will perform the XOR operation as part of their function. The latter device can then be designed and used in less stringent environments.
ChaCha is becoming the most widely used stream cipher in software;[2] others include: RC4, A5/1, A5/2, Chameleon, FISH, Helix, ISAAC, MUGI, Panama, Phelix, Pike, Salsa20, SEAL, SOBER, SOBER-128, and WAKE.
|
https://en.wikipedia.org/wiki/Stream_cipher
|
Inmathematics, for givenreal numbersa{\displaystyle a}andb{\displaystyle b}, thelogarithmlogb(a){\displaystyle \log _{b}(a)}is a numberx{\displaystyle x}such thatbx=a{\displaystyle b^{x}=a}. Analogously, in anygroupG{\displaystyle G}, powersbk{\displaystyle b^{k}}can be defined for allintegersk{\displaystyle k}, and thediscrete logarithmlogb(a){\displaystyle \log _{b}(a)}is an integerk{\displaystyle k}such thatbk=a{\displaystyle b^{k}=a}. Inarithmetic moduloan integerm{\displaystyle m}, the more commonly used term isindex: One can writek=indba(modm){\displaystyle k=\mathbb {ind} _{b}a{\pmod {m}}}(read "the index ofa{\displaystyle a}to the baseb{\displaystyle b}modulom{\displaystyle m}") forbk≡a(modm){\displaystyle b^{k}\equiv a{\pmod {m}}}ifb{\displaystyle b}is aprimitive rootofm{\displaystyle m}andgcd(a,m)=1{\displaystyle \gcd(a,m)=1}.
Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. In cryptography, the computational complexity of the discrete logarithm problem, along with its application, was first proposed in theDiffie–Hellman problem. Several importantalgorithmsinpublic-key cryptography, such asElGamal, base their security on thehardness assumptionthat the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution.[1]
LetG{\displaystyle G}be any group. Denote itsgroup operationby multiplication and itsidentity elementby1{\displaystyle 1}. Letb{\displaystyle b}be any element ofG{\displaystyle G}. For any positive integerk{\displaystyle k}, the expressionbk{\displaystyle b^{k}}denotes the product ofb{\displaystyle b}with itselfk{\displaystyle k}times:[2]bk=b⋅b⋯b{\displaystyle b^{k}=b\cdot b\cdots b}(withk{\displaystyle k}factors).
Similarly, letb−k{\displaystyle b^{-k}}denote the product ofb−1{\displaystyle b^{-1}}with itselfk{\displaystyle k}times. Fork=0{\displaystyle k=0}, thek{\displaystyle k}thpower is the identity:b0=1{\displaystyle b^{0}=1}.
Leta{\displaystyle a}also be an element ofG{\displaystyle G}. An integerk{\displaystyle k}that solves the equationbk=a{\displaystyle b^{k}=a}is termed adiscrete logarithm(or simplylogarithm, in this context) ofa{\displaystyle a}to the baseb{\displaystyle b}. One writesk=logba{\displaystyle k=\log _{b}a}.
The powers of 10 are …, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000, …
For any numbera{\displaystyle a}in this list, one can computelog10a{\displaystyle \log _{10}a}. For example,log1010000=4{\displaystyle \log _{10}{10000}=4}, andlog100.001=−3{\displaystyle \log _{10}{0.001}=-3}. These are instances of the discrete logarithm problem.
Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equationlog1053=1.724276…{\displaystyle \log _{10}{53}=1.724276\ldots }means that101.724276…=53{\displaystyle 10^{1.724276\ldots }=53}. While integer exponents can be defined in any group using products and inverses, arbitrary real exponents, such as this 1.724276…, require other concepts such as theexponential function.
Ingroup-theoreticterms, the powers of 10 form acyclic groupG{\displaystyle G}under multiplication, and 10 is ageneratorfor this group. The discrete logarithmlog10a{\displaystyle \log _{10}a}is defined for anya{\displaystyle a}inG{\displaystyle G}.
A similar example holds for any non-zero real numberb{\displaystyle b}. The powers form a multiplicativesubgroupG={…,b−2,b−1,1,b1,b2,…}{\displaystyle G=\{\ldots ,b^{-2},b^{-1},1,b^{1},b^{2},\ldots \}}of the non-zero real numbers. For any elementa{\displaystyle a}ofG{\displaystyle G}, one can computelogba{\displaystyle \log _{b}a}.
One of the simplest settings for discrete logarithms is the groupZp×. This is the group of multiplicationmodulotheprimep{\displaystyle p}. Its elements are non-zerocongruence classesmodulop{\displaystyle p}, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulop{\displaystyle p}.
Thek{\displaystyle k}thpowerof one of the numbers in this group may be computed by finding itsk{\displaystyle k}thpower as an integer and then finding the remainder after division byp{\displaystyle p}. When the numbers involved are large, it is more efficient to reduce modulop{\displaystyle p}multiple times during the computation. Regardless of the specific algorithm used, this operation is calledmodular exponentiation. For example, considerZ17×. To compute34{\displaystyle 3^{4}}in this group, compute34=81{\displaystyle 3^{4}=81}, and then divide81{\displaystyle 81}by17{\displaystyle 17}, obtaining a remainder of13{\displaystyle 13}. Thus34=13{\displaystyle 3^{4}=13}in the groupZ17×.
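The square-and-multiply method below reduces modulo p after every step, as described above; Python's built-in pow(b, k, p) performs the same computation, and the worked 3^4 mod 17 example is repeated for comparison.

    # Square-and-multiply modular exponentiation, reducing modulo p at every step;
    # Python's built-in pow(b, k, p) does the same thing.
    def mod_pow(b, k, p):
        result = 1
        b %= p
        while k > 0:
            if k & 1:                   # multiply in the current power of b when the bit is set
                result = (result * b) % p
            b = (b * b) % p             # square
            k >>= 1
        return result

    print(mod_pow(3, 4, 17), pow(3, 4, 17))   # both print 13, matching the example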
The discrete logarithm is just the inverse operation. For example, consider the equation3k≡13(mod17){\displaystyle 3^{k}\equiv 13{\pmod {17}}}. From the example above, one solution isk=4{\displaystyle k=4}, but it is not the only solution. Since316≡1(mod17){\displaystyle 3^{16}\equiv 1{\pmod {17}}}—as follows fromFermat's little theorem— it also follows that ifn{\displaystyle n}is an integer then34+16n≡34⋅(316)n≡34⋅1n≡34≡13(mod17){\displaystyle 3^{4+16n}\equiv 3^{4}\cdot (3^{16})^{n}\equiv 3^{4}\cdot 1^{n}\equiv 3^{4}\equiv 13{\pmod {17}}}. Hence the equation has infinitely many solutions of the form4+16n{\displaystyle 4+16n}. Moreover, because16{\displaystyle 16}is the smallest positive integerm{\displaystyle m}satisfying3m≡1(mod17){\displaystyle 3^{m}\equiv 1{\pmod {17}}}, these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint thatk≡4(mod16){\displaystyle k\equiv 4{\pmod {16}}}.
In the special case whereb{\displaystyle b}is the identity element1{\displaystyle 1}of the groupG{\displaystyle G}, the discrete logarithmlogba{\displaystyle \log _{b}a}is undefined fora{\displaystyle a}other than1{\displaystyle 1}, and every integerk{\displaystyle k}is a discrete logarithm fora=1{\displaystyle a=1}.
Powers obey the usual algebraic identitybk+l=bk⋅bl{\displaystyle b^{k+l}=b^{k}\cdot b^{l}}.[2]In other words, thefunction
defined byf(k)=bk{\displaystyle f(k)=b^{k}}is agroup homomorphismfrom the group of integersZ{\displaystyle \mathbf {Z} }under additionontothesubgroupH{\displaystyle H}ofG{\displaystyle G}generatedbyb{\displaystyle b}. For alla{\displaystyle a}inH{\displaystyle H},logba{\displaystyle \log _{b}a}exists.Conversely,logba{\displaystyle \log _{b}a}does not exist fora{\displaystyle a}that are not inH{\displaystyle H}.
IfH{\displaystyle H}isinfinite, thenlogba{\displaystyle \log _{b}a}is also unique, and the discrete logarithm amounts to agroup isomorphismlogb:H→Z{\displaystyle \log _{b}\colon H\to \mathbf {Z} }.
On the other hand, ifH{\displaystyle H}isfiniteofordern{\displaystyle n}, thenlogba{\displaystyle \log _{b}a}is unique only up tocongruence modulon{\displaystyle n}, and the discrete logarithm amounts to a group isomorphismlogb:H→Zn{\displaystyle \log _{b}\colon H\to \mathbf {Z} _{n}},
whereZn{\displaystyle \mathbf {Z} _{n}}denotes the additive group of integers modulon{\displaystyle n}.
The familiar base change formula for ordinary logarithms remains valid: Ifc{\displaystyle c}is another generator ofH{\displaystyle H}, thenlogca=logcb⋅logba{\displaystyle \log _{c}a=\log _{c}b\cdot \log _{b}a}.
The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general.
A general algorithm for computinglogba{\displaystyle \log _{b}a}in finite groupsG{\displaystyle G}is to raiseb{\displaystyle b}to larger and larger powersk{\displaystyle k}until the desireda{\displaystyle a}is found. This algorithm is sometimes calledtrial multiplication. It requiresrunning timelinearin the size of the groupG{\displaystyle G}and thusexponentialin the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groupsG{\displaystyle G}.
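A direct rendering of trial multiplication in Python, using the worked example from the previous section:

    # Trial multiplication: raise b to successive powers until a is found.
    # Running time is linear in the group order, so this is only usable for tiny groups.
    def discrete_log_naive(b, a, p):
        x = 1
        for k in range(p):              # the order of Z_p^x divides p - 1
            if x == a:
                return k
            x = (x * b) % p
        return None                     # a is not in the subgroup generated by b

    print(discrete_log_naive(3, 13, 17))      # 4, as in the example above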
More sophisticated algorithms exist, usually inspired by similar algorithms forinteger factorization. These algorithms run faster than the naïve algorithm, some of them proportional to thesquare rootof the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs inpolynomial time(in the number of digits in the size of the group).
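One such square-root-time method is baby-step giant-step, sketched below; it trades roughly the square root of n in memory for the square root of n in time, where n is the group order.

    # Baby-step giant-step: a square-root-time discrete logarithm algorithm.
    from math import isqrt

    def discrete_log_bsgs(b, a, p, n=None):
        n = n if n is not None else p - 1              # order of the group (or of b)
        m = isqrt(n) + 1
        baby = {pow(b, j, p): j for j in range(m)}     # baby steps b^j
        giant = pow(b, -m, p)                          # b^(-m) mod p
        y = a % p
        for i in range(m):
            if y in baby:
                return i * m + baby[y]
            y = (y * giant) % p                        # giant steps a * b^(-i*m)
        return None

    print(discrete_log_bsgs(3, 13, 17))                # 4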
There is an efficientquantum algorithmdue toPeter Shor.[3]
Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulop{\displaystyle p}under addition, the powerbk{\displaystyle b^{k}}becomes a productb⋅k{\displaystyle b\cdot k}, and equality means congruence modulop{\displaystyle p}in the integers. Theextended Euclidean algorithmfindsk{\displaystyle k}quickly.
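In that additive setting the logarithm reduces to one modular inversion, which Python exposes as pow(b, -1, p) (computed internally with the extended Euclidean algorithm); the numbers in the example below are arbitrary.

    # In the additive group of integers modulo p, the "discrete logarithm" of a to the
    # base b is just k with b*k = a (mod p), found with a modular inverse. Requires
    # gcd(b, p) = 1, which holds for prime p and b not a multiple of p.
    def additive_dlog(b, a, p):
        return (a * pow(b, -1, p)) % p     # pow(b, -1, p) is the inverse of b mod p

    k = additive_dlog(7, 5, 17)            # solve 7*k = 5 (mod 17)
    print(k, (7 * k) % 17)                 # 8 5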
WithDiffie–Hellman, a cyclic group modulo a primep{\displaystyle p}is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (beingp−1{\displaystyle p-1}) is sufficientlysmooth, i.e. has no largeprime factors.
While computing discrete logarithms and integer factorization are distinct problems, they share some properties: both are believed to be computationally hard in general, no efficient classical algorithm is known for either, and both can be solved in polynomial time on a quantum computer.
There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groupsZp×{\displaystyle \mathbf {Z} _{p}^{\times }}) there is not only no efficient algorithm known for the worst case, but theaverage-case complexitycan be shown to be about as hard as the worst case usingrandom self-reducibility.[4]
At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently usingexponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possiblyone-way functions) have been exploited in the construction of cryptographic systems.
Popular choices for the groupG{\displaystyle G}in discrete logarithm cryptography (DLC) are the cyclic groupsZp×{\displaystyle \mathbf {Z} _{p}^{\times }}(e.g.ElGamal encryption,Diffie–Hellman key exchange, and theDigital Signature Algorithm) and cyclic subgroups ofelliptic curvesoverfinite fields(seeElliptic curve cryptography).
While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of thenumber field sievealgorithm only depend on the groupG{\displaystyle G}, not on the specific elements ofG{\displaystyle G}whose finitelog{\displaystyle \log }is desired. Byprecomputingthese three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group.[5]
It turns out that much internet traffic uses one of a handful of groups that are of order 1024 bits or less, e.g. cyclic groups with order of the Oakley primes specified in RFC 2409.[6]The Logjam attack used this vulnerability to compromise a variety of internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade.[5]
The authors of the Logjam attack estimate that the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would be within the budget of a large nationalintelligence agencysuch as the U.S.National Security Agency(NSA). The Logjam authors speculate that precomputation against widely reused 1024 DH primes is behind claims inleaked NSA documentsthat NSA is able to break much of current cryptography.[5]
|
https://en.wikipedia.org/wiki/Discrete_logarithm
|
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logois abind runemerging theYounger Futharkrunes(ᚼ,Hagall) and(ᛒ,Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" atCOMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson modelT39that actually made it to store shelves in June 2001. However Ericsson released the R520m in Quarter 1 of 2001,[23]making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
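The channel arithmetic described above is simple enough to check directly; the small sketch below lists the centre frequencies of the 79 classic 1 MHz channels and the 40 Bluetooth Low Energy channels at 2 MHz spacing (frequencies in MHz).

    # Sketch: classic Bluetooth channel k (0..78) has centre frequency 2402 + k MHz;
    # Bluetooth Low Energy uses 40 channels at 2 MHz spacing starting at 2402 MHz.
    classic_channels = [2402 + k for k in range(79)]      # MHz
    ble_channels = [2402 + 2 * k for k in range(40)]      # MHz

    print(classic_channels[0], classic_channels[-1])      # 2402 2480
    print(ble_channels[0], ble_channels[-1])              # 2402 2480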
Originally,Gaussian frequency-shift keying(GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK(differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneousbit rateof 1Mbit/sis possible. The termEnhanced Data Rate(EDR) is used to describe π/4-DPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28] which uses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and theair and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
For example, a mobile phone and a wireless headset must both support the Headset Profile or the Hands-Free Profile for a call to be carried between them over Bluetooth.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones,speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definitionheadsets,modems,hearing aids[53]and even watches.[54]Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55]Bluetooth devices can advertise all of the services they provide.[56]This makes using services easier, because more of the security,network addressand permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While somedesktop computersand most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor,IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
ForMicrosoftplatforms,Windows XP Service Pack 2and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58]Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59]Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58]Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58]The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP,DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58]Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Appleproducts have worked with Bluetooth sinceMac OSX v10.2, which was released in 2002.[60]
Linuxhas two popularBluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed byQualcomm.[61]Fluoride, earlier known as Bluedroid is included in Android OS and was originally developed byBroadcom.[62]There is also Affix stack, developed byNokia. It was once popular, but has not been updated since 2005.[63]
FreeBSDhas included Bluetooth since its v5.0 release, implemented throughnetgraph.[64][65]
NetBSDhas included Bluetooth since its v4.0 release.[66][67]Its Bluetooth stack was ported toOpenBSDas well, however OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSDhas had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71]Anetgraph-based implementation fromFreeBSDhas also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by theBluetooth Special Interest Group(SIG) and formally announced on 20 May 1998.[74]In 2014 it had a membership of over 30,000 companies worldwide.[75]It was established byEricsson,IBM,Intel,NokiaandToshiba, and later joined by many other companies.
All versions of the Bluetooth standards arebackward-compatiblewith all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for fasterdata transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s.[79]EDR uses a combination ofGFSKandphase-shift keyingmodulation (PSK) with two variants, π/4-DQPSKand 8-DPSK.[81]EDR can provide a lower power consumption through a reducedduty cycle.
The specification is published asBluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 issecure simple pairing(SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, includingextended inquiry response(EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81]was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated802.11link.
The main new feature isAMP(Alternative MAC/PHY), the addition of802.11as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84]or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended forUWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, theWiMedia Allianceannounced it was entering into technology transfer agreements for the WiMediaUltra-wideband(UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG),Wireless USBPromoter Group and theUSB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, theBluetooth Special Interest Groupsuspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of formerWiMediamembers had not and would not sign up to the necessary agreements for theIPtransfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart) and has been adopted as of 30 June 2010[update]. It includesClassic Bluetooth,Bluetooth high speedandBluetooth Low Energy(BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95]is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by acoin cell. Chip designs allow for two types of implementation: dual-mode and single-mode, as well as enhanced versions of earlier implementations.[96]The provisional namesWibreeandBluetooth ULP(Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared toClassic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining asimilar communication range. In terms of lengthening the battery life of Bluetooth devices,BLErepresents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well the Generic Attribute Profile (GATT) and Security Manager (SM) services withAESEncryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108]it introduces features for theInternet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111]Its new features are mainly focused on newInternet of Thingstechnology. Sony was the first to announce Bluetooth 5.0 support with itsXperia XZ Premiumin Feb 2017 during the Mobile World Congress 2017.[112]The SamsungGalaxy S8launched with Bluetooth 5 support in April 2017. In September 2017, theiPhone 8, 8 Plus andiPhone Xlaunched with Bluetooth 5 support as well.Applealso integrated Bluetooth 5 in its newHomePodoffering released on 9 February 2018.[113]Marketing drops the point number; so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114]the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, forBLE, options that can double the data rate (2Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation[115]of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
Seeking to extend the compatibility of Bluetooth devices, the devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
The hardware that makes up the Bluetooth device is logically made up of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g.SBC (codec)) and data encryption. The CPU of the device is responsible for attending to the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which has the function of communicating with other devices through the LMP protocol.
A Bluetooth device is ashort-rangewirelessdevice. Bluetooth devices arefabricatedonRF CMOSintegrated circuit(RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135]Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols:HCIand RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
It also provides segmentation and reassembly of on-air packets.
InBasicmode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the defaultMTU, and 48 bytes as the minimum mandatory supported MTU.
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification: Enhanced Retransmission Mode (ERTM) and Streaming Mode (SM). These modes effectively deprecate the original Retransmission and Flow Control modes.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
TheService Discovery Protocol(SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine whichBluetooth profilesthe headset can use (Headset Profile, Hands Free Profile (HFP),Advanced Audio Distribution Profile (A2DP)etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by aUniversally unique identifier(UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
Radio Frequency Communications (RFCOMM) is a cable-replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony-related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
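Because RFCOMM behaves like a stream socket, application code using it looks much like ordinary TCP programming. A hedged Linux-only sketch using CPython's AF_BLUETOOTH support follows; the peer address and channel number are placeholders and would normally be obtained through an SDP lookup.

```python
import socket

PEER_ADDR = "00:11:22:33:44:55"   # placeholder address of the remote device
CHANNEL = 3                        # placeholder RFCOMM server channel (normally found via SDP)

sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
try:
    sock.connect((PEER_ADDR, CHANNEL))
    sock.sendall(b"AT\r")          # e.g. an AT command, as used by telephony-related profiles
    print(sock.recv(1024))
finally:
    sock.close()
```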
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also intended for use by the video distribution profile in Bluetooth transmission.
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to create protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
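As a rough illustration of the simpler of the two schemes: the 1/3-rate FEC repeats each bit three times, and the receiver decodes by majority vote, so any single bit error within a triplet is corrected (the 2/3-rate code is a shortened Hamming code and is not shown here). A minimal sketch:

```python
def fec13_encode(bits):
    """1/3-rate repetition FEC: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec13_decode(coded):
    """Majority vote over each received triplet; corrects one error per triplet."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

payload = [1, 0, 1, 1]
tx = fec13_encode(payload)
tx[4] ^= 1                      # flip one bit in the channel
assert fec13_decode(tx) == payload
```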
Any Bluetooth device in discoverable mode transmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
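The 48-bit device address is conventionally written as six colon-separated octets, with the upper portion typically carrying the manufacturer's OUI. A small sketch of that formatting (the sample value is arbitrary and purely illustrative):

```python
def format_bdaddr(addr48: int) -> str:
    """Render a 48-bit Bluetooth device address as XX:XX:XX:XX:XX:XX, most significant octet first."""
    octets = [(addr48 >> shift) & 0xFF for shift in range(40, -8, -8)]
    return ":".join(f"{o:02X}" for o in octets)

print(format_bdaddr(0x001122334455))  # 00:11:22:33:44:55
```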
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices, so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simpleXOR attacksto retrieve the encryption key.
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136] The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities and exploits was published in 2007 by Andreas Becker.[137]
In September 2008, theNational Institute of Standards and Technology(NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See thepairing mechanismssection for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full-screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141] In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142] In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment.[143] In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.[144] The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145] This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones running Symbian OS (Series 60 platform), using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149] and Avishai Wool[150] published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof them if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers, allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at the Singapore University of Technology and Design, the Helmholtz Center for Information Security, and the University of Oxford discovered a vulnerability in the key negotiation, called KNOB (Key Negotiation of Bluetooth), that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160] Google released an Android security patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range,[164] which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones.[165] UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
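These power levels are often quoted in dBm rather than milliwatts; the conversion is a one-line formula, illustrated below for the three Bluetooth classes and one of the GSM figures mentioned above.

```python
import math

def mw_to_dbm(p_mw: float) -> float:
    """Convert transmit power in milliwatts to dBm."""
    return 10 * math.log10(p_mw)

for label, p in [("Class 1", 100), ("Class 2", 2.5), ("Class 3", 1), ("GSM850/900", 2000)]:
    print(f"{label}: {p} mW = {mw_to_dbm(p):.0f} dBm")
# Class 1: 20 dBm, Class 2: 4 dBm, Class 3: 0 dBm, GSM850/900: 33 dBm
```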
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
https://en.wikipedia.org/wiki/Bluetooth#Link_keys_and_encryption_keys
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz.[3] It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones, wireless speakers, HIFI systems, car audio and wireless transmission between TVs and soundbars.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4] A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device.[5] A network of patents applies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetooth integrated circuit chips are shipped annually.[6] Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth[8] in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 12 June 1989, and SE 9202239, issued 24 July 1992. Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing.[16] Both were working for Ericsson in Lund.[17] Principal design and development began in 1994 and by 1997 the team had a workable solution.[18] From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson released the R520m in the first quarter of 2001,[23] making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet-connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24] ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz, including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top.[25] This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled.[25] Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.[26]
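A small sketch of the resulting channel maps, derived from the figures above (79 BR/EDR channels of 1 MHz starting at 2402 MHz, and 40 BLE channels at 2 MHz spacing); which channel is used at any instant is of course decided by the controller's hop sequence.

```python
# Channel centre frequencies in MHz, derived from the band layout described above.
BR_EDR_CHANNELS = [2402 + k for k in range(79)]       # 1 MHz spacing: 2402 ... 2480 MHz
BLE_CHANNELS    = [2402 + 2 * k for k in range(40)]   # 2 MHz spacing: 2402 ... 2480 MHz

HOPS_PER_SECOND = 1600
print(f"BR/EDR: {len(BR_EDR_CHANNELS)} channels, dwell time {1e6 / HOPS_PER_SECOND:.0f} µs per hop")
print(f"BLE:    {len(BLE_CHANNELS)} channels, first {BLE_CHANNELS[0]} MHz, last {BLE_CHANNELS[-1]} MHz")
```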
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe π/4-DPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3 Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 μs, two clock ticks then make up a slot of 625 μs, and two slots make up a slot pair of 1250 μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28] which uses the same spectrum but somewhat differently.
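The alternating slot structure described for BR/EDR can be expressed in a few lines; this sketch, using the stated 625 µs slot length, reports which role transmits at a given point in time for the simple single-slot case.

```python
SLOT_US = 625  # one slot = two 312.5 µs master clock ticks

def transmitting_role(time_us: float) -> str:
    """For single-slot packets: the master transmits in even-numbered slots, the slave in odd ones."""
    slot_index = int(time_us // SLOT_US)
    return "master" if slot_index % 2 == 0 else "slave"

for t in (0, 700, 1300):
    print(f"t = {t:>4} µs -> slot {int(t // SLOT_US)}, {transmitting_role(t)} transmits")
# slot 0: master, slot 1: slave, slot 2: master
```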
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device.[30] Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2] The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
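For intuition about how transmit power and receiver sensitivity translate into range under idealized conditions, the free-space path-loss formula can be inverted to estimate the maximum line-of-sight distance at which a link budget closes. The sketch below is only an upper-bound estimate: as noted above, walls, reflections and antenna orientation typically reduce real-world range well below this figure, and the power and sensitivity values used are illustrative assumptions rather than specification limits.

```python
import math

def max_free_space_range_m(tx_dbm: float, rx_sens_dbm: float,
                           freq_hz: float = 2.44e9, gains_dbi: float = 0.0) -> float:
    """Invert the free-space path-loss formula FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55."""
    allowed_loss_db = tx_dbm + gains_dbi - rx_sens_dbm
    return 10 ** ((allowed_loss_db + 147.55 - 20 * math.log10(freq_hz)) / 20)

# Illustrative figures: a 4 dBm (Class 2) transmitter and a -90 dBm receiver sensitivity.
print(f"~{max_free_space_range_m(4, -90):.0f} m in ideal free space")
```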
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high-definition headsets, modems, hearing aids[53] and even watches.[54] Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55] Bluetooth devices can advertise all of the services they provide.[56] This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58] Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59] Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58] Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58] The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third-party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58] Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.[60]
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.[64][65]
NetBSD has included Bluetooth since its v4.0 release.[66][67] Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71] A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998.[74] In 2014 it had a membership of over 30,000 companies worldwide.[75] It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible with all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s.[79] EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK.[81] EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81] was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84] or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted as of 30 June 2010[update]. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced past versions.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE, bulk data exchange rates, and aid to developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108] it introduces features for the Internet of Things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation[115] of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as SDP (the protocol used to find other Bluetooth devices within communication range, also responsible for detecting the functions of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through L2CAP (the Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets.
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof them, provided the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police inCambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers fromSecure NetworkandF-Securepublished a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, includingMicrosoft Windows,Linux, AppleiOS, and GoogleAndroid. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann andEli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at theSingapore University of Technology and Design, Helmholtz Center for Information Security, andUniversity of Oxforddiscovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160]Google released anAndroidsecurity patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers fromEurecomrevealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses theradio frequencyspectrum in the 2.402GHz to 2.480GHz range,[164]which isnon-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included byIARCin the possiblecarcinogenlist. Maximum power output from a Bluetooth radio is 100mWfor Class1, 2.5mW for Class2, and 1mW for Class3 devices. Even the maximum power output of Class1 is a lower level than the lowest-powered mobile phones.[165]UMTSandW-CDMAoutput 250mW,GSM1800/1900outputs 1000mW, andGSM850/900outputs 2000mW.
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Pairing
|
Innumber theory,Euler's totient functioncounts the positive integers up to a given integernthat arerelatively primeton. It is written using the Greek letterphiasφ(n){\displaystyle \varphi (n)}orϕ(n){\displaystyle \phi (n)}, and may also be calledEuler's phi function. In other words, it is the number of integerskin the range1 ≤k≤nfor which thegreatest common divisorgcd(n,k)is equal to 1.[2][3]The integerskof this form are sometimes referred to astotativesofn.
For example, the totatives ofn= 9are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, sincegcd(9, 3) = gcd(9, 6) = 3andgcd(9, 9) = 9. Therefore,φ(9) = 6. As another example,φ(1) = 1since forn= 1the only integer in the range from 1 tonis 1 itself, andgcd(1, 1) = 1.
Euler's totient function is amultiplicative function, meaning that if two numbersmandnare relatively prime, thenφ(mn) =φ(m)φ(n).[4][5]This function gives theorderof themultiplicative group of integers modulon(thegroupofunitsof theringZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }).[6]It is also used for defining theRSA encryption system.
Leonhard Eulerintroduced the function in 1763.[7][8][9]However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letterπto denote it: he wroteπDfor "the multitude of numbers less thanD, and which have no common divisor with it".[10]This definition varies from the current definition for the totient function atD= 1but is otherwise the same. The now-standard notation[8][11]φ(A)comes fromGauss's 1801 treatiseDisquisitiones Arithmeticae,[12][13]although Gauss did not use parentheses around the argument and wroteφA. Thus, it is often calledEuler's phi functionor simply thephi function.
In 1879,J. J. Sylvestercoined the termtotientfor this function,[14][15]so it is also referred to asEuler's totient function, theEuler totient, orEuler's totient.[16]Jordan's totientis a generalization of Euler's.
Thecototientofnis defined asn−φ(n). It counts the number of positive integers less than or equal tonthat have at least oneprime factorin common withn.
There are several formulae for computingφ(n).
Euler's product formula states that{\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right),}
where the product is over the distinctprime numbersdividingn. (For notation, seeArithmetical function.)
An equivalent formulation isφ(n)=p1k1−1(p1−1)p2k2−1(p2−1)⋯prkr−1(pr−1),{\displaystyle \varphi (n)=p_{1}^{k_{1}-1}(p_{1}{-}1)\,p_{2}^{k_{2}-1}(p_{2}{-}1)\cdots p_{r}^{k_{r}-1}(p_{r}{-}1),}wheren=p1k1p2k2⋯prkr{\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}}is theprime factorizationofn{\displaystyle n}(that is,p1,p2,…,pr{\displaystyle p_{1},p_{2},\ldots ,p_{r}}are distinct prime numbers).
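As a concrete illustration of the product formula, here is a minimal Python sketch (the function name and structure are my own, not taken from any particular library) that factors n by trial division and applies the factor (1 − 1/p) once for each distinct prime p:

```python
def totient(n: int) -> int:
    """Euler's totient via the product formula, using trial-division factorization.

    A minimal sketch for small n; not optimized for large inputs.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    result = n
    remaining = n
    p = 2
    while p * p <= remaining:
        if remaining % p == 0:
            result -= result // p          # multiply result by (1 - 1/p)
            while remaining % p == 0:      # strip all copies of p
                remaining //= p
        p += 1
    if remaining > 1:                      # a single prime factor may remain
        result -= result // remaining
    return result


assert totient(9) == 6 and totient(20) == 8 and totient(1) == 1
```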
The proof of these formulae depends on two important facts.
This means that ifgcd(m,n) = 1, thenφ(m)φ(n) =φ(mn).Proof outline:LetA,B,Cbe the sets of positive integers which arecoprimeto and less thanm,n,mn, respectively, so that|A| =φ(m), etc. Then there is abijectionbetweenA×BandCby theChinese remainder theorem.
Ifpis prime andk≥ 1, then{\displaystyle \varphi (p^{k})=p^{k}-p^{k-1}=p^{k-1}(p-1).}
Proof: Sincepis a prime number, the only possible values ofgcd(pk,m)are1,p,p2, ...,pk, and the only way to havegcd(pk,m) > 1is ifmis a multiple ofp, that is,m∈ {p, 2p, 3p, ...,pk− 1p=pk}, and there arepk− 1such multiples not greater thanpk. Therefore, the otherpk−pk− 1numbers are all relatively prime topk.
Thefundamental theorem of arithmeticstates that ifn> 1there is a unique expressionn=p1k1p2k2⋯prkr,{\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}},}wherep1<p2< ... <prareprime numbersand eachki≥ 1. (The casen= 1corresponds to the empty product.) Repeatedly using the multiplicative property ofφand the formula forφ(pk)gives{\displaystyle \varphi (n)=\varphi (p_{1}^{k_{1}})\,\varphi (p_{2}^{k_{2}})\cdots \varphi (p_{r}^{k_{r}})=p_{1}^{k_{1}-1}(p_{1}{-}1)\,p_{2}^{k_{2}-1}(p_{2}{-}1)\cdots p_{r}^{k_{r}-1}(p_{r}{-}1)=n\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right).}
This gives both versions of Euler's product formula.
An alternative proof that does not require the multiplicative property instead uses theinclusion-exclusion principleapplied to the set{1,2,…,n}{\displaystyle \{1,2,\ldots ,n\}}, excluding the sets of integers divisible by the prime divisors.
For example,{\displaystyle \varphi (20)=20\left(1-{\tfrac {1}{2}}\right)\left(1-{\tfrac {1}{5}}\right)=8.}In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19.
The alternative formula uses only integers:φ(20)=φ(2251)=22−1(2−1)51−1(5−1)=2⋅1⋅1⋅4=8.{\displaystyle \varphi (20)=\varphi (2^{2}5^{1})=2^{2-1}(2{-}1)\,5^{1-1}(5{-}1)=2\cdot 1\cdot 1\cdot 4=8.}
The totient is thediscrete Fourier transformof thegcd, evaluated at 1.[17]Let{\displaystyle {\mathcal {F}}\{x\}[m]=\sum _{k=1}^{n}x_{k}e^{-2\pi i{\frac {mk}{n}}},}
wherexk= gcd(k,n)fork∈ {1, ...,n}. Then{\displaystyle \varphi (n)={\mathcal {F}}\{x\}[1]=\sum _{k=1}^{n}\gcd(k,n)\,e^{-2\pi i{\frac {k}{n}}}.}
The real part of this formula is{\displaystyle \varphi (n)=\sum _{k=1}^{n}\gcd(k,n)\cos {\frac {2\pi k}{n}}.}
For example, usingcosπ5=5+14{\displaystyle \cos {\tfrac {\pi }{5}}={\tfrac {{\sqrt {5}}+1}{4}}}andcos2π5=5−14{\displaystyle \cos {\tfrac {2\pi }{5}}={\tfrac {{\sqrt {5}}-1}{4}}}:φ(10)=gcd(1,10)cos2π10+gcd(2,10)cos4π10+gcd(3,10)cos6π10+⋯+gcd(10,10)cos20π10=1⋅(5+14)+2⋅(5−14)+1⋅(−5−14)+2⋅(−5+14)+5⋅(−1)+2⋅(−5+14)+1⋅(−5−14)+2⋅(5−14)+1⋅(5+14)+10⋅(1)=4.{\displaystyle {\begin{array}{rcl}\varphi (10)&=&\gcd(1,10)\cos {\tfrac {2\pi }{10}}+\gcd(2,10)\cos {\tfrac {4\pi }{10}}+\gcd(3,10)\cos {\tfrac {6\pi }{10}}+\cdots +\gcd(10,10)\cos {\tfrac {20\pi }{10}}\\&=&1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+5\cdot (-1)\\&&+\ 2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+10\cdot (1)\\&=&4.\end{array}}}Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors ofn. However, it does involve the calculation of the greatest common divisor ofnand every positive integer less thann, which suffices to provide the factorization anyway.
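The identity above is easy to check numerically. The following sketch (helper name is my own, for illustration) sums gcd(k, n)·cos(2πk/n) and rounds the floating-point result, which should reproduce φ(n):

```python
from math import cos, gcd, pi


def totient_via_gcd_cosine(n: int) -> int:
    """Evaluate sum_{k=1..n} gcd(k, n) * cos(2*pi*k/n); the exact value is phi(n)."""
    total = sum(gcd(k, n) * cos(2 * pi * k / n) for k in range(1, n + 1))
    return round(total)  # round away floating-point error


print(totient_via_gcd_cosine(10))  # 4, matching the worked example above
print(totient_via_gcd_cosine(20))  # 8
```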
The property established by Gauss,[18]that{\displaystyle \sum _{d\mid n}\varphi (d)=n,}
where the sum is over all positive divisorsdofn, can be proven in several ways. (SeeArithmetical functionfor notational conventions.)
One proof is to note thatφ(d)is also equal to the number of possible generators of thecyclic groupCd; specifically, ifCd= ⟨g⟩withgd= 1, thengkis a generator for everykcoprime tod. Since every element ofCngenerates a cyclicsubgroup, and each subgroupCd⊆Cnis generated by preciselyφ(d)elements ofCn, the formula follows.[19]Equivalently, the formula can be derived by the same argument applied to themultiplicative group of thenth roots of unityand theprimitivedth roots of unity.
The formula can also be derived from elementary arithmetic.[20]For example, letn= 20and consider the positive fractions up to 1 with denominator 20: 1/20, 2/20, 3/20, 4/20, 5/20, 6/20, 7/20, 8/20, 9/20, 10/20, 11/20, 12/20, 13/20, 14/20, 15/20, 16/20, 17/20, 18/20, 19/20, 20/20.
Put them into lowest terms: 1/20, 1/10, 3/20, 1/5, 1/4, 3/10, 7/20, 2/5, 9/20, 1/2, 11/20, 3/5, 13/20, 7/10, 3/4, 4/5, 17/20, 9/10, 19/20, 1/1.
These twenty fractions are all the positivek/d≤ 1 whose denominators are the divisorsd= 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely1/20,3/20,7/20,9/20,11/20,13/20,17/20,19/20; by definition this isφ(20)fractions. Similarly, there areφ(10)fractions with denominator 10, andφ(5)fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of sizeφ(d)for eachddividing 20. A similar argument applies for anyn.
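The divisor-sum identity can also be verified directly by computation; a small sketch (naive totient by counting coprime residues, helper names of my own choosing) follows:

```python
from math import gcd


def phi(n: int) -> int:
    """Naive totient: count 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)


# Gauss's identity: the totients of the divisors of n sum to n itself.
for n in (20, 36, 100):
    divisor_totients = [phi(d) for d in range(1, n + 1) if n % d == 0]
    assert sum(divisor_totients) == n
print("sum over d | n of phi(d) equals n, checked for n = 20, 36, 100")
```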
Möbius inversionapplied to the divisor sum formula gives{\displaystyle \varphi (n)=\sum _{d\mid n}\mu (d){\frac {n}{d}},}
whereμis theMöbius function, themultiplicative functiondefined byμ(p)=−1{\displaystyle \mu (p)=-1}andμ(pk)=0{\displaystyle \mu (p^{k})=0}for each primepandk≥ 2. This formula may also be derived from the product formula by multiplying out∏p∣n(1−1p){\textstyle \prod _{p\mid n}(1-{\frac {1}{p}})}to get∑d∣nμ(d)d.{\textstyle \sum _{d\mid n}{\frac {\mu (d)}{d}}.}
An example:φ(20)=μ(1)⋅20+μ(2)⋅10+μ(4)⋅5+μ(5)⋅4+μ(10)⋅2+μ(20)⋅1=1⋅20−1⋅10+0⋅5−1⋅4+1⋅2+0⋅1=8.{\displaystyle {\begin{aligned}\varphi (20)&=\mu (1)\cdot 20+\mu (2)\cdot 10+\mu (4)\cdot 5+\mu (5)\cdot 4+\mu (10)\cdot 2+\mu (20)\cdot 1\\[.5em]&=1\cdot 20-1\cdot 10+0\cdot 5-1\cdot 4+1\cdot 2+0\cdot 1=8.\end{aligned}}}
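A direct translation of the Möbius-inversion formula into code might look like the following sketch (trial-division Möbius function; the names are illustrative, not from any library):

```python
def mobius(n: int) -> int:
    """Return 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:    # squared prime factor
                return 0
            result = -result
        p += 1
    if n > 1:                 # one prime factor left over
        result = -result
    return result


def phi_by_mobius(n: int) -> int:
    """phi(n) = sum over divisors d of n of mu(d) * (n / d)."""
    return sum(mobius(d) * (n // d) for d in range(1, n + 1) if n % d == 0)


assert phi_by_mobius(20) == 8   # matches the worked example above
```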
The first 100 values (sequenceA000010in theOEIS) are shown in the table and graph below:
In the graph at right the top liney=n− 1is anupper boundvalid for allnother than one, and attained if and only ifnis a prime number. A simple lower bound isφ(n)≥n/2{\displaystyle \varphi (n)\geq {\sqrt {n/2}}}, which is rather loose: in fact, thelower limitof the graph is proportional ton/log logn.[21]
Euler's theoremstates that ifaandnarerelatively primethen{\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}}.}
The special case wherenis prime is known asFermat's little theorem.
This follows fromLagrange's theoremand the fact thatφ(n)is theorderof themultiplicative group of integers modulon.
TheRSA cryptosystemis based on this theorem: it implies that theinverseof the functiona↦aemodn, whereeis the (public) encryption exponent, is the functionb↦bdmodn, whered, the (private) decryption exponent, is themultiplicative inverseofemoduloφ(n). The difficulty of computingφ(n)without knowing the factorization ofnis thus the difficulty of computingd: this is known as theRSA problemwhich can be solved by factoringn. The owner of the private key knows the factorization, since an RSA private key is constructed by choosingnas the product of two (randomly chosen) large primespandq. Onlynis publicly disclosed, and given thedifficulty to factor large numberswe have the guarantee that no one else knows the factorization.
In particular:
Compare this to the formulalcm(m,n)⋅gcd(m,n)=m⋅n{\textstyle \operatorname {lcm} (m,n)\cdot \operatorname {gcd} (m,n)=m\cdot n}(seeleast common multiple).
Moreover, ifnhasrdistinct odd prime factors, then{\displaystyle 2^{r}\mid \varphi (n)}.
{\displaystyle {\frac {\varphi (n)}{n}}={\frac {\varphi (\operatorname {rad} (n))}{\operatorname {rad} (n)}},}whererad(n)is theradical ofn(the product of all distinct primes dividingn).
(whereγis theEuler–Mascheroni constant).
In 1965 P. Kesava Menon proved{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\gcd(k-1,n)=\varphi (n)d(n),}
whered(n) =σ0(n)is the number of divisors ofn.
The following property, which is part of the « folklore » (i.e., apparently unpublished as a specific result:[26]see the introduction of this article in which it is stated as having « long been known ») has important consequences. For instance it rules out uniform distribution of the values ofφ(n){\displaystyle \varphi (n)}in the arithmetic progressions moduloq{\displaystyle q}for any integerq>1{\displaystyle q>1}.
This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 moduloq{\displaystyle q}diverges, which itself is a corollary of the proof ofDirichlet's theorem on arithmetic progressions.
TheDirichlet seriesforφ(n)may be written in terms of theRiemann zeta functionas:[27]
where the left-hand side converges forℜ(s)>2{\displaystyle \Re (s)>2}.
TheLambert seriesgenerating function is[28]
which converges for|q| < 1.
Both of these are proved by elementary series manipulations and the formulae forφ(n).
In the words of Hardy & Wright, the order ofφ(n)is "always 'nearlyn'."[29]
First[30]{\displaystyle \limsup _{n\to \infty }{\frac {\varphi (n)}{n}}=1,}
but asngoes to infinity,[31]for allδ> 0{\displaystyle {\frac {\varphi (n)}{n^{1-\delta }}}\to \infty .}
These two formulae can be proved by using little more than the formulae forφ(n)and thedivisor sum functionσ(n).
In fact, during the proof of the second formula, the inequality
true forn> 1, is proved.
We also have[21]{\displaystyle \liminf _{n\to \infty }{\frac {\varphi (n)}{n}}\log \log n=e^{-\gamma }.}
HereγisEuler's constant,γ= 0.577215665..., soeγ= 1.7810724...ande−γ= 0.56145948....
Proving this does not quite require theprime number theorem.[32][33]Sincelog logngoes to infinity, this formula shows that{\displaystyle \liminf _{n\to \infty }{\frac {\varphi (n)}{n}}=0.}
In fact, more is true.[34][35][36]
and
The second inequality was shown byJean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that theRiemann hypothesisis true, secondly under the contrary assumption."[36]: 173
For the average order, we have[23][37]{\displaystyle \varphi (1)+\varphi (2)+\cdots +\varphi (n)={\frac {3n^{2}}{\pi ^{2}}}+O\left(n(\log n)^{2/3}(\log \log n)^{4/3}\right)\quad (n\to \infty ),}
due toArnold Walfisz, its proof exploiting estimates on exponential sums due toI. M. VinogradovandN. M. Korobov.
By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu (On Euler's function.Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775) improved the error term to{\displaystyle O\left(n(\log n)^{2/3}(\log \log n)^{1/3}\right)}
(this is currently the best known estimate of this type). The"BigO"stands for a quantity that is bounded by a constant times the function ofninside the parentheses (which is small compared ton2).
This result can be used to prove[38]thatthe probability of two randomly chosen numbers being relatively primeis6/π2.
In 1950 Somayajulu proved[39][40]{\displaystyle \liminf _{n\to \infty }{\frac {\varphi (n+1)}{\varphi (n)}}=0\quad {\text{and}}\quad \limsup _{n\to \infty }{\frac {\varphi (n+1)}{\varphi (n)}}=\infty .}
In 1954SchinzelandSierpińskistrengthened this, proving[39][40]that the set{\displaystyle \left\{{\frac {\varphi (n+1)}{\varphi (n)}}:n=1,2,\ldots \right\}}
isdensein the positive real numbers. They also proved[39]that the set{\displaystyle \left\{{\frac {\varphi (n)}{n}}:n=1,2,\ldots \right\}}
is dense in the interval (0,1).
Atotient numberis a value of Euler's totient function: that is, anmfor which there is at least onenfor whichφ(n) =m. Thevalencyormultiplicityof a totient numbermis the number of solutions to this equation.[41]Anontotientis a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients,[42]and indeed every positive integer has a multiple which is an even nontotient.[43]
The number of totient numbers up to a given limitxis{\displaystyle {\frac {x}{\log x}}e^{{\bigl (}C+o(1){\bigr )}(\log \log \log x)^{2}}}
for a constantC= 0.8178146....[44]
If counted according to multiplicity, the number of totient numbers up to a given limitxis{\displaystyle {\frac {\zeta (2)\zeta (3)}{\zeta (6)}}\cdot x+R(x),}
where the error termRis of order at mostx/(logx)kfor any positivek.[45]
It is known that the multiplicity ofmexceedsmδinfinitely often for anyδ< 0.55655.[46][47]
Ford (1999)proved that for every integerk≥ 2there is a totient numbermof multiplicityk: that is, for which the equationφ(n) =mhas exactlyksolutions; this result had previously been conjectured byWacław Sierpiński,[48]and it had been obtained as a consequence ofSchinzel's hypothesis H.[44]Indeed, each multiplicity that occurs does so infinitely often.[44][47]
However, no numbermis known with multiplicityk= 1.Carmichael's totient function conjectureis the statement that there is no suchm.[49]
A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a numbern, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equalsn, thennis a perfect totient number.
In the last section of theDisquisitiones[50][51]Gauss proves[52]that a regularn-gon can be constructed with straightedge and compass ifφ(n)is a power of 2. Ifnis a power of an odd prime number the formula for the totient says its totient can be a power of two only ifnis a first power andn− 1is a power of 2. The primes that are one more than a power of 2 are calledFermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
Thus, a regularn-gon has a straightedge-and-compass construction ifnis a product of distinct Fermat primes and any power of 2. The first few suchnare[53]2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, …
Setting up an RSA system involves choosing large prime numberspandq, computingn=pqandk=φ(n), and finding two numberseanddsuch thated≡ 1 (modk). The numbersnande(the "encryption key") are released to the public, andd(the "decryption key") is kept private.
A message, represented by an integerm, where0 <m<n, is encrypted by computingS=me(modn).
It is decrypted by computingt=Sd(modn). Euler's Theorem can be used to show that if0 <t<n, thent=m.
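A toy walk-through of this procedure in Python, using deliberately tiny (and therefore completely insecure) textbook primes, might look as follows; the variable names mirror the description above and are illustrative only:

```python
# Toy RSA example for illustration only; real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                  # 3233, released to the public
k = (p - 1) * (q - 1)      # phi(n) = 3120, kept secret
e = 17                     # public encryption exponent, chosen coprime to k
d = pow(e, -1, k)          # private decryption exponent, so that e*d ≡ 1 (mod k)

m = 65                     # message, with 0 < m < n
S = pow(m, e, n)           # encryption: S = m^e mod n
t = pow(S, d, n)           # decryption: t = S^d mod n

assert t == m              # Euler's theorem guarantees the original message returns
print(f"n={n}, e={e}, d={d}, ciphertext={S}, decrypted={t}")
```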
The security of an RSA system would be compromised if the numberncould be efficiently factored or ifφ(n)could be efficiently computed without factoringn.
Ifpis prime, thenφ(p) =p− 1. In 1932D. H. Lehmerasked if there are any composite numbersnsuch thatφ(n)dividesn− 1. None are known.[54]
In 1933 he proved that if any suchnexists, it must be odd, square-free, and divisible by at least seven primes (i.e.ω(n) ≥ 7). In 1980 Cohen and Hagis proved thatn> 1020and thatω(n) ≥ 14.[55]Further, Hagis showed that if 3 dividesnthenn> 101937042andω(n) ≥ 298848.[56][57]
This states that there is no numbernwith the property that for all other numbersm,m≠n,φ(m) ≠φ(n). SeeFord's theoremabove.
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10.[41]
TheRiemann hypothesisis true if and only if the inequality
is true for alln≥p120569#whereγisEuler's constantandp120569#is theproduct of the first120569primes.[58]
TheDisquisitiones Arithmeticaehas been translated from Latin into English and German. The German edition includes all of Gauss's papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
References to theDisquisitionesare of the form Gauss, DA, art.nnn.
|
https://en.wikipedia.org/wiki/Euler%27s_totient_function
|
Aprime number(or aprime) is anatural numbergreater than 1 that is not aproductof two smaller natural numbers. A natural number greater than 1 that is not prime is called acomposite number. For example, 5 is prime because the only ways of writing it as a product,1 × 5or5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central innumber theorybecause of thefundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can befactorizedas a product of primes that is uniqueup totheir order.
The property of being prime is calledprimality. A simple but slowmethod of checking the primalityof a given numbern{\displaystyle n}, calledtrial division, tests whethern{\displaystyle n}is a multiple of any integer between 2 andn{\displaystyle {\sqrt {n}}}. Faster algorithms include theMiller–Rabin primality test, which is fast but has a small chance of error, and theAKS primality test, which always produces the correct answer inpolynomial timebut is too slow to be practical. Particularly fast methods are available for numbers of special forms, such asMersenne numbers. As of October 2024, thelargest known prime numberis a Mersenne prime with 41,024,320decimal digits.[1][2]
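A minimal sketch of trial division in Python (illustrative only, and far too slow for numbers of cryptographic size):

```python
def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime if no integer d with 2 <= d <= sqrt(n) divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```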
There areinfinitely manyprimes, asdemonstrated by Euclidaround 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is theprime number theorem, proven at the end of the 19th century, which says roughly that theprobabilityof a randomly chosen large number being prime is inverselyproportionalto its number of digits, that is, to itslogarithm.
Several historical questions regarding prime numbers are still unsolved. These includeGoldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and thetwin primeconjecture, that there are infinitely many pairs of primes that differ by two. Such questions spurred the development of various branches of number theory, focusing onanalyticoralgebraicaspects of numbers. Primes are used in several routines ininformation technology, such aspublic-key cryptography, which relies on the difficulty offactoringlarge numbers into their prime factors. Inabstract algebra, objects that behave in a generalized way like prime numbers includeprime elementsandprime ideals.
Anatural number(1, 2, 3, 4, 5, 6, etc.) is called aprime number(or aprime) if it is greater than 1 and cannot be written as the product of two smaller natural numbers. The numbers greater than 1 that are not prime are calledcomposite numbers.[3]In other words,n{\displaystyle n}is prime ifn{\displaystyle n}items cannot be divided up into smaller equal-size groups of more than one item,[4]or if it is not possible to arrangen{\displaystyle n}dots into a rectangular grid that is more than one dot wide and more than one dot high.[5]For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers,[6]as there are no other numbers that divide them evenly (without a remainder). 1 is not prime, as it is specifically excluded in the definition.4 = 2 × 2and6 = 2 × 3are both composite.
Thedivisorsof a natural numbern{\displaystyle n}are the natural numbers that dividen{\displaystyle n}evenly. Every natural number has both 1 and itself as a divisor. If it has any other divisor, it cannot be prime. This leads to an equivalent definition of prime numbers: they are the numbers with exactly two positivedivisors. Those two are 1 and the number itself. As 1 has only one divisor, itself, it is not prime by this definition.[7]Yet another way to express the same thing is that a numbern{\displaystyle n}is prime if it is greater than one and if none of the numbers2,3,…,n−1{\displaystyle 2,3,\dots ,n-1}dividesn{\displaystyle n}evenly.[8]
The first 25 prime numbers (all the prime numbers less than 100) are:[9]2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97.
Noeven numbern{\displaystyle n}greater than 2 is prime because any such number can be expressed as the product2×n/2{\displaystyle 2\times n/2}. Therefore, every prime number other than 2 is anodd number, and is called anodd prime.[10]Similarly, when written in the usualdecimalsystem, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5.[11]
Thesetof all primes is sometimes denoted byP{\displaystyle \mathbf {P} }(aboldfacecapital P)[12]or byP{\displaystyle \mathbb {P} }(ablackboard boldcapital P).[13]
TheRhind Mathematical Papyrus, from around 1550 BC, hasEgyptian fractionexpansions of different forms for prime and composite numbers.[14]However, the earliest surviving records of the study of prime numbers come from theancient Greek mathematicians, who called themprōtos arithmòs(πρῶτος ἀριθμὸς).Euclid'sElements(c. 300 BC) proves theinfinitude of primesand thefundamental theorem of arithmetic, and shows how to construct aperfect numberfrom aMersenne prime.[15]Another Greek invention, theSieve of Eratosthenes, is still used to construct lists ofprimes.[16][17]
Around 1000 AD, theIslamicmathematicianIbn al-Haytham(Alhazen) foundWilson's theorem, characterizing the prime numbers as the numbersn{\displaystyle n}that evenly divide(n−1)!+1{\displaystyle (n-1)!+1}. He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it.[18]Another Islamic mathematician,Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by considering only the prime divisors up to the square root of the upper limit.[17]Fibonaccitook the innovations from Islamic mathematics to Europe. His bookLiber Abaci(1202) was the first to describetrial divisionfor testing primality, again using divisors only up to the square root.[17]
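The sieve of Eratosthenes mentioned above, including the observation that only primes up to the square root of the limit need to be sieved with, can be sketched as follows (illustrative code, not tied to any particular historical formulation):

```python
def sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes: cross off multiples of each prime p with p*p <= limit."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    p = 2
    while p * p <= limit:
        if flags[p]:
            for multiple in range(p * p, limit + 1, p):
                flags[multiple] = False
        p += 1
    return [n for n, is_p in enumerate(flags) if is_p]


print(sieve(100))   # the 25 primes below 100
```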
In 1640Pierre de Fermatstated (without proof)Fermat's little theorem(later proved byLeibnizandEuler).[19]Fermat also investigated the primality of theFermat numbers22n+1{\displaystyle 2^{2^{n}}+1},[20]andMarin Mersennestudied theMersenne primes, prime numbers of the form2p−1{\displaystyle 2^{p}-1}withp{\displaystyle p}itself a prime.[21]Christian GoldbachformulatedGoldbach's conjecture, that every even number is the sum of two primes, in a 1742 letter to Euler.[22]Euler proved Alhazen's conjecture (now theEuclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes.[15]He introduced methods frommathematical analysisto this area in his proofs of the infinitude of the primes and thedivergence of the sum of the reciprocals of the primes12+13+15+17+111+⋯{\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{5}}+{\tfrac {1}{7}}+{\tfrac {1}{11}}+\cdots }.[23]At the start of the 19th century, Legendre and Gauss conjectured that asx{\displaystyle x}tends to infinity, the number of primes up tox{\displaystyle x}isasymptotictox/logx{\displaystyle x/\log x}, wherelogx{\displaystyle \log x}is thenatural logarithmofx{\displaystyle x}. A weaker consequence of this high density of primes wasBertrand's postulate, that for everyn>1{\displaystyle n>1}there is a prime betweenn{\displaystyle n}and2n{\displaystyle 2n}, proved in 1852 byPafnuty Chebyshev.[24]Ideas ofBernhard Riemannin his1859 paper on the zeta-functionsketched an outline for proving the conjecture of Legendre and Gauss. Although the closely relatedRiemann hypothesisremains unproven, Riemann's outline was completed in 1896 byHadamardandde la Vallée Poussin, and the result is now known as theprime number theorem.[25]Another important 19th century result wasDirichlet's theorem on arithmetic progressions, that certainarithmetic progressionscontain infinitely many primes.[26]
Many mathematicians have worked onprimality testsfor numbers larger than those where trial division is practicably applicable. Methods that are restricted to specific number forms includePépin's testfor Fermat numbers (1877),[27]Proth's theorem(c. 1878),[28]theLucas–Lehmer primality test(originated 1856), and the generalizedLucas primality test.[17]
Since 1951 all thelargest known primeshave been found using these tests oncomputers.[a]The search for ever larger primes has generated interest outside mathematical circles, through theGreat Internet Mersenne Prime Searchand otherdistributed computingprojects.[9][30]The idea that prime numbers had few applications outside ofpure mathematics[b]was shattered in the 1970s whenpublic-key cryptographyand theRSAcryptosystem were invented, using prime numbers as their basis.[33]
The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form.[16][34][35]The mathematical theory of prime numbers also moved forward with theGreen–Tao theorem(2004) that there are arbitrarily long arithmetic progressions of prime numbers, andYitang Zhang's 2013 proof that there exist infinitely manyprime gapsof bounded size.[36]
Most early Greeks did not even consider 1 to be a number,[37][38]so they could not consider its primality. A few scholars in the Greek and later Roman tradition, includingNicomachus,Iamblichus,Boethius, andCassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider2{\displaystyle 2}to be prime either. However, Euclid and a majority of the other Greek mathematicians considered2{\displaystyle 2}as prime. Themedieval Islamic mathematicianslargely followed the Greeks in viewing 1 as not being a number.[37]By the Middle Ages and Renaissance, mathematicians began treating 1 as a number, and by the 17th century some of them included it as the first prime number.[39]In the mid-18th century,Christian Goldbachlisted 1 as prime in his correspondence withLeonhard Euler;[40]however, Euler himself did not consider 1 to be prime.[41]Many 19th century mathematicians still considered 1 to be prime,[42]andDerrick Norman Lehmerincluded 1 in hislist of primes less than ten millionpublished in 1914.[43]Lists of primes that included 1 continued to be published as recentlyas 1956.[44][45]However, by the early 20th century mathematicians began to agree that 1 should not be listed as prime, but rather in its own special category as a "unit".[42]
If 1 were to be considered a prime, many statements involving primes would need to be awkwardly reworded. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with any number of copies of 1.[42]Similarly, thesieve of Eratostheneswould not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1.[45]Some other more technical properties of prime numbers also do not hold for the number 1: for instance, the formulas forEuler's totient functionor for thesum of divisors functionare different for prime numbers than they are for 1.[46]
Writing a number as a product of prime numbers is called aprime factorizationof the number. For example:{\displaystyle 50=2\cdot 5\cdot 5=2\cdot 5^{2}.}
The terms in the product are calledprime factors. The same prime factor may occur more than once; this example has two copies of the prime factor5.{\displaystyle 5.}When a prime occurs multiple times,exponentiationcan be used to group together multiple copies of the same prime number: for example, in the second way of writing the product above,52{\displaystyle 5^{2}}denotes thesquareor second power of5{\displaystyle 5}.
The central importance of prime numbers to number theory and mathematics in general stems from thefundamental theorem of arithmetic.[47]This theorem states that every integer larger than 1 can be written as a product of one or more primes. More strongly, this product is unique in the sense that any two prime factorizations of the same number will have the same numbers of copies of the same primes, although their ordering may differ.[48]So, although there are many different ways of finding a factorization using aninteger factorizationalgorithm, they all must produce the same result. Primes can thus be considered the "basic building blocks" of the natural numbers.[49]
Some proofs of the uniqueness of prime factorizations are based onEuclid's lemma: Ifp{\displaystyle p}is a prime number andp{\displaystyle p}divides a productab{\displaystyle ab}of integersa{\displaystyle a}andb,{\displaystyle b,}thenp{\displaystyle p}dividesa{\displaystyle a}orp{\displaystyle p}dividesb{\displaystyle b}(or both).[50]Conversely, if a numberp{\displaystyle p}has the property that when it divides a product it always divides at least one factor of the product, thenp{\displaystyle p}must be prime.[51]
There areinfinitelymany prime numbers. Another way of saying this is that the sequence 2, 3, 5, 7, 11, 13, …
of prime numbers never ends. This statement is referred to asEuclid's theoremin honor of the ancient Greek mathematicianEuclid, since the first known proof for this statement is attributed to him. Many more proofs of the infinitude of primes are known, including ananalyticalproof byEuler,Goldbach'sproofbased onFermat numbers,[52]Furstenberg'sproof using general topology,[53]andKummer'selegant proof.[54]
Euclid's proof[55]shows that everyfinite listof primes is incomplete. The key idea is to multiply together the primes in any given list and add1.{\displaystyle 1.}If the list consists of the primesp1,p2,…,pn,{\displaystyle p_{1},p_{2},\ldots ,p_{n},}this gives the number{\displaystyle N=1+p_{1}\cdot p_{2}\cdots p_{n}.}
By the fundamental theorem,N{\displaystyle N}has a prime factorization
with one or more prime factors.N{\displaystyle N}is evenly divisible by each of these factors, butN{\displaystyle N}has a remainder of one when divided by any of the prime numbers in the given list, so none of the prime factors ofN{\displaystyle N}can be in the given list. Because there is no finite list of all the primes, there must be infinitely many primes.
The numbers formed by adding one to the products of the smallest primes are calledEuclid numbers.[56]The first five of them are prime, but the sixth,{\displaystyle 1+{\bigl (}2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13{\bigr )}=30031=59\cdot 509,}
is a composite number.
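A short sketch that generates the first few Euclid numbers (helper code of my own devising) makes the pattern concrete:

```python
def euclid_numbers(count: int) -> list[int]:
    """Return 1 + (product of the first k primes) for k = 1..count."""
    primes: list[int] = []
    candidate = 1
    while len(primes) < count:
        candidate += 1
        if all(candidate % p != 0 for p in primes):   # no smaller prime divides it
            primes.append(candidate)
    result, product = [], 1
    for p in primes:
        product *= p
        result.append(product + 1)
    return result


print(euclid_numbers(6))
# [3, 7, 31, 211, 2311, 30031]; the first five are prime, but 30031 = 59 * 509
```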
There is no known efficient formula for primes. For example, there is no non-constantpolynomial, even in several variables, that takesonlyprime values.[57]However, there are numerous expressions that do encode all primes, or only primes. One possible formula is based onWilson's theoremand generates the number 2 many times and all other primes exactly once.[58]There is also a set ofDiophantine equationsin nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. This can be used to obtain a single formula with the property that all itspositivevalues are prime.[57]
Other examples of prime-generating formulas come fromMills' theoremand a theorem ofWright. These assert that there are real constantsA>1{\displaystyle A>1}andμ{\displaystyle \mu }such that
are prime for any natural numbern{\displaystyle n}in the first formula, and any number of exponents in the second formula.[59]Here⌊⋅⌋{\displaystyle \lfloor {}\cdot {}\rfloor }represents thefloor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values ofA{\displaystyle A}orμ.{\displaystyle \mu .}[57]
Many conjectures revolving around primes have been posed. Often having an elementary formulation, many of these conjectures have withstood proof for decades: all four ofLandau's problemsfrom 1912 are still unsolved.[60]One of them isGoldbach's conjecture, which asserts that every even integern{\displaystyle n}greater than2{\displaystyle 2}can be written as a sum of two primes.[61]As of 2014, this conjecture has been verified for all numbers up ton=4⋅1018.{\displaystyle n=4\cdot 10^{18}.}[62]Weaker statements than this have been proven; for example,Vinogradov's theoremsays that every sufficiently large odd integer can be written as a sum of three primes.[63]Chen's theoremsays that every sufficiently large even number can be expressed as the sum of a prime and asemiprime(the product of two primes).[64]Also, any even integer greater than 10 can be written as the sum of six primes.[65]The branch of number theory studying such questions is calledadditive number theory.[66]
Another type of problem concernsprime gaps, the differences between consecutive primes.
The existence of arbitrarily large prime gaps can be seen by noting that the sequencen!+2,n!+3,…,n!+n{\displaystyle n!+2,n!+3,\dots ,n!+n}consists ofn−1{\displaystyle n-1}composite numbers, for any natural numbern.{\displaystyle n.}[67]However, large prime gaps occur much earlier than this argument shows.[68]For example, the first prime gap of length 8 is between the primes 89 and 97,[69]much smaller than8!=40320.{\displaystyle 8!=40320.}It is conjectured that there are infinitely manytwin primes, pairs of primes with difference 2; this is thetwin prime conjecture.Polignac's conjecturestates more generally that for every positive integerk,{\displaystyle k,}there are infinitely many pairs of consecutive primes that differ by2k.{\displaystyle 2k.}[70]Andrica's conjecture,[70]Brocard's conjecture,[71]Legendre's conjecture,[72]andOppermann's conjecture[71]all suggest that the largest gaps between primes from 1 ton{\displaystyle n}should be at most approximatelyn,{\displaystyle {\sqrt {n}},}a result that is known to follow from the Riemann hypothesis, while the much strongerCramér conjecturesets the largest gap size atO((logn)2){\displaystyle O((\log n)^{2})}.[70]Prime gaps can be generalized toprimek{\displaystyle k}-tuples, patterns in the differences among more than two prime numbers. Their infinitude and density are the subject of thefirst Hardy–Littlewood conjecture, which can be motivated by theheuristicthat the prime numbers behave similarly to a random sequence of numbers with density given by the prime number theorem.[73]
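The claim that large gaps appear much earlier than the factorial construction suggests can be checked directly; the sketch below (helper names are my own) scans consecutive primes for the first gap of a given length:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def first_gap_of_length(gap: int) -> tuple[int, int]:
    """Return the first pair of consecutive primes whose difference equals gap."""
    previous = 2
    n = 3
    while True:
        if is_prime(n):
            if n - previous == gap:
                return previous, n
            previous = n
        n += 2   # after 2, only odd numbers can be prime


print(first_gap_of_length(8))   # (89, 97), far below 8! = 40320
```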
Analytic number theorystudies number theory through the lens ofcontinuous functions,limits,infinite series, and the related mathematics of the infinite andinfinitesimal.
This area of study began withLeonhard Eulerand his first major result, the solution to theBasel problem.
The problem asked for the value of the infinite sum1+14+19+116+…,{\displaystyle 1+{\tfrac {1}{4}}+{\tfrac {1}{9}}+{\tfrac {1}{16}}+\dots ,}which today can be recognized as the valueζ(2){\displaystyle \zeta (2)}of theRiemann zeta function. This function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, theRiemann hypothesis. Euler showed thatζ(2)=π2/6{\displaystyle \zeta (2)=\pi ^{2}/6}.[74]The reciprocal of this number,6/π2{\displaystyle 6/\pi ^{2}}, is the limiting probability that two random numbers selected uniformly from a large range arerelatively prime(have no factors in common).[75]
The distribution of primes in the large, such as the question of how many primes are smaller than a given, large threshold, is described by theprime number theorem, but no efficientformula for then{\displaystyle n}-th primeis known.Dirichlet's theorem on arithmetic progressions, in its basic form, asserts that linear polynomials{\displaystyle p(n)=a+bn}
with relatively prime integersa{\displaystyle a}andb{\displaystyle b}take infinitely many prime values. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the sameb{\displaystyle b}have approximately the same proportions of primes.
Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often.
Euler's proof that there are infinitely many primesconsiders the sums ofreciprocalsof primes,{\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{5}}+{\tfrac {1}{7}}+\cdots +{\tfrac {1}{p}}.}
Euler showed that, for any arbitraryreal numberx{\displaystyle x}, there exists a primep{\displaystyle p}for which this sum is greater thanx{\displaystyle x}.[76]This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past everyx{\displaystyle x}.
The growth rate of this sum is described more precisely byMertens' second theorem.[77]For comparison, the sum{\displaystyle {\tfrac {1}{1^{2}}}+{\tfrac {1}{2^{2}}}+{\tfrac {1}{3^{2}}}+\cdots +{\tfrac {1}{n^{2}}}}
does not grow to infinity asn{\displaystyle n}goes to infinity (see theBasel problem). In this sense, prime numbers occur more often than squares of natural numbers,
although both sets are infinite.[78]Brun's theoremstates that the sum of the reciprocals oftwin primes,{\displaystyle \left({\tfrac {1}{3}}+{\tfrac {1}{5}}\right)+\left({\tfrac {1}{5}}+{\tfrac {1}{7}}\right)+\left({\tfrac {1}{11}}+{\tfrac {1}{13}}\right)+\cdots ,}
is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve thetwin prime conjecture, that there exist infinitely many twin primes.[78]
Theprime-counting functionπ(n){\displaystyle \pi (n)}is defined as the number of primes not greater thann{\displaystyle n}.[79]For example,π(11)=5{\displaystyle \pi (11)=5}, since there are five primes less than or equal to 11. Methods such as theMeissel–Lehmer algorithmcan compute exact values ofπ(n){\displaystyle \pi (n)}faster than it would be possible to list each prime up ton{\displaystyle n}.[80]Theprime number theoremstates thatπ(n){\displaystyle \pi (n)}is asymptotic ton/logn{\displaystyle n/\log n}, which is denoted as{\displaystyle \pi (n)\sim {\frac {n}{\log n}},}
and means that the ratio ofπ(n){\displaystyle \pi (n)}to the right-hand fractionapproaches1 asn{\displaystyle n}grows to infinity.[81]This implies that the likelihood that a randomly chosen number less thann{\displaystyle n}is prime is (approximately) inversely proportional to the number of digits inn{\displaystyle n}.[82]It also implies that then{\displaystyle n}th prime number is proportional tonlogn{\displaystyle n\log n}[83]and therefore that the average size of a prime gap is proportional tologn{\displaystyle \log n}.[68]A more accurate estimate forπ(n){\displaystyle \pi (n)}is given by theoffset logarithmic integral[81]{\displaystyle \pi (n)\sim \operatorname {Li} (n)=\int _{2}^{n}{\frac {dt}{\log t}}.}
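The quality of the two estimates can be compared numerically; the following sketch (a simple sieve plus a midpoint-rule integral, both written here purely for illustration) does so for n = 10^6:

```python
from math import log


def primes_up_to(limit: int) -> list[int]:
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p::p] = [False] * len(range(p * p, limit + 1, p))
    return [n for n, is_p in enumerate(flags) if is_p]


def offset_li(x: float, steps: int = 200_000) -> float:
    """Approximate the integral from 2 to x of dt/log(t) by the midpoint rule."""
    h = (x - 2) / steps
    return h * sum(1 / log(2 + (i + 0.5) * h) for i in range(steps))


n = 10**6
print(len(primes_up_to(n)))   # 78498, the exact value of pi(10^6)
print(n / log(n))             # about 72382, the prime number theorem estimate
print(offset_li(n))           # about 78627, much closer to the true count
```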
Anarithmetic progressionis a finite or infinite sequence of numbers such that consecutive numbers in the sequence all have the same difference.[84]This difference is called themodulusof the progression.[85]For example, 3, 12, 21, 30, 39, …
is an infinite arithmetic progression with modulus 9. In an arithmetic progression, all the numbers have the same remainder when divided by the modulus; in this example, the remainder is 3. Because both the modulus 9 and the remainder 3 are multiples of 3, so is every element in the sequence. Therefore, this progression contains only one prime number, 3 itself. In general, the infinite progression{\displaystyle a,\ a+q,\ a+2q,\ a+3q,\ \dots }
can have more than one prime only when its remaindera{\displaystyle a}and modulusq{\displaystyle q}are relatively prime. If they are relatively prime,Dirichlet's theorem on arithmetic progressionsasserts that the progression contains infinitely many primes.[86]
TheGreen–Tao theoremshows that there are arbitrarily long finite arithmetic progressions consisting only of primes.[36][87]
Euler noted that the function{\displaystyle n^{2}-n+41}
yields prime numbers for1≤n≤40{\displaystyle 1\leq n\leq 40}, although composite numbers appear among its later values.[88][89]The search for an explanation for this phenomenon led to the deepalgebraic number theoryofHeegner numbersand theclass number problem.[90]TheHardy–Littlewood conjecture Fpredicts the density of primes among the values ofquadratic polynomialswith integercoefficientsin terms of the logarithmic integral and the polynomial coefficients. No quadratic polynomial has been proven to take infinitely many prime values.[91]
TheUlam spiral[92]arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others.[91]
One of the most famous unsolved questions in mathematics, dating from 1859, and one of theMillennium Prize Problems, is theRiemann hypothesis, which asks where thezerosof theRiemann zeta functionζ(s){\displaystyle \zeta (s)}are located.
This function is ananalytic functionon thecomplex numbers. For complex numberss{\displaystyle s}with real part greater than one it equals both aninfinite sumover all integers, and aninfinite productover the prime numbers,{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}.}
This equality between a sum and a product, discovered by Euler, is called anEuler product.[93]The Euler product can be derived from the fundamental theorem of arithmetic, and shows the close connection between the zeta function and the prime numbers.[94]It leads to another proof that there are infinitely many primes: if there were only finitely many,
then the sum-product equality would also be valid ats=1{\displaystyle s=1}, but the sum would diverge (it is theharmonic series1+12+13+…{\displaystyle 1+{\tfrac {1}{2}}+{\tfrac {1}{3}}+\dots }) while the product would be finite, a contradiction.[95]
The Riemann hypothesis states that thezerosof the zeta-function are all either negative even numbers, or complex numbers withreal partequal to 1/2.[96]The original proof of theprime number theoremwas based on a weak form of this hypothesis, that there are no zeros with real part equal to 1,[97][98]although other more elementary proofs have been found.[99]The prime-counting function can be expressed byRiemann's explicit formulaas a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term.[100]In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and theasymptotic distributionof primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root ofx{\displaystyle x}for intervals near a numberx{\displaystyle x}).[98]
Modular arithmetic modifies usual arithmetic by only using the numbers{0,1,2,…,n−1}{\displaystyle \{0,1,2,\dots ,n-1\}}, for a natural numbern{\displaystyle n}called the modulus.
Any other natural number can be mapped into this system by replacing it by its remainder after division byn{\displaystyle n}.[101]Modular sums, differences and products are calculated by performing the same replacement by the remainder on the result of the usual sum, difference, or product of integers.[102]Equality of integers corresponds tocongruencein modular arithmetic:x{\displaystyle x}andy{\displaystyle y}are congruent (writtenx≡y{\displaystyle x\equiv y}modn{\displaystyle n}) when they have the same remainder after division byn{\displaystyle n}.[103]However, in this system of numbers,divisionby all nonzero numbers is possible if and only if the modulus is prime. For instance, with the prime number 7 as modulus, division by 3 is possible:2/3≡3mod7{\displaystyle 2/3\equiv 3{\bmod {7}}}, becauseclearing denominatorsby multiplying both sides by 3 gives the valid formula2≡9mod7{\displaystyle 2\equiv 9{\bmod {7}}}. However, with the composite modulus 6, division by 3 is impossible. There is no valid solution to2/3≡xmod6{\displaystyle 2/3\equiv x{\bmod {6}}}: clearing denominators by multiplying by 3 causes the left-hand side to become 2 while the right-hand side becomes either 0 or 3. In the terminology ofabstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms afieldor, more specifically, afinite field, while other moduli only give aringbut not a field.[104]
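Python's built-in three-argument pow computes modular inverses, which makes the contrast between a prime and a composite modulus easy to see (a small sketch):

```python
# Division modulo the prime 7 works: the inverse of 3 exists.
inv3 = pow(3, -1, 7)            # 5, because 3 * 5 = 15 ≡ 1 (mod 7)
print((2 * inv3) % 7)           # 3, i.e. 2/3 ≡ 3 (mod 7)

# Division modulo the composite 6 fails: 3 and 6 share the factor 3.
try:
    pow(3, -1, 6)
except ValueError as err:
    print("no inverse modulo 6:", err)
```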
Several theorems about primes can be formulated using modular arithmetic. For instance,Fermat's little theoremstates that ifa≢0{\displaystyle a\not \equiv 0}(modp{\displaystyle p}), thenap−1≡1{\displaystyle a^{p-1}\equiv 1}(modp{\displaystyle p}).[105]Summing this over all choices ofa{\displaystyle a}gives the equation{\displaystyle 1^{p-1}+2^{p-1}+\cdots +(p-1)^{p-1}\equiv -1{\pmod {p}},}
valid wheneverp{\displaystyle p}is prime.Giuga's conjecturesays that this equation is also a sufficient condition forp{\displaystyle p}to be prime.[106]Wilson's theoremsays that an integerp>1{\displaystyle p>1}is prime if and only if thefactorial(p−1)!{\displaystyle (p-1)!}is congruent to−1{\displaystyle -1}modp{\displaystyle p}. For a composite numbern=r⋅s{\displaystyle n=r\cdot s}this cannot hold, since one of its factors divides bothnand(n−1)!{\displaystyle (n-1)!}, and so(n−1)!≡−1(modn){\displaystyle (n-1)!\equiv -1{\pmod {n}}}is impossible.[107]
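Both congruences are straightforward to test for small numbers; the sketch below (illustrative helper names) checks Wilson's criterion and the summed form of Fermat's little theorem:

```python
from math import factorial


def wilson_is_prime(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n). Exact but impractically slow."""
    return n > 1 and factorial(n - 1) % n == n - 1


def fermat_sum_is_minus_one(p: int) -> bool:
    """Check 1^(p-1) + 2^(p-1) + ... + (p-1)^(p-1) ≡ -1 (mod p)."""
    return sum(pow(a, p - 1, p) for a in range(1, p)) % p == p - 1


print([n for n in range(2, 30) if wilson_is_prime(n)])                  # the primes below 30
print(all(fermat_sum_is_minus_one(p) for p in (2, 3, 5, 7, 11, 13)))    # True
```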
Thep{\displaystyle p}-adic orderνp(n){\displaystyle \nu _{p}(n)}of an integern{\displaystyle n}is the number of copies ofp{\displaystyle p}in the prime factorization ofn{\displaystyle n}. The same concept can be extended from integers to rational numbers by defining thep{\displaystyle p}-adic order of a fractionm/n{\displaystyle m/n}to beνp(m)−νp(n){\displaystyle \nu _{p}(m)-\nu _{p}(n)}. Thep{\displaystyle p}-adic absolute value|q|p{\displaystyle |q|_{p}}of any rational numberq{\displaystyle q}is then defined as|q|p=p−νp(q){\displaystyle \vert q\vert _{p}=p^{-\nu _{p}(q)}}. Multiplying an integer by itsp{\displaystyle p}-adic absolute value cancels out the factors ofp{\displaystyle p}in its factorization, leaving only the other primes. Just as the distance between two real numbers can be measured by the absolute value of their distance, the distance between two rational numbers can be measured by theirp{\displaystyle p}-adic distance, thep{\displaystyle p}-adic absolute value of their difference. For this definition of distance, two numbers are close together (they have a small distance) when their difference is divisible by a high power ofp{\displaystyle p}. In the same way that the real numbers can be formed from the rational numbers and their distances, by adding extra limiting values to form acomplete field, the rational numbers with thep{\displaystyle p}-adic distance can be extended to a different complete field, thep{\displaystyle p}-adic numbers.[108][109]
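A small sketch of the p-adic order and absolute value on rational numbers (function names are illustrative, not from any standard library):

```python
from fractions import Fraction


def padic_order(p: int, q: Fraction) -> int:
    """nu_p(q): count factors of p in the numerator minus those in the denominator."""
    if q == 0:
        raise ValueError("the p-adic order of 0 is +infinity by convention")
    order, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        order += 1
    while den % p == 0:
        den //= p
        order -= 1
    return order


def padic_abs(p: int, q: Fraction) -> Fraction:
    """|q|_p = p^(-nu_p(q)); numbers divisible by high powers of p are p-adically small."""
    return Fraction(p) ** (-padic_order(p, q))


print(padic_order(2, Fraction(40)), padic_abs(2, Fraction(40)))        # 3, 1/8
print(padic_order(5, Fraction(3, 25)), padic_abs(5, Fraction(3, 25)))  # -2, 25
```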
This picture of an order, absolute value, and complete field derived from them can be generalized toalgebraic number fieldsand theirvaluations(certain mappings from themultiplicative groupof the field to atotally ordered additive group, also called orders),absolute values(certain multiplicative mappings from the field to the real numbers, also callednorms),[108]and places (extensions tocomplete fieldsin which the given field is adense set, also called completions).[110]The extension from the rational numbers to thereal numbers, for instance, is a place in which the distance between numbers is the usualabsolute valueof their difference. The corresponding mapping to an additive group would be thelogarithmof the absolute value, although this does not meet all the requirements of a valuation. According toOstrowski's theorem, up to a natural notion of equivalence, the real numbers andp{\displaystyle p}-adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers.[108]Thelocal–global principleallows certain problems over the rational numbers to be solved by piecing together solutions from each of their places, again underlining the importance of primes to number theory.[111]
Acommutative ringis analgebraic structurewhere addition, subtraction and multiplication are defined. The integers are a ring, and the prime numbers in the integers have been generalized to rings in two different ways,prime elementsandirreducible elements. An elementp{\displaystyle p}of a ringR{\displaystyle R}is called prime if it is nonzero, has nomultiplicative inverse(that is, it is not aunit), and satisfies the following requirement: wheneverp{\displaystyle p}divides the productxy{\displaystyle xy}of two elements ofR{\displaystyle R}, it also divides at least one ofx{\displaystyle x}ory{\displaystyle y}. An element is irreducible if it is neither a unit nor the product of two other non-unit elements. In the ring of integers, the prime and irreducible elements form the same set,
{\displaystyle \{\dots ,-11,-7,-5,-3,-2,2,3,5,7,11,\dots \}.}
In an arbitrary ring, all prime elements are irreducible. The converse does not hold in general, but does hold forunique factorization domains.[112]
The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. An example of such a domain is theGaussian integersZ[i]{\displaystyle \mathbb {Z} [i]}, the ring ofcomplex numbersof the forma+bi{\displaystyle a+bi}wherei{\displaystyle i}denotes theimaginary unitanda{\displaystyle a}andb{\displaystyle b}are arbitrary integers. Its prime elements are known asGaussian primes. Not every number that is prime among the integers remains prime in the Gaussian integers; for instance, the number 2 can be written as a product of the two Gaussian primes1+i{\displaystyle 1+i}and1−i{\displaystyle 1-i}. Rational primes (the prime elements in the integers) congruent to 3 mod 4 are Gaussian primes, but rational primes congruent to 1 mod 4 are not.[113]This is a consequence ofFermat's theorem on sums of two squares,
which states that an odd primep{\displaystyle p}is expressible as the sum of two squares,p=x2+y2{\displaystyle p=x^{2}+y^{2}}, and therefore factorable asp=(x+iy)(x−iy){\displaystyle p=(x+iy)(x-iy)}, exactly whenp{\displaystyle p}is 1 mod 4.[114]
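This dichotomy can be checked numerically. The sketch below is a brute-force search for a representation p = x² + y² (illustrative only, not an efficient algorithm); it finds one for 13 ≡ 1 (mod 4), which splits in the Gaussian integers, but none for 7 ≡ 3 (mod 4), which remains a Gaussian prime.

```python
from math import isqrt

def two_squares(p):
    """Brute-force search for x, y with p = x^2 + y^2; returns None if no representation exists."""
    for x in range(1, isqrt(p) + 1):
        y_squared = p - x * x
        y = isqrt(y_squared)
        if y * y == y_squared:
            return x, y
    return None

# 13 = 2^2 + 3^2, so 13 = (2 + 3i)(2 - 3i) in the Gaussian integers.
assert two_squares(13) == (2, 3)
# 7 is not a sum of two squares.
assert two_squares(7) is None
```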
Not every ring is a unique factorization domain. For instance, in the ring of numbersa+b−5{\displaystyle a+b{\sqrt {-5}}}(for integersa{\displaystyle a}andb{\displaystyle b}) the number21{\displaystyle 21}has two factorizations21=3⋅7=(1+2−5)(1−2−5){\displaystyle 21=3\cdot 7=(1+2{\sqrt {-5}})(1-2{\sqrt {-5}})}, where neither of the four factors can be reduced any further, so it does not have a unique factorization. In order to extend unique factorization to a larger class of rings, the notion of a number can be replaced with that of anideal, a subset of the elements of a ring that contains all sums of pairs of its elements, and all products of its elements with ring elements.Prime ideals, which generalize prime elements in the sense that theprincipal idealgenerated by a prime element is a prime ideal, are an important tool and object of study incommutative algebra,algebraic number theoryandalgebraic geometry. The prime ideals of the ring of integers are the ideals(0){\displaystyle (0)},(2){\displaystyle (2)},(3){\displaystyle (3)},(5){\displaystyle (5)},(7){\displaystyle (7)},(11){\displaystyle (11)}, ... The fundamental theorem of arithmetic generalizes to theLasker–Noether theorem, which expresses every ideal in aNoetheriancommutative ringas an intersection ofprimary ideals, which are the appropriate generalizations ofprime powers.[115]
Thespectrum of a ringis a geometric space whose points are the prime ideals of the ring.[116]Arithmetic geometryalso benefits from this notion, and many concepts exist in both geometry and number theory. For example, factorization orramificationof prime ideals when lifted to anextension field, a basic problem of algebraic number theory, bears some resemblance toramification in geometry. These concepts can even assist with number-theoretic questions solely concerned with integers. For example, prime ideals in thering of integersofquadratic number fieldscan be used in provingquadratic reciprocity, a statement that concerns the existence of square roots modulo integer prime numbers.[117]Early attempts to proveFermat's Last Theoremled toKummer's introduction ofregular primes, integer prime numbers connected with the failure of unique factorization in thecyclotomic integers.[118]The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed byChebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case.[119]
In the theory offinite groupstheSylow theoremsimply that, if a power of a prime numberpn{\displaystyle p^{n}}divides theorder of a group, then the group has a subgroup of orderpn{\displaystyle p^{n}}. ByLagrange's theorem, any group of prime order is acyclic group,
and byBurnside's theoremany group whose order is divisible by only two primes issolvable.[120]
For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics[b]other than the use of prime numbered gear teeth to distribute wear evenly.[121]In particular, number theorists such asBritishmathematicianG. H. Hardyprided themselves on doing work that had absolutely no military significance.[122]
This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation ofpublic-key cryptographyalgorithms.[33]These applications have led to significant study ofalgorithmsfor computing with prime numbers, and in particular ofprimality testing, methods for determining whether a given number is prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. Most primality tests only tell whether their argument is prime or not. Routines that also provide a prime factor of composite arguments (or all of its prime factors) are calledfactorizationalgorithms. Prime numbers are also used in computing forchecksums,hash tables, andpseudorandom number generators.
The most basic method of checking the primality of a given integern{\displaystyle n}is calledtrial division. This method dividesn{\displaystyle n}by each integer from 2 up to thesquare rootofn{\displaystyle n}. Any such integer dividingn{\displaystyle n}evenly establishesn{\displaystyle n}as composite; otherwise it is prime. Integers larger than the square root do not need to be checked because, whenevern=a⋅b{\displaystyle n=a\cdot b}, one of the two factorsa{\displaystyle a}andb{\displaystyle b}is less than or equal to thesquare rootofn{\displaystyle n}. Another optimization is to check only primes as factors in this range.[123]For instance, to check whether 37 is prime, this method divides it by the primes in the range from 2 to37{\displaystyle {\sqrt {37}}}, which are 2, 3, and 5. Each division produces a nonzero remainder, so 37 is indeed prime.
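A minimal Python sketch of trial division as just described; for simplicity it tries every integer up to the square root rather than only the primes in that range.

```python
from math import isqrt

def is_prime_trial_division(n):
    """Trial division: test candidate divisors from 2 up to the square root of n."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:          # d divides n evenly, so n is composite
            return False
    return True

assert is_prime_trial_division(37)      # no divisor in 2..6, so 37 is prime
assert not is_prime_trial_division(39)  # 39 = 3 * 13
```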
Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performsgrows exponentiallyas a function of the number of digits of these integers.[124]However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter.[125]
Before computers,mathematical tableslisting all of the primes or prime factorizations up to a given limit were commonly printed.[126]The oldest known method for generating a list of primes is called the sieve of Eratosthenes.[127]The animation shows an optimized variant of this method.[128]Another more asymptotically efficient sieving method for the same problem is thesieve of Atkin.[129]In advanced mathematics,sieve theoryapplies similar methods to other problems.[130]
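A short Python version of the basic (unoptimized) sieve of Eratosthenes, crossing off the multiples of each prime starting from its square:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to limit by crossing off multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were already crossed off by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

assert sieve_of_eratosthenes(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```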
Some of the fastest modern tests for whether an arbitrary given numbern{\displaystyle n}is prime areprobabilistic(orMonte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer.[131]For instance theSolovay–Strassen primality teston a given numberp{\displaystyle p}chooses a numbera{\displaystyle a}randomly from 2 throughp−2{\displaystyle p-2}and usesmodular exponentiationto check whethera(p−1)/2±1{\displaystyle a^{(p-1)/2}\pm 1}is divisible byp{\displaystyle p}.[c]If so, it answers yes and otherwise it answers no. Ifp{\displaystyle p}really is prime, it will always answer yes, but ifp{\displaystyle p}is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2.[132]If this test is repeatedn{\displaystyle n}times on the same number, the probability that a composite number could pass the test every time is at most1/2n{\displaystyle 1/2^{n}}. Because this decreases exponentially with the number of tests, it provides high confidence (although not certainty) that a number that passes the repeated test is prime. On the other hand, if the test ever fails, then the number is certainly composite.[133]A composite number that passes such a test is called apseudoprime.[132]
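The following Python sketch follows the Solovay–Strassen recipe described above, using the Jacobi symbol and modular exponentiation; the number of rounds and the helper names are illustrative choices rather than part of the algorithm's definition.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard reciprocity-based recursion."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(p, rounds=20):
    """Probabilistic test: a composite number passes each round with probability at most 1/2."""
    if p < 2:
        return False
    if p in (2, 3):
        return True
    if p % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, p - 1)                      # random a in 2..p-2
        x = jacobi(a, p)
        if x == 0 or pow(a, (p - 1) // 2, p) != x % p:
            return False                                    # certainly composite
    return True                                             # probably prime

print(solovay_strassen(101), solovay_strassen(91))  # expect True False (with high probability)
```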
In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. For instance, this is true of trial division. The algorithms with guaranteed-correct output include bothdeterministic(non-random) algorithms, such as theAKS primality test,[134]and randomizedLas Vegas algorithmswhere the random choices made by the algorithm do not affect its final answer, such as some variations ofelliptic curve primality proving.[131]When the elliptic curve method concludes that a number is prime, it provides aprimality certificatethat can be verified quickly.[135]The elliptic curve primality test is the fastest in practice of the guaranteed-correct primality tests, but its runtime analysis is based onheuristic argumentsrather than rigorous proofs. TheAKS primality testhas mathematically proven time complexity, but is slower than elliptic curve primality proving in practice.[136]These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; when doing this, a faster probabilistic test can quickly eliminate most composite numbers before a guaranteed-correct algorithm is used to verify that the remaining numbers are prime.[d]
The following table lists some of these tests. Their running time is given in terms ofn{\displaystyle n}, the number to be tested and, for probabilistic algorithms, the numberk{\displaystyle k}of tests performed. Moreover,ε{\displaystyle \varepsilon }is an arbitrarily small positive number, and log is thelogarithmto an unspecified base. Thebig O notationmeans that each time bound should be multiplied by aconstant factorto convert it from dimensionless units to units of time; this factor depends on implementation details such as the type of computer used to run the algorithm, but not on the input parametersn{\displaystyle n}andk{\displaystyle k}.
In addition to the aforementioned tests that apply to any natural number, some numbers of a special form can be tested for primality more quickly. For example, theLucas–Lehmer primality testcan determine whether aMersenne number(one less than apower of two) is prime, deterministically, in the same time as a single iteration of the Miller–Rabin test.[141]This is why, since 1992 and as of October 2024, thelargestknownprimehas always been a Mersenne prime.[142]It is conjectured that there are infinitely many Mersenne primes.[143]
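A compact Python rendering of the Lucas–Lehmer test as usually stated (s₀ = 4, each later term is the square of the previous one minus 2, reduced modulo the Mersenne number); it is a sketch for small exponents, not an optimized implementation.

```python
def lucas_lehmer(p):
    """Decide whether the Mersenne number 2^p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^13 - 1 = 8191 is a Mersenne prime; 2^11 - 1 = 2047 = 23 * 89 is not.
assert lucas_lehmer(13) and not lucas_lehmer(11)
```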
The following table gives the largest known primes of various types. Some of these primes have been found usingdistributed computing. In 2009, theGreat Internet Mersenne Prime Searchproject was awarded a US$100,000 prize for first discovering a prime with at least 10 million digits.[144]TheElectronic Frontier Foundationalso offers $150,000 and $250,000 for primes with at least 100 million digits and 1 billion digits, respectively.[145]
Given a composite integern{\displaystyle n}, the task of providing one (or all) prime factors is referred to asfactorizationofn{\displaystyle n}. It is significantly more difficult than primality testing,[152]and although many factorization algorithms are known, they are slower than the fastest primality testing methods. Trial division andPollard's rho algorithmcan be used to find very small factors ofn{\displaystyle n},[125]andelliptic curve factorizationcan be effective whenn{\displaystyle n}has factors of moderate size.[153]Methods suitable for arbitrarily large numbers that do not depend on the size of their factors include thequadratic sieveandgeneral number field sieve. As with primality testing, there are also factorization algorithms that require their input to have a special form, including thespecial number field sieve.[154]As of December 2019, thelargest number known to have been factoredby a general-purpose algorithm isRSA-240, which has 240 decimal digits (795 bits) and is the product of two large primes.[155]
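As an illustration of a method that finds small factors, here is a minimal Python sketch of Pollard's rho algorithm; the retry loop and the polynomial x² + c are the textbook defaults rather than tuned parameters.

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of an odd composite n (retries with a new constant on failure)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # "tortoise": one step of x -> x^2 + c
            y = (y * y + c) % n
            y = (y * y + c) % n            # "hare": two steps per iteration
            d = math.gcd(abs(x - y), n)
        if d != n:                          # d == n means this choice of c failed; try again
            return d

factor = pollard_rho(8051)                  # 8051 = 83 * 97
assert factor in (83, 97)
```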
Shor's algorithmcan factor any integer in a polynomial number of steps on aquantum computer.[156]However, current technology can only run this algorithm for very small numbers. As of October 2012, the largest number that has been factored by a quantum computer running Shor's algorithm is 21.[157]
Severalpublic-key cryptographyalgorithms, such asRSAand theDiffie–Hellman key exchange, are based on large prime numbers (2048-bitprimes are common).[158]RSA relies on the assumption that it is much easier (that is, more efficient) to perform the multiplication of two (large) numbersx{\displaystyle x}andy{\displaystyle y}than to calculatex{\displaystyle x}andy{\displaystyle y}(assumedcoprime) if only the productxy{\displaystyle xy}is known.[33]The Diffie–Hellman key exchange relies on the fact that there are efficient algorithms formodular exponentiation(computingabmodc{\displaystyle a^{b}{\bmod {c}}}), while the reverse operation (thediscrete logarithm) is thought to be a hard problem.[159]
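A toy Python sketch of the Diffie–Hellman idea using fast modular exponentiation; the small prime 2³¹ − 1 and base 7 are illustrative stand-ins for the 2048-bit parameters and carefully chosen generators used in practice.

```python
import secrets

p = 2**31 - 1   # a small prime, for illustration only
g = 7           # illustrative base; real deployments use vetted generators

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice publishes g^a mod p
B = pow(g, b, p)                     # Bob publishes g^b mod p

# Both sides derive the same shared secret g^(ab) mod p; an eavesdropper who sees only
# A and B would have to solve a discrete logarithm problem to recover it.
assert pow(B, a, p) == pow(A, b, p)
```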
Prime numbers are frequently used forhash tables. For instance the original method of Carter and Wegman foruniversal hashingwas based on computinghash functionsby choosing randomlinear functionsmodulo large prime numbers. Carter and Wegman generalized this method tok{\displaystyle k}-independent hashingby using higher-degree polynomials, again modulo large primes.[160]As well as in the hash function, prime numbers are used for the hash table size inquadratic probingbased hash tables to ensure that the probe sequence covers the whole table.[161]
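A minimal sketch of that idea, assuming only the description above: hash functions drawn at random from the family key ↦ ((a·key + b) mod P) mod m for a large prime P. The prime 2⁶¹ − 1 below is one convenient choice; any prime larger than the key universe would do.

```python
import random

P = 2**61 - 1   # a large (Mersenne) prime bounding the key universe

def make_hash(num_buckets):
    """Draw one random linear hash function modulo the prime P."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda key: ((a * key + b) % P) % num_buckets

h = make_hash(1024)
print(h(42), h(43))   # two nearby keys usually land in unrelated buckets
```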
Somechecksummethods are based on the mathematics of prime numbers. For instance the checksums used inInternational Standard Book Numbersare defined by taking the rest of the number modulo 11, a prime number. Because 11 is prime this method can detect both single-digit errors and transpositions of adjacent digits.[162]Another checksum method,Adler-32, uses arithmetic modulo 65521, the largest prime number less than216{\displaystyle 2^{16}}.[163]Prime numbers are also used inpseudorandom number generatorsincludinglinear congruential generators[164]and theMersenne Twister.[165]
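The ISBN-10 rule can be written in a few lines of Python. The valid example ISBN below, together with a single-digit error and an adjacent transposition, illustrates the error-detection properties that follow from 11 being prime.

```python
def isbn10_valid(isbn):
    """ISBN-10 check: the weighted digit sum must be divisible by the prime 11."""
    digits = [10 if ch in "xX" else int(ch) for ch in isbn if ch not in "- "]
    if len(digits) != 10:
        return False
    weighted_sum = sum(w * d for w, d in zip(range(10, 0, -1), digits))
    return weighted_sum % 11 == 0

assert isbn10_valid("0-306-40615-2")
assert not isbn10_valid("0-306-40615-3")   # single-digit error is caught
assert not isbn10_valid("0-306-46015-2")   # adjacent transposition is caught
```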
Prime numbers are of central importance to number theory but also have many applications to other areas within mathematics, includingabstract algebraand elementary geometry. For example, it is possible to place prime numbers of points in a two-dimensional grid so thatno three are in a line, or so that every triangle formed by three of the pointshas large area.[166]Another example isEisenstein's criterion, a test for whether apolynomial is irreduciblebased on divisibility of its coefficients by a prime number and its square.[167]
The concept of a prime number is so important that it has been generalized in different ways in various branches of mathematics. Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. For example, theprime fieldof a given field is its smallest subfield that contains both 0 and 1. It is either the field of rational numbers or afinite fieldwith a prime number of elements, whence the name.[168]Often a second, additional meaning is intended by using the word prime, namely that any object can be, essentially uniquely, decomposed into its prime components. For example, inknot theory, aprime knotis aknotthat is indecomposable in the sense that it cannot be written as theconnected sumof two nontrivial knots. Any knot can be uniquely expressed as a connected sum of prime knots.[169]Theprime decomposition of 3-manifoldsis another example of this type.[170]
Beyond mathematics and computing, prime numbers have potential connections toquantum mechanics, and have been used metaphorically in the arts and literature. They have also been used inevolutionary biologyto explain the life cycles ofcicadas.
Fermat primesare primes of the form
{\displaystyle F_{k}=2^{2^{k}}+1,}
withk{\displaystyle k}anonnegative integer.[171]They are named afterPierre de Fermat, who conjectured that all such numbers are prime. The first five of these numbers – 3, 5, 17, 257, and 65,537 – are prime,[172]butF5{\displaystyle F_{5}}is composite and so are all other Fermat numbers that have been verified as of 2017.[173]Aregularn{\displaystyle n}-gonisconstructible using straightedge and compassif and only if the odd prime factors ofn{\displaystyle n}(if any) are distinct Fermat primes.[172]Likewise, a regularn{\displaystyle n}-gon may be constructed using straightedge, compass, and anangle trisectorif and only if the prime factors ofn{\displaystyle n}are any number of copies of 2 or 3 together with a (possibly empty) set of distinctPierpont primes, primes of the form2a3b+1{\displaystyle 2^{a}3^{b}+1}.[174]
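A quick Python check of the first six Fermat numbers, using plain trial division (adequate at this size); it confirms that F₀ through F₄ are prime while F₅ is divisible by 641.

```python
from math import isqrt

def is_prime(n):
    """Naive trial division, adequate for numbers of this size."""
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

for k in range(6):
    F = 2 ** (2 ** k) + 1
    print(k, F, is_prime(F))
# k = 0..4 give the primes 3, 5, 17, 257, 65537; F_5 = 4294967297 = 641 * 6700417 is composite.
```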
It is possible to partition any convex polygon inton{\displaystyle n}smaller convex polygons of equal area and equal perimeter, whenn{\displaystyle n}is apower of a prime number, but this is not known for other values ofn{\displaystyle n}.[175]
Beginning with the work ofHugh MontgomeryandFreeman Dysonin the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels ofquantum systems.[176][177]Prime numbers are also significant inquantum information science, thanks to mathematical structures such asmutually unbiased basesandsymmetric informationally complete positive-operator-valued measures.[178][179]
The evolutionary strategy used bycicadasof the genusMagicicadamakes use of prime numbers.[180]These insects spend most of their lives asgrubsunderground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles.[181][182]In contrast, the multi-year periods between flowering inbambooplants are hypothesized to besmooth numbers, having only small prime numbers in their factorizations.[183]
Prime numbers have influenced many artists and writers. The FrenchcomposerOlivier Messiaenused prime numbers to create ametrical music through "natural phenomena". In works such asLa Nativité du Seigneur(1935) andQuatre études de rythme(1949–1950), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen this way of composing was "inspired by the movements of nature, movements of free and unequal durations".[184]
In his science fiction novelContact, scientistCarl Sagansuggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with American astronomerFrank Drakein 1975.[185]In the novelThe Curious Incident of the Dog in the Night-TimebyMark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen withAsperger syndrome.[186]Prime numbers are used as a metaphor for loneliness and isolation in thePaolo GiordanonovelThe Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers.[187]
|
https://en.wikipedia.org/wiki/Prime_number
|
Acomposite numberis apositive integerthat can be formed by multiplying two smaller positive integers. Accordingly it is a positive integer that has at least onedivisorother than 1 and itself.[1][2]Every positive integer is composite,prime, or theunit1, so the composite numbers are exactly the numbers that are not prime and not a unit.[3][4]For example, the integer 14 is a composite number because it is the product of the two smaller integers 2 × 7; the integers 2 and 3, by contrast, are not composite because each can only be divided by one and itself.
The composite numbers up to 150 are:
Every composite number can be written as the product of two or more (not necessarily distinct) primes.[2]For example, the composite number299can be written as 13 × 23, and the composite number360can be written as 23× 32× 5; furthermore, this representation is uniqueup tothe order of the factors. This fact is called thefundamental theorem of arithmetic.[5][6][7][8]
There are several knownprimality teststhat can determine whether a number is prime or composite which do not necessarily reveal thefactorizationof a composite input.
One way to classify composite numbers is by counting the number of prime factors. A composite number with two prime factors is asemiprimeor 2-almost prime (the factors need not be distinct, hence squares of primes are included). A composite number with three distinct prime factors is asphenic number. In some applications, it is necessary to differentiate between composite numbers with an odd number of distinct prime factors and those with an even number of distinct prime factors. For the latter
{\displaystyle \mu (n)=(-1)^{2x}=1}
(where μ is theMöbius functionandxis half the total of prime factors), while for the former
{\displaystyle \mu (n)=-1.}
However, for prime numbers, the function also returns −1 andμ(1)=1{\displaystyle \mu (1)=1}. For a numbernwith one or more repeated prime factors,
{\displaystyle \mu (n)=0.}
Ifallthe prime factors of a number are repeated it is called apowerful number(Allperfect powersare powerful numbers). Ifnoneof its prime factors are repeated, it is calledsquarefree. (All prime numbers and 1 are squarefree.)
For example,72= 23× 32, all the prime factors are repeated, so 72 is a powerful number.42= 2 × 3 × 7, none of the prime factors are repeated, so 42 is squarefree.
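These classifications are straightforward to compute from a prime factorization. The following Python sketch, with ad hoc helper names, reproduces the 42 and 72 examples above.

```python
def prime_factorization(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def mobius(n):
    """Möbius function: 0 if n has a repeated prime factor, else (-1)^(number of prime factors)."""
    factors = prime_factorization(n)
    if any(e > 1 for e in factors.values()):
        return 0
    return (-1) ** len(factors)

def is_squarefree(n):
    return all(e == 1 for e in prime_factorization(n).values())

def is_powerful(n):
    return all(e >= 2 for e in prime_factorization(n).values())

assert mobius(42) == -1 and is_squarefree(42)   # 42 = 2 * 3 * 7, three distinct primes
assert mobius(72) == 0 and is_powerful(72)      # 72 = 2^3 * 3^2, all prime factors repeated
```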
Another way to classify composite numbers is by counting the number of divisors. All composite numbers have at least three divisors. In the case of squares of primes, those divisors are{1,p,p2}{\displaystyle \{1,p,p^{2}\}}. A numbernthat has more divisors than anyx<nis ahighly composite number(though the first two such numbers are 1 and 2).
Composite numbers have also been called "rectangular numbers", but that name can also refer to thepronic numbers, numbers that are the product of two consecutive integers.
Yet another way to classify composite numbers is to determine whether all prime factors are either all below or all above some fixed (prime) number. Such numbers are calledsmooth numbersandrough numbers, respectively.
|
https://en.wikipedia.org/wiki/Composite_number
|
Moore's lawis the observation that the number oftransistorsin anintegrated circuit(IC) doubles about every two years. Moore's law is anobservationandprojectionof a historical trend. Rather than alaw of physics, it is anempirical relationship. It is anexperience-curve law, a type of law quantifying efficiency gains from experience in production.
The observation is named afterGordon Moore, the co-founder ofFairchild SemiconductorandInteland former CEO of the latter, who in 1965 noted that the number of components per integrated circuit had beendoubling every year,[a]and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, acompound annual growth rate(CAGR) of 41%. Moore's empirical evidence did not directly imply that the historical trend would continue; nevertheless, his prediction has held since 1975 and has since become known as alaw.
Moore's prediction has been used in thesemiconductor industryto guide long-term planning and to set targets forresearch and development(R&D). Advancements indigital electronics, such as the reduction inquality-adjusted pricesofmicroprocessors, the increase inmemory capacity(RAMandflash), the improvement ofsensors, and even the number and size ofpixelsindigital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change,productivity, and economic growth.
Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022,NvidiaCEOJensen Huangconsidered Moore's law dead,[2]while Intel CEOPat Gelsingerwas of the opposite view.[3]
In 1959,Douglas Engelbartstudied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude".[4][5][6]Engelbart presented his findings at the 1960International Solid-State Circuits Conference, where Moore was present in the audience.[7]
In 1965, Gordon Moore, who at the time was working as the director of research and development atFairchild Semiconductor, was asked to contribute to the thirty-fifth-anniversary issue ofElectronicsmagazine with a prediction on the future of the semiconductor components industry over the next ten years.[8]His response was a brief article entitled "Cramming more components onto integrated circuits".[1][9][b]Within his editorial, he speculated that by 1975 it would be possible to contain as many as65000components on a single quarter-square-inch (~1.6 cm2) semiconductor.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.[1]
Moore posited a log–linear relationship between device complexity (higher circuit density at reduced cost) and time.[12][13]In a 2015 interview, Moore noted of the 1965 article: "... I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years."[14]One historian of the law citesStigler's law of eponymy, to introduce the fact that the regular doubling of components was known to many working in the field.[13]
In 1974,Robert H. DennardatIBMrecognized the rapid MOSFET scaling technology and formulated what became known asDennard scaling, which describes that as MOS transistors get smaller, theirpower densitystays constant such that the power use remains in proportion with area.[15][16]Evidence from the semiconductor industry shows that this inverse relationship between power density andareal densitybroke down in the mid-2000s.[17]
At the 1975IEEE International Electron Devices Meeting, Moore revised his forecast rate,[18][19]predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years.[19][20][21]He outlined several contributing factors for this exponential behavior:[12][13]
Shortly after 1975,CaltechprofessorCarver Meadpopularized the termMoore's law.[22][23]Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation ofMurphy's law. Everything gets better and better."[24]The observation was even seen as aself-fulfilling prophecy.[25][26]
The doubling period is often misquoted as 18 months because of a separate prediction by Moore's colleague, Intel executiveDavid House.[27]In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months,[28]with no increase in power consumption.[29]Mathematically, Moore's law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements.[30]As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months. Also due to Dennard scaling, this increased performance would not be accompanied by increased power, i.e., the energy-efficiency ofsilicon-based computer chips roughly doubles every 18 months. Dennard scaling ended in the 2000s.[17]Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, for technologies such as vacuum tubes.
Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law.[17]Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".[31][32][33]The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling.[25][34][17]Nevertheless, as of 2019, leading semiconductor manufacturersTSMCandSamsung Electronicsclaimed to keep pace with Moore's law[35][36][37][38][39][40]with10,7, and5 nmnodes in mass production.[35][36][41][42][43]
As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. The cost of the tools, principallyextreme ultraviolet lithography(EUVL), used to manufacture chips doubles every 4 years.[44]Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[45]This led to the formulation ofMoore's second law, also called Rock's law (named afterArthur Rock), which is that thecapital costof asemiconductor fabrication plantalso increases exponentially over time.[46][47]
Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit andsemiconductor device fabricationtechnology, allowing transistor counts to grow by more than seven orders of magnitude in less than five decades.
Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips.[71]
One of the key technical challenges of engineering futurenanoscaletransistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form ofmulti-gate MOSFETs, with theFinFETbeing the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, thegate-all-aroundMOSFET (GAAFET) structure has even better gate control.
Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.[17]Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two."[103]Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the22 nmfeature width around 2012, and continuing at14 nm.[104]Pat Gelsinger, Intel CEO, stated at the end of 2023 that "we're no longer in the golden era of Moore's Law, it's much, much harder now, so we're probably doubling effectively closer to every three years now, so we've definitely seen a slowing."[105]
The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals and limited options for channel material. Other approaches are being investigated, which do not rely on physical scaling. These include the spin state of electronspintronics,tunnel junctions, and advanced confinement of channel materials via nano-wire geometry.[106]Spin-based logic and memory options are being developed actively in labs.[107][108]
The vast majority of current transistors on ICs are composed principally ofdopedsilicon and its alloys. As silicon is fabricated into single nanometer transistors,short-channel effectsadversely changes desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes in the fabrication of small nanometer transistors.
One proposed material isindium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of intrinsic characteristics ofIII–V compound semiconductors, quantum well andtunneleffect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.
Biological computingresearch shows that biological material has superior information density and energy efficiency compared to silicon-based computing.[116]
Various forms ofgrapheneare being studied forgraphene electronics, e.g.graphene nanoribbontransistorshave shown promise since their appearance in publications in 2008. (Bulk graphene has aband gapof zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width of 10 nm has a desirable bandgap energy of 0.4 eV.[117][118]) More research will need to be performed, however, on sub-50 nm graphene layers, as its resistivity value increases and thus electron mobility decreases.[117]
In April 2005,Gordon Moorestated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors eventually would reach the limits of miniaturization atatomiclevels:
In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.[119]
In 2016 theInternational Technology Roadmap for Semiconductors, after using Moore's Law to drive the industry since 1998, produced its final roadmap. It no longer centered its research and development plan on Moore's law. Instead, it outlined what might be called the More than Moore strategy in which the needs of applications drive chip development, rather than a focus on semiconductor scaling. Application drivers range from smartphones to AI to data centers.[120]
IEEE began a road-mapping initiative in 2016, Rebooting Computing, named theInternational Roadmap for Devices and Systems(IRDS).[121]
Some forecasters, including Gordon Moore,[122]predict that Moore's law will end by around 2025.[123][120][124]Although Moore's Law will reach a physical limit, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[125][126]NvidiaCEOJensen Huangdeclared Moore's law dead in 2022;[2]several days later, Intel CEO Pat Gelsinger countered with the opposite claim.[3]
Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries.[127]The primary driving force of economic growth is the growth ofproductivity,[128]which Moore's law factors into. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities".[129]The reverse could and did occur around the late-1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation."[130]Moore's law describes a driving force of technological and social change, productivity, and economic growth.[131][132][128]
An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[133][134][135]which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013.[136]As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."[137]
The primary negative implication of Moore's law is thatobsolescencepushes society up against theLimits to Growth. As technologies continue to rapidly improve, they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations.[138]
Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor",[129]at minimum cost.
Transistors per integrated circuit– The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The graph at the top of this article shows this trend holds true today. As of 2025, the commercially available processor possessing one of the highest numbers of transistors is aGB202 graphics processorwith more than 92.2 billion transistors.[139]
Density at minimum cost per transistor– This is the formulation given in Moore's 1965 paper.[1]It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[140]
As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances inphotolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
Dennard scaling– This posits that power usage would decrease in proportion to area (both voltage and current being proportional to length) of transistors. Combined with Moore's law,performance per wattwould grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling transistor dimensions would be scaled by 30% (0.7×) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7×) and therefore increase operating frequency by about 40% (1.4×). Finally, to keep electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4× frequency) by 50%.[c]Therefore, in every technology generation transistor density would double, circuit becomes 40% faster, while power consumption (with twice the number of transistors) stays the same.[141]Dennard scaling ended in 2005–2010, due to leakage currents.[17]
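The Dennard-scaling arithmetic in the item above can be spelled out numerically. The Python sketch below simply multiplies out the 0.7× linear-shrink factors; it is an illustration of the stated percentages, not a physical model.

```python
scale = 0.7                          # linear dimensions shrink by 30% per generation

area = scale ** 2                    # ~0.49: area halves, so transistor density doubles
delay = scale                        # ~0.7: 30% lower delay ...
frequency = 1 / delay                # ~1.43: ... i.e. about 40% higher clock frequency
voltage = scale                      # supply voltage scaled to keep the electric field constant
capacitance = scale                  # gate capacitance scales with linear dimension
energy = voltage ** 2 * capacitance  # ~0.34: roughly a 65% reduction in switching energy
power = energy * frequency           # ~0.49: per-transistor power halves at the higher clock
chip_power = power / area            # same die area holds ~2x transistors -> total power ~constant

print(round(area, 2), round(frequency, 2), round(energy, 2), round(chip_power, 2))
# 0.49 1.43 0.34 1.0 -- matching the 50% / 40% / 65% / "constant power" figures above
```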
The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued after that, it has not yielded proportional dividends in improved performance.[15][142]The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges, and also causes the chip to heat up, which creates a threat ofthermal runawayand therefore, further increases energy costs.[15][142][17]
The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[143][144]In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[145]The rate of performance improvement for single-core microprocessors has slowed significantly.[146]Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just seven percent per year in 2011–2018.[146]
Quality adjusted price of IT equipment– Thepriceof information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[147][148]The pace accelerated, however, to 23% per year in 1995–1999 triggered by faster IT innovation,[130]and later, slowed to 2% per year in 2010–2013.[147][149]
Whilequality-adjustedmicroprocessor price improvement continues,[150]the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[151][152]Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[153]
The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[151][154][155]Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count, "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on asemi-log plotapproximates a straight line. I hesitate to review its origins and by doing so restrict its definition."[129]
Hard disk drive areal density– A similar prediction (sometimes calledKryder's law) was made in 2005 forhard disk driveareal density.[156]The prediction was later viewed as over-optimistic. Several decades of rapid progress in areal density slowed around 2010, from 30 to 100% per year to 10–15% per year, because of noise related tosmaller grain sizeof the disk media, thermal stability, and writability using available magnetic fields.[157][158]
Fiber-optic capacity– The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law. This observation is known asKeck's law, in honor ofDonald Keck.[159]
Network capacity– According to Gerald Butters,[160][161]the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[162]a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[163]Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability ofwavelength-division multiplexing(sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking anddense wavelength-division multiplexing(DWDM) is rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in thedot-com bubble.Nielsen's Lawsays that the bandwidth available to users increases by 50% annually.[164]
Pixels per dollar– Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price,LCDandLEDscreens, and resolution.[165][166][167][168]
The great Moore's law compensator (TGMLC), also known asWirth's law– generally is referred to assoftware bloatand is the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article inInfoWorld, Randall C. Kennedy,[169]formerly of Intel, introduces this term using successive versions ofMicrosoft Officebetween the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.
Library expansion– was calculated in 1945 byFremont Riderto double in capacity every 16 years, if sufficient space were made available.[170]He advocated replacing bulky, decaying printed works with miniaturizedmicroformanalog photographs, which could be duplicated on-demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called theInformation Age.
Carlson curve– is a term coined byThe Economist[171]to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson.[172]Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[173]Carlson Curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.
Eroom's law– is a pharmaceutical drug development observation that was deliberately written as Moore's Law spelled backward in order to contrast it with the exponential advancements of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.
Experience curve effectssays that each doubling of the cumulative production of virtually any product or service is accompanied by an approximate constant percentage reduction in the unit cost. The acknowledged first documented qualitative description of this dates from 1885.[174][175]A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.[176]
Edholm's law– Phil Edholm observed that thebandwidthoftelecommunication networks(including the Internet) is doubling every 18 months.[177]The bandwidths of onlinecommunication networkshas risen frombits per secondtoterabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enabled Moore's law, as telecommunications networks are built from MOSFETs.[178]
Haitz's lawpredicts that the brightness of LEDs increases as their manufacturing cost goes down.
Swanson's lawis the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
|
https://en.wikipedia.org/wiki/Moore%27s_law
|
Intel Corporation[note 1]is an Americanmultinational corporationandtechnology companyheadquartered inSanta Clara, California, andincorporated in Delaware.[3]Intel designs, manufactures, and sells computer components such asCPUsand related products for business and consumer markets. It is considered one of the world'slargest semiconductor chip manufacturersby revenue[4][5]and ranked in theFortune500list of thelargest United States corporations by revenuefor nearly a decade, from 2007 to 2016fiscal years, until it was removed from the ranking in 2018.[6]In 2020, it was reinstated and ranked 45th, being the7th-largest technology company in the ranking. It was the first company listed onNasdaq.
Intel suppliesmicroprocessorsfor most manufacturers of computer systems, and is one of the developers of thex86series of instruction sets found in mostpersonal computers(PCs). It also manufactureschipsets,network interface controllers,flash memory,graphics processing units(GPUs),field-programmable gate arrays(FPGAs), and other devices related to communications and computing. Intel has a strong presence in the high-performance general-purpose andgaming PCmarket with itsIntel Coreline of CPUs, whose high-end models are among the fastest consumer CPUs, as well as itsIntel Arcseries of GPUs. The Open Source Technology Center at Intel hostsPowerTOPandLatencyTOP, and supports other open source projects such asWayland,Mesa,Threading Building Blocks(TBB), andXen.[7]
Intel was founded on July 18, 1968, by semiconductor pioneersGordon Moore(ofMoore's law) andRobert Noyce, along with investorArthur Rock, and is associated with the executive leadership and vision ofAndrew Grove.[8]The company was a key component of the rise ofSilicon Valleyas ahigh-techcenter,[9]as well as being an early developer ofSRAMandDRAMmemory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip—theIntel 4004—in 1971, it was not until the success of the PC in the early 1990s that this became its primary business.
During the 1990s, the partnership betweenMicrosoft Windowsand Intel, known as "Wintel", became instrumental in shaping the PC landscape[10][11]and solidified Intel's position on the market. As a result, Intel invested heavily in new microprocessor designs in the mid to late 1990s, fostering the rapid growth of thecomputer industry. During this period, it became thedominantsupplier of PC microprocessors, with amarket shareof 90%,[12]and was known for aggressive andanti-competitive tacticsin defense of its market position, particularly againstAMD, as well as a struggle withMicrosoftfor control over the direction of the PC industry.[13][14]
Since the 2000s and especially since the late 2010s, Intel has faced increasing competition, which has led to a reduction in Intel's dominance and market share in the PC market.[15]Nevertheless, with a 68.4% market share as of 2023, Intel still leads the x86 market by a wide margin.[16]In addition, Intel's ability to design andmanufactureits own chips is considered a rarity in thesemiconductor industry,[9]as most chip designersdo not have their own production facilitiesand insteadrely on contract manufacturers(e.g.TSMC,FoxconnandSamsung), as AMD andNvidiado.[17]
Intel was incorporated inMountain View, California, on July 18, 1968, byGordon E. Moore(known for "Moore's law"), achemist;Robert Noyce, a physicist and co-inventor of theintegrated circuit; andArthur Rock, an investor andventure capitalist.[18][19][20]Moore and Noyce had leftFairchild Semiconductor, where they were part of the "traitorous eight" who founded it. There were originally 500,000 shares outstanding of which Noyce bought 245,000 shares, Moore 245,000 shares, and Rock 10,000 shares; all at $1 per share. Rock offered $2,500,000 of convertible debentures to a limited group of private investors (equivalent to $21 million in 2022), convertible at $5 per share.[21][22]Just 2 years later, Intel became apublic companyvia aninitial public offering(IPO), raising $6.8 million ($23.50 per share). Intel was the very first company to be listed on the then-newly establishedNational Association of Securities DealersAutomated Quotation System (NASDAQ).[23]Intel's third employee wasAndy Grove,[note 2]achemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s.
In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce",[24]nearhomophonefor "more noise" – an ill-suited name for anelectronicscompany, sincenoise in electronicsis usually undesirable and typically associated with badinterference. Instead, they founded the company asNM Electronicson July 18, 1968, but by the end of the month had changed the name toIntel, which stood forIntegratedElectronics.[note 3]Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights for the name.[23][30]
At its founding, Intel was distinguished by its ability to makelogic circuitsusingsemiconductor devices. The founders' goal was thesemiconductor memorymarket, widely predicted to replacemagnetic-core memory. Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101Schottky TTLbipolar64-bitstatic random-access memory(SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory inTsukuba, Japan.[31][32]In the same year, Intel also produced the 3301 Schottky bipolar 1024-bitread-only memory(ROM)[33]and the first commercialmetal–oxide–semiconductor field-effect transistor(MOSFET)silicon gateSRAM chip, the 256-bit 1101.[23][34][35]
While the 1101 was a significant advance, its complex staticcell structuremade it too slow and costly formainframememories. The three-transistorcell implemented in the first commercially availabledynamic random-access memory(DRAM), the1103released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications.[36][37]Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range ofproducts, still dominated by various memory devices.
Intel created the first commercially available microprocessor, theIntel 4004, in 1971.[23]The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could become the basis of what was first known as a "mini computer" and then a "personal computer".[38]Intel also created one of the firstmicrocomputersin 1973.[34][39]
Intel opened its first international manufacturing facility in 1972, inMalaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants inSingaporeandJerusalemin the early 1980s, and manufacturing and development centers inChina,India, andCostaRica in the 1990s.[40]By the early 1980s, its business was dominated by DRAM chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of theIBMpersonal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success.
By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growingpersonal computer market, Intel embarked on 10 years of unprecedented growth as the primary and most profitable hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over his position as CEO toAndy Grovein 1987. By launching its Intel Insidemarketing campaignin 1991, Intel was able to associatebrand loyaltywith consumer selection, so that by the end of the 1990s, its line ofPentiumprocessors had become a household name.
As Intel exited other markets, the company depended so much on the 80386 and its successors that a marketing employee said that "there's only one product, and Andy Grove's the product manager".[41] After 2000, growth in demand for high-end microprocessors slowed. Competitors, most notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range. Intel's dominant position in its core market was greatly reduced,[42] mostly due to the controversial NetBurst microarchitecture. In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful.
Intel had also been embroiled in litigation for several years. U.S. law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA).[43] During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competitor chips to the 80386 CPU.[44] The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits.[44] Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition.
In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility). On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be using Intel's x86 processors for its Macintosh computers, switching from the PowerPC architecture developed by the AIM alliance.[45] This was seen as a win for Intel,[46] although an analyst called the move "risky" and "foolish", as Intel's current offerings at the time were considered to be behind those of AMD and IBM.[47] In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim; the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field.[48][49][50] In 2008, Intel had another "tick" when it introduced the Penryn microarchitecture, fabricated using the 45 nm process node. Later that year, Intel released a processor with the Nehalem architecture to positive reception.[51] On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition completed on November 9, 2006.[52] In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy.[53]
In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion.[54] The building is now the 10 nm-certified Fab 42 and is connected to the other Fabs (12, 22, 32) on Ocotillo Campus via an enclosed bridge known as the Link.[55][56][57][58] The company produces three-quarters of its products in the United States, although three-quarters of its revenue come from overseas.[59]
The Alliance for Affordable Internet (A4AI) was launched in October 2013, and Intel is part of the coalition of public and private organizations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable to broaden access in the developing world, where only 31% of people are online. Intel will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income.[60]
In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market. In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group[61] that would be responsible for the company's smartphone, tablet, and wireless efforts. Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with Arm.[62] As a 32-nanometer processor, Medfield is designed to be energy-efficient, one of Arm's chips' core features.[63]
Intel's partnership with Google was announced at the Intel Developers Forum (IDF) 2011 in San Francisco. In January 2012, Google announced Android 2.3, supporting Intel's Atom microprocessor.[64][65][66]In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome.[67]
In 2014, Intel cut thousands of employees in response to "evolving market trends",[68] and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets. In April 2016, Intel cancelled the SoFIA platform and the Broxton Atom SoC for smartphones,[69][70][71][72] effectively leaving the smartphone market.[73][74]
Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using a 14 nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future.[75] This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple.[76]
As of July 2013, five companies were using Intel's fabs via theIntel Custom Foundrydivision:Achronix,Tabula,Netronome,Microsemi, andPanasonic– most arefield-programmable gate array(FPGA) makers, but Netronome designs network processors. Only Achronix began shipping chips made by Intel using the 22 nm Tri-Gate process.[77][78]Several other customers also exist but were not announced at the time.[79]
The foundry business was closed in 2018 due to Intel's issues with its manufacturing.[80][81]
Intel continued itstick-tockmodel of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on theSkylakemicroarchitecture. This model was deprecated in 2016, with the release of the 7th-generation Core family (codenamedKaby Lake), ushering in theprocess–architecture–optimization model. As Intel struggled to shrink their process node from14 nmto10 nm, processor development slowed down and the company continued to use the Skylake microarchitecture until 2020, albeit with optimizations.[82]
While Intel originally planned to introduce 10 nm products in 2016, it later became apparent that there were manufacturing issues with the node.[83]The first microprocessor under that node,Cannon Lake(marketed as 8th-generation Core), was released in small quantities in 2018.[84][85]The company first delayed the mass production of their 10 nm products to 2017.[86][87]They later delayed mass production to 2018,[88]and then to 2019. Despite rumors of the process being cancelled,[89]Intel finally introduced mass-produced 10 nm 10th-generation Intel Core mobile processors (codenamed "Ice Lake") in September 2019.[90]
Intel later acknowledged that their strategy to shrink to 10 nm was too aggressive.[82][91]While other foundries used up to four steps in 10 nm or 7 nm processes, the company's 10 nm process required up to five or six multi-pattern steps.[92]In addition, Intel's 10 nm process is denser than its counterpart processes from other foundries.[93][94]Since Intel's microarchitecture and process node development were coupled, processor development stagnated.[82]
In early January 2018, it was reported that all Intel processors made since 1995[95][96] (besides Intel Itanium and pre-2013 Intel Atom) had been subject to two security flaws dubbed Meltdown and Spectre.[97][98]
Due to Intel's issues with its 10 nm process node and the company's slow processor development,[82]the company now found itself in a market with intense competition.[99]The company's main competitor, AMD, introduced theZenmicroarchitecture and a newchiplet-based design to critical acclaim. Since its introduction, AMD, once unable to compete with Intel in the high-end CPU market, has undergone a resurgence,[100]and Intel's dominance and market share have considerably decreased.[101]In addition, Apple began to transition away from the x86 architecture and Intel processors to their ownApple siliconfor their Macintosh computers in 2020. The transition is expected to affect Intel minimally; however, it might prompt other PC manufacturers to reevaluate their reliance on Intel and the x86 architecture.[102][103]
On March 23, 2021, CEO Pat Gelsinger laid out new plans for the company.[104]These include a new strategy, called IDM 2.0, that includes investments in manufacturing facilities, use of both internal and external foundries, and a new foundry business called Intel Foundry Services (IFS), a standalone business unit.[105][106]Unlike Intel Custom Foundry, IFS will offer a combination of packaging and process technology, and Intel's IP portfolio including x86 cores. Other plans for the company include a partnership withIBMand a new event for developers and engineers, called "Intel ON".[81]Gelsinger also confirmed that Intel's 7 nm process is on track, and that the first products using their 7 nm process (also known as Intel 4) arePonte VecchioandMeteor Lake.[81]
In January 2022, Intel reportedly selected New Albany, Ohio, near Columbus, Ohio, as the site for a major new manufacturing facility.[107] The facility will cost at least $20 billion.[108] The company expects the facility to begin producing chips by 2025.[109] The same year, Intel also chose Magdeburg, Germany, as a site for two new chip mega factories for €17 billion (topping Tesla's investment in Brandenburg). The start of the construction was initially planned for 2023, but this has been postponed to late 2024, while the production start is scheduled for 2027.[110] Including subcontractors, this would create 10,000 new jobs.[111]
In August 2022, Intel signed a $30 billion partnership with Brookfield Asset Management to fund its recent factory expansions. As part of the deal, Intel would have a controlling stake by funding 51% of the cost of building new chip-making facilities in Chandler. Brookfield owns the remaining 49% stake, allowing the companies to split the revenue from those facilities.[112][113]
On January 31, 2023, as part of $3 billion in cost reductions, Intel announced pay cuts affecting employees above midlevel, ranging from 5% upwards. It also suspended bonuses and merit pay increases, reducing retirement plan matching. These cost reductions followed layoffs announced in the fall of 2022.[114]
In October 2023, Intel confirmed it would be the first commercial user of high-NA EUV lithography tools, as part of its plan to regain process leadership from TSMC.[115]
In December 2023, Intel unveiled Gaudi 3, an artificial intelligence (AI) chip for generative AI software, slated to launch in 2024[needs update] and compete with rival chips from Nvidia and AMD.[116] On June 4, 2024, Intel announced AI chips for data centers, the Xeon 6 processor, aiming for better performance and power efficiency compared to its predecessor. Intel's Gaudi 2 and Gaudi 3 AI accelerators were revealed to be more cost-effective than competitors' offerings. Additionally, Intel disclosed architecture details for its Lunar Lake processors for AI PCs,[117] which were released on September 24, 2024.
In August 2024, after posting $1.6 billion in losses for Q2, Intel announced that it intends to cut 15,000 jobs to save $10 billion in 2025.[118]In order to reach this goal, the company will offer early retirement and voluntary departure options.[119]
On November 1, 2024, it was announced that Intel would drop out of the Dow Jones Industrial Average on November 8 prior to the stock market open, with Nvidia taking its place.[120][121]
In December 2024, Intel's CEO Pat Gelsinger was ousted amid ongoing struggles to revitalize the company, which has seen a significant decline in stock value during his tenure. Gelsinger's resignation, effective December 1, followed a board meeting where directors expressed dissatisfaction with the slow progress of his ambitious turnaround strategy. Despite efforts to enhance Intel's manufacturing capabilities and compete with rivals like AMD and Nvidia, the company faced mounting challenges, including a $16.6 billion loss and a 60% drop in share prices since Gelsinger's appointment in 2021. After his departure, Intel appointed David Zinsner and Michelle Johnston Holthaus interim co-CEOs while searching for a permanent successor. Gelsinger's exit underscored the turmoil at Intel as it grappled with its identity crisis and sought to regain its semiconductor industry position.[122][123][124]
On March 13, 2025, Intel announced the appointment of Lip-Bu Tan as its new CEO, effective March 18, after four months of having interim co-CEOs.[125]
Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor, and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Note: Intel is usually given credit with Texas Instruments for the almost-simultaneous invention of the microprocessor.)
In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the bookOnly the Paranoid Survive. A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular8086microprocessor.
Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories,[which?]and ceased licensing the chip designs to competitors such asAMD.[126]When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries.
Despite the ultimate importance of the microprocessor, the 4004 and its successors the 8008 and the 8080 were never major revenue contributors at Intel.
In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious, the processor never met its performance objectives, and it failed in the marketplace. (Intel eventually extended the x86 architecture to 32 bits instead.)[127][128]
When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for the chip, nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.
IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM and establishing a competitive market for PC-compatible systems and setting up Intel as a key component supplier.
During this periodAndrew Grovedramatically redirected the company, closing much of itsDRAMbusiness and directing resources to themicroprocessorbusiness. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers. To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract.
Grove made the decision not to license the 386 design to other manufacturers, instead producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages, but could no longer manufacture new Intel CPU designs. (Instead, AMD started to develop and manufacture its own competing x86 designs.)
As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s.
Intel introduced the486microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. The P5 project was earlier known as "Operation Bicycle", referring to the cycles of the processor through two parallel execution pipelines. The P5 was introduced in 1993 as the IntelPentium, substituting a registered trademark name for the former part number. (Numbers, such as 486, cannot be legally registered as trademarks in the United States.) The P6 followed in 1995 as thePentium Proand improved into thePentium IIin 1997. New architectures were developed alternately inSanta Clara, CaliforniaandHillsboro, Oregon.
The Santa Clara design team embarked in 1993 on a successor to thex86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program withHewlett-Packardengineers, though Intel soon took over primary design responsibility. The resulting implementation of theIA-6464-bit architecture was theItanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively withx86-64, which was AMD's 64-bit extension of the 32-bit x86 architecture (Intel uses the nameIntel 64, previouslyEM64T). In 2017, Intel announced that theItanium 9700 series(Kittson) would be the last Itanium chips produced.[129][130]
The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4.
During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with onlyNutraSweetand a few others making attempts to do so.[131]One of the key architects of the marketing team was the head of the microprocessor division,David House.[132]He coined the slogan "Intel Inside".[133]This campaign established Intel, which had been a component supplier little-known outside the PC industry, as a household name.
The second campaign, run by Intel's Systems Group beginning in the early 1990s, showcased the manufacturing of PC motherboards, the main board of a personal computer and the one into which the processor (CPU) and memory (RAM) chips are plugged.[134] The Systems Group campaign was less well known than the Intel Inside campaign.
Shortly after, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up.[135]At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time.[citation needed]
During the 1990s,Intel Architecture Labs(IAL) was responsible for many of the hardware innovations for the PC, including thePCIBus, thePCI Express(PCIe) bus, andUniversal Serial Bus(USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video,[citation needed]but later its efforts were largely overshadowed by competition fromMicrosoft. The competition between Intel and Microsoft was revealed in testimony by then IAL Vice-presidentSteven McGeadyat theMicrosoft antitrust trial(United States v. Microsoft Corp.).
In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect. The error could compound in subsequent calculations. Intel corrected the error in a future chip revision, and under public pressure it issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models) on customer request.
The bug was discovered independently in October 1994 by Thomas Nicely, Professor of Mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet.[136] Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the operating system's calculator and observe an incorrect result. Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum". During Thanksgiving 1994, The New York Times ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue.[137] Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him (but had decided not to inform customers).[138]
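The most widely circulated demonstration of the flaw was the division 4,195,835 / 3,145,727, whose operands happen to exercise the affected entries of the divider's lookup table: a correct processor returns approximately 1.333820449, while an affected Pentium returned approximately 1.333739069, an error in roughly the fifth significant digit. A minimal sketch of that check, written here in Python purely for illustration (the original test was typically run in a spreadsheet or desktop calculator), might look like this:

    # Classic FDIV check (illustrative). On correct hardware the residue is
    # essentially zero; on an affected Pentium it came out to about 256.
    x, y = 4195835.0, 3145727.0
    quotient = x / y              # expected ~1.333820449; flawed chips gave ~1.333739069
    residue = x - quotient * y    # expected ~0; flawed chips gave ~256
    print(f"x / y   = {quotient:.12f}")
    print(f"residue = {residue}")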
The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression.[139]
The Intel Core line originated from the original Core brand, with the release of the32-bitYonahCPU, Intel's firstdual-coremobile (low-power) processor. Derived from thePentium M, the processor family used an enhanced version of the P6 microarchitecture. Its successor, theCore 2family, was released on July 27, 2006. This was based on the IntelCore microarchitecture, and was a 64-bit design.[140]Instead of focusing on higher clock rates, the Core microarchitecture emphasized power efficiency and a return to lower clock speeds.[141]It also provided more efficient decoding stages, execution units,caches, andbuses, reducing thepower consumptionof Core 2-branded CPUs while increasing their processing capacity.
In November 2008, Intel released the 1st-generation Core processors based on the Nehalem microarchitecture. Intel also introduced a new naming scheme, with the three variants now named Core i3, i5, and i7 (as well as i9 from the 7th generation onwards). Unlike the previous naming scheme, these names no longer correspond to specific technical features. Nehalem was succeeded by the Westmere microarchitecture in 2010, which brought a die shrink to 32 nm and included Intel HD Graphics.
In 2011, Intel released the Sandy Bridge-based 2nd-generation Core processor family. This generation featured an 11% performance increase over Nehalem.[142] It was succeeded by the Ivy Bridge-based 3rd-generation Core, introduced at the 2012 Intel Developer Forum.[143] Ivy Bridge featured a die shrink to 22 nm, and supported both DDR3 memory and DDR3L chips.
Intel continued itstick-tockmodel of a microarchitecture change followed by a die shrink until the 6th-generation Core family based on theSkylakemicroarchitecture. This model was deprecated in 2016, with the release of the 7th-generation Core family based onKaby Lake, ushering in theprocess–architecture–optimization model.[144]From 2016 until 2021, Intel later released more optimizations on the Skylake microarchitecture withKaby Lake R,Amber Lake,Whiskey Lake,Coffee Lake,Coffee Lake R, andComet Lake.[145][146][147][148]Intel struggled to shrink their process node from14 nmto10 nm, with the first microarchitecture under that node,Cannon Lake(marketed as 8th-generation Core), only being released in small quantities in 2018.[84][85]
In 2019, Intel released the 10th generation of Core processors, codenamed "Amber Lake", "Comet Lake", and "Ice Lake". Ice Lake, based on the Sunny Cove microarchitecture, was produced on the 10 nm process and was limited to low-power mobile processors. Both Amber Lake and Comet Lake were based on a refined 14 nm node, with the latter being used for desktop and high-performance mobile products and the former used for low-power mobile products.
In September 2020, 11th-generation Core mobile processors, codenamedTiger Lake, were launched.[149]Tiger Lake is based on the Willow Cove microarchitecture and a refined 10 nm node.[150]Intel later released 11th-generation Core desktop processors (codenamed "Rocket Lake"), fabricated using Intel's 14 nm process and based on theCypress Covemicroarchitecture,[151]on March 30, 2021.[152]It replaced Comet Lake desktop processors. All 11th-generation Core processors feature new integrated graphics based on theIntel Xemicroarchitecture.[153]
Both desktop and mobile products were unified under a single process node with the release of 12th-generation Intel Core processors (codenamed "Alder Lake") in late 2021.[154][155] This generation is fabricated using Intel's 10 nm process, called Intel 7, for both desktop and mobile processors, and is based on a hybrid architecture utilizing high-performance Golden Cove cores and high-efficiency Gracemont (Atom) cores.[154]
On June 6, 2005,Steve Jobs, then CEO ofApple, announced that Apple would be transitioning theMacintoshfrom its long favoredPowerPCarchitecture to the Intel x86 architecture because the future PowerPC road map was unable to satisfy Apple's needs.[45][156]This was seen as a win for Intel,[46]although an analyst called the move "risky" and "foolish", as Intel's current offerings at the time were considered to be behind those of AMD and IBM.[47]The first Mac computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to IntelXeonprocessors from November 2006 and was offered in a configuration similar to Apple's Mac Pro.[157]
Despite Apple's use of Intel products, relations between the two companies were strained at times.[158]Rumors of Apple switching from Intel processors to their own designs began circulating as early as 2011.[159]On June 22, 2020, during Apple's annualWWDC,Tim Cook, Apple's CEO, announced that it would betransitioning the company's entire Mac linefrom Intel CPUs tocustom Apple-designed processorsbased on the Arm architecture over the course of the next two years. In the short term, this transition was estimated to have minimal effects on Intel, as Apple only accounted for 2% to 4% of its revenue. However, at the time it was believed that Apple's shift to its own chips might prompt other PC manufacturers to reassess their reliance on Intel and the x86 architecture.[102][103]By November 2020, Apple unveiled theM1, its processor custom-designed for the Mac.[160][161][162][163]
In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities.[164] As with their CPUs, Intel develops SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash,[165] mSATA,[166] PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name.[167]
In 2021, SK Hynix acquired most of Intel's NAND memory business[168] for $7 billion, with a remaining transaction worth $2 billion expected in 2025.[169] Intel also discontinued its consumer Optane products in 2021.[170] In July 2022, Intel disclosed in its Q2 earnings report that it would cease future product development within its Optane business, which in turn effectively discontinued the development of 3D XPoint as a whole.[171]
The Intel Scientific Computers division was founded in 1984 byJustin Rattner, to design and produceparallel computersbased on Intel microprocessors connected inhypercube internetwork topology.[172]In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of theiWarparchitecture was also subsumed.[173]The division designed severalsupercomputersystems, including theIntel iPSC/1,iPSC/2,iPSC/860,ParagonandASCI Red. In November 2014, Intel stated that it was planning to useoptical fibersto improve networking within supercomputers.[174]
On November 19, 2015, Intel, alongside Arm, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium to promote interests and development in fog computing.[175] Intel's Chief Strategist for the IoT Strategy and Technology Office, Jeff Fedders, became the consortium's first president.[176]
Intel is one of the biggest stakeholders in the self-driving car industry, having joined the race in mid-2017[177] after joining forces with Mobileye.[178] The company is also one of the first in the sector to research consumer acceptance, after an AAA report quoted a 78% nonacceptance rate of the technology in the U.S.[179]
Initial discussion topics included the safety levels of autonomous driving technology, the thought of abandoning control to a machine, and the psychological comfort of passengers in such situations. The commuters surveyed also stated that they did not want to see everything the car was doing, referring primarily to the steering wheel turning itself with no one in the driver's seat. Intel also learned that voice control is vital, and that the interface between human and machine eases the discomfort and restores some sense of control.[180] However, Intel included only 10 people in this study, which limits its credibility.[179] In a video posted on YouTube,[181] Intel accepted this fact and called for further testing.
Intel formed a new business unit called the Programmable Solutions Group (PSG) on completion of its Altera acquisition.[182] Intel has since sold Stratix, Arria, and Cyclone FPGAs. In 2019, Intel released Agilex FPGAs: chips aimed at data centers, 5G applications, and other uses.[183]
In October 2023, Intel announced it would be spinning off PSG into a separate company at the start of 2024, while maintaining majority ownership.[184]
By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power.[citation needed] Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble",[185] consumer systems ran effectively on increasingly low-cost systems after 2000.
Intel's strategy had been to release better-performing processors at short intervals, as seen with the appearance of the Pentium II in May 1997, the Pentium III in February 1999, and the Pentium 4 in the fall of 2000. Once consumers no longer saw each new generation as essential, the strategy became ineffective,[186] leaving an opportunity for rapid gains by competitors, notably AMD. This, in turn, lowered the profitability[citation needed] of the processor line and ended an era of unprecedented dominance of PC hardware by Intel.[citation needed]
Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time[when?] it controlled over 85% of the market for 32-bit x86 microprocessors), combined with Intel's own hardball legal tactics (such as its infamous 338 patent suit versus PC manufacturers),[187] made it an attractive target for litigation, culminating in 2009 in Intel agreeing to pay AMD $1.25 billion and grant it a perpetual patent cross-license, as well as in several antitrust judgments in Europe, Korea, and Japan.[188]
A case ofindustrial espionagearose in 1995 that involved both Intel and AMD.Bill Gaede, anArgentineformerly employed both at AMD and at Intel'sArizonaplant, was arrested for attempting in 1993 to sell thei486andP5Pentium designs to AMD and to certain foreign powers.[189]Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996.[190][191]
In 2023, Dell accounted for about 19% of Intel's total revenues, Lenovo accounted for 11% of total revenues, and HP Inc. accounted for 10% of total revenues.[194] As of May 2024, the U.S. Department of Defense is another large customer for Intel.[195][196][197][198] In September 2024, Intel reportedly qualified for as much as $3.5 billion in federal grants to make semiconductors for the Defense Department.[199]
According toIDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011.[200][201]
Intel's market share decreased significantly in the enthusiast market as of 2019,[202] and they have faced delays for their 10 nm products. According to former Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node.[82]
In the 1980s, Intel was among the world's top ten sellers ofsemiconductors(10th in 1987[203]). Along withMicrosoft Windows, it was part of the "Wintel" personal computer domination in the 1990s and early 2000s. In 1992, Intel became thebiggest semiconductor chip makerby revenue and held the position until 2018 whenSamsung Electronicssurpassed it, but Intel returned to its former position the year after.[204][205]Other major semiconductor companies includeTSMC,GlobalFoundries,Texas Instruments,ASML,STMicroelectronics,United Microelectronics Corporation(UMC),Micron,SK Hynix,Kioxia, andSMIC.
Intel's competitors in PC chipsets included AMD, VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon,[needs update] Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory included Spansion, Samsung Electronics, Qimonda, Kioxia, STMicroelectronics, Micron, SK Hynix, and IBM.
The only major competitor in the x86 processor market is AMD, with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time.[206] However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover.[207]
Some smaller competitors, such as VIA Technologies, produce low-power x86 processors for small form factor computers and portable equipment. However, the advent of such mobile computing devices, in particular smartphones, has led to a decline in PC sales.[208] Since over 95% of the world's smartphones currently use processor cores designed by Arm, using the Arm instruction set, Arm has become a major competitor for Intel's processor market. Arm is also attempting to enter the PC and server markets, with Ampere and IBM each designing CPUs for servers and supercomputers.[209] The only other major competitor in processor instruction sets is RISC-V, an open source CPU instruction set. The major Chinese phone and telecommunications manufacturer Huawei has released chips based on the RISC-V instruction set due to US sanctions against China.[210]
Intel has been involved in several disputes regarding the violation ofantitrust laws, which are noted below.
Intel reported total CO2e emissions (direct + indirect) for the twelve months ending December 31, 2020, at 2,882 Kt (+94/+3.4% y-o-y).[211] Intel plans to reduce carbon emissions 10% by 2030 from a 2020 base year[212] and achieve net-zero carbon emissions by 2040.[213]
Intel has self-reported that it has wafer fabrication plants in the United States, Ireland, and Israel. It has also self-reported assembly and testing sites, mostly in China, Costa Rica, Malaysia, and Vietnam, with one site in the United States.[220][221]
The key trends for Intel are (as of the financial year ending in late December):[222]
Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who was responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as president and CEO, and Barrett replaced Grove as Chairman of the Board. Grove stepped down as chairman but was retained as a special adviser. In May 2009, Barrett stepped down as chairman of the board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and Chief Administrative Officer (2007) at Intel, succeeded Shaw as executive chairman.[223]
In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age. During a six-month transition period, Intel's board of directors commenced a search process for the next CEO, in which it considered both internal managers and external candidates such asSanjay Jhaand Patrick Gelsinger.[224]Financial results revealed that, under Otellini, Intel's revenue increased by 55.8% (US$34.2 to 53.3 billion), while its net income increased by 46.7% (US$7.5 billion to 11 billion).[225]
On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO,[226] a selection that became effective on May 16, 2013, at the company's annual meeting. The board reportedly concluded that an insider could step into the role and have an impact more quickly, without needing to learn Intel's processes, and selected Krzanich on that basis.[227] Intel's software head Renée James was selected as president of the company, a role that is second to the CEO position.[228]
As of May 2013, Intel's board of directors consisted of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.[229]
On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO, following the disclosure of a past relationship he had had with an employee. Bob Swan was named interim CEO as the board began a search for a permanent CEO.
On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the Board as the seventh CEO to lead the company.[230]
On January 13, 2021, Intel announced that Swan would be replaced as CEO by Pat Gelsinger, effective February 15. Gelsinger is a former Intel chief technology officer who had previously been head of VMware.[231]
In March 2021, Intel removed the mandatory retirement age for its corporate officers.[232]
In October 2023, Intel announced it would be spinning off its Programmable Solutions Group business unit into a separate company at the start of 2024, while maintaining majority ownership and intending to seek an IPO within three years to raise funds.[184][233]
On December 1, 2024, Pat Gelsinger retired from the position of Intel CEO and stepped down from the company's board of directors.[234][235] David Zinsner and Michelle Johnston Holthaus were named interim co-CEOs.[236] On March 13, 2025, it was announced that Gelsinger would be formally replaced by the American executive Lip-Bu Tan starting March 18, 2025.[237]
The 10 largest shareholders of Intel as of December 2023 were:[238]
As of March 2023[update]:[239]
Prior to March 2021, Intel had a mandatory retirement policy for its CEOs when they reached age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove retired as chairman and as a member of the board of directors in 2005 at age 68.
Intel's headquarters are located in Santa Clara, California, and the company hasoperations around the world. Its largest workforce concentration anywhere is inWashington County, Oregon[241](in thePortland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities.[242]Outside the United States, the company has facilities in China, Costa Rica,Malaysia, Israel, Ireland, India,Russia, Argentina andVietnam, in 63 countries and regions internationally. In March 2022, Intel stopped supplying the Russian market because ofinternational sanctions during the Russo-Ukrainian War.[243]In the U.S. Intel employs significant numbers of people in California,Colorado,Massachusetts,Arizona,New Mexico,Oregon, Texas,WashingtonandUtah. In Oregon, Intel is the state's largest private employer.[242][244]The company is the largest industrial employer inNew Mexicowhile in Arizona the company has 12,000 employees as of January 2020.[245]
Intel invests heavily in research in China and about 100 researchers – or 10% of the total number of researchers from Intel – are located in Beijing.[246]
In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600 and 1000 workers in the north.[247]
In January 2014, it was reported that Intel would cut about 5,000 jobs from its workforce of 107,000. The announcement was made a day after it reported earnings that missed analyst targets.[248]
In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. As of 2014[update], Intel employs 10,000 workers at four development centers and two production plants in Israel.[249]
Due to declining PC sales, in 2016 Intel cut 12,000 jobs.[250]In 2021, Intel reversed course under new CEO Pat Gelsinger and started hiring thousands of engineers.[251]
Intel has a Diversity Initiative, including employee diversity groups,[252] as well as a supplier diversity program.[253] Like many companies with employee diversity groups, they include groups based on race and nationality as well as sexual identity and religion. In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups,[254] and supports a Muslim employees group,[255] a Jewish employees group,[256] and a Bible-based Christian group.[257][258]
Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign, including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by Working Mother magazine.
In January 2015, Intel announced the investment of $300 million over the next five years to enhance gender and racial diversity in their own company as well as the technology industry as a whole.[259][260][261][262][263]
In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report.[264] The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female.[264] NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem.[265]
In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs".[266] Through multiplier effects, every 10 Intel jobs were found, on average, to create 31 jobs in other sectors of the economy.[267]
Intel has been addressing supply base reduction as an issue since the mid-1980s, adopting an "n + 1" rule of thumb, whereby the maximum number of suppliers required to maintain production levels for each component is determined and no more than one additional supplier is engaged for that component; for example, if two suppliers are sufficient for a given part, at most three are used.[268]
Intel has been operating in the State of Israel since Dov Frohman founded the Israeli branch of the company in 1974 in a small office in Haifa. Intel Israel currently has development centers in Haifa, Jerusalem and Petah Tikva, and has a manufacturing plant in the Kiryat Gat industrial park that develops and manufactures microprocessors and communications products. Intel employed about 10,000 employees in Israel in 2013. Maxine Fassberg had been the CEO of Intel Israel since 2007 and the Vice President of Intel Global. In December 2016, Fassberg announced her resignation; her position of chief executive officer (CEO) has been filled by Yaniv Gerti since January 2017.
In June 2024, the company announced that it was stopping development on a Kiryat Gat-based factory in Israel. The site was expected to cost $25 billion, with $3.2 billion provided by the Israeli government in the form of a grant.[269]
In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion.[270] As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers.[271] After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers.[272] In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition.[273]
In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business.[274] Intel planned to use Infineon's technology in laptops, smartphones, netbooks, tablets and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips.[275]
In March 2011, Intel bought most of the assets of Cairo-based SySDSoft.[276]In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches.[277]The company used to be included on the EE Times list of 60 Emerging Startups.[277]In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.[278]
In July 2012, Intel agreed to buy 10% of the shares ofASML HoldingNV for $2.1 billion and another $1 billion for 5% of the shares that need shareholder approval to fund relevant research and development efforts, as part of a EUR3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultra-violet lithography by as much as two years.[279]
In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition between US$30 million and $50 million.[280]
The acquisition of Indisys, a Spanish natural language recognition startup, was announced in September 2013. The terms of the deal were not disclosed, but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explains that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms".[281]
In December 2014, Intel bought PasswordBox.[282]
In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer. The deal was worth $24.8 million.[283] In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in its expansion of its range of chips in devices with Internet connection capability.[284] In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, in its largest acquisition to date.[285] The acquisition completed in December 2015.[286] In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price.[287]
In August 2016, Intel purchased deep-learning startup Nervana Systems for over $400 million.[288] In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price.[289] In March 2017, Intel announced that they had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems, for US$15.3 billion.[290] In June 2017, Intel Corporation announced an investment of over ₹1,100 crore (US$130 million) for its upcoming Research and Development (R&D) centre in Bangalore, India.[291]
In January 2019, Intel announced an investment of over $11 billion in a new Israeli chip plant, according to the Israeli finance minister.[292]
In November 2021, Intel recruited some of the employees of the Centaur Technology division from VIA Technologies in a deal worth $125 million, effectively acquiring the talent and know-how of VIA's x86 division.[293][294] VIA retained the x86 licence and associated patents, and its Zhaoxin CPU joint venture continues.[295] In December 2021, Intel said it would invest $7.1 billion to build a new chip-packaging and testing factory in Malaysia. The new investment will expand the operations of its Malaysian subsidiary across Penang and Kulim, creating more than 4,000 new Intel jobs and more than 5,000 local construction jobs.[296] Also in December 2021, Intel announced its plan to take its Mobileye automotive unit public via an IPO of newly issued stock in 2022, maintaining its majority ownership of the company.[297]
In February 2022, Intel agreed to acquire Israeli chip manufacturer Tower Semiconductor for $5.4 billion.[298][299] In August 2023, Intel terminated the acquisition as it failed to obtain approval from Chinese regulators within the 18-month transaction deadline.[300][301] In May 2022, Intel announced that it had acquired Finnish graphics technology firm Siru Innovations. The firm, founded by former AMD and Qualcomm mobile GPU engineers, is focused on developing software and silicon building blocks for GPUs made by other companies, and is set to join Intel's fledgling Accelerated Computing Systems and Graphics Group.[302] Also in May 2022, it was announced that Ericsson and Intel had teamed up to launch a tech hub in California focused on the research and development of cloud RAN technology. The hub focuses on improving Ericsson Cloud RAN and Intel technology, including improving energy efficiency and network performance, reducing time to market, and monetizing new business opportunities such as enterprise applications.[303]
In April 2024, Intel reached a definitive agreement to sell 51% of Altera to Silver Lake. With this sale and Silver Lake now owning a majority stake, Intel also announced the cancellation of the potential IPO being conducted for Altera.[304]
In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next-generation notebooks.[305]The company is setting aside a $300 million fund to be spent over the next three to four years in areas related to ultrabooks.[305]Intel announced the ultrabook concept at Computex in 2011. The ultrabook is defined as a thin (less than 0.8 inches [~2 cm] thick[306]) notebook that utilizes Intel processors[306]and also incorporates tablet features such as a touch screen and long battery life.[305][306]
At the Intel Developers Forum in 2011, four Taiwan ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips.[307] Intel planned to improve the power consumption of its chips for ultrabooks, such as the new Ivy Bridge processors in 2013, which would have a default thermal design power of only 10 W.[308]
Intel's price goal for ultrabooks was below $1000;[306] however, according to two presidents from Acer and Compaq, this goal would not be achieved if Intel did not lower the price of its chips.[309]
Intel has participated significantly in open source communities since 1999.[310][self-published source] For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics cards of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards,[311] available under a BSD-compatible license,[312] which were also ported to OpenBSD.[312] Binary firmware files for non-wireless Ethernet devices were also released under a BSD licence allowing free redistribution.[313] Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also ran the LessWatts.org campaign.[314]
However, after the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for thefirmwarethat must be included in the operating system for the wireless devices to operate.[315]As a result of this, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to theopen source community.Linspire-LinuxcreatorMichael Robertsonoutlined the difficult position that Intel was in releasing toopen source, as Intel did not want to upset their large customerMicrosoft.[316]Theo de RaadtofOpenBSDalso claimed that Intel is being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open source conference.[317]In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still[when?]has not gained a license compatible with free software principles.[318][319][320][321][322]
Intel has also supported other open source projects such as Blender[323] and Open 3D Engine.[324]
Throughout its history, Intel has had three logos.
The first Intel logo, introduced in April 1969, featured the company's name stylized in all lowercase, with the letter "e" dropped below the other letters. The second logo, introduced on January 3, 2006, was inspired by the "Intel Inside" campaign, featuring a swirl around the Intel brand name.[325] The third logo, introduced on September 2, 2020, was inspired by the previous logos. It removes the swirl and redesigns the letterforms for refined symmetry, balance, and proportion; the dot on the "i" becomes the new visual identity, representing the potential and power of Intel's processors.[326][327]
Intel has become one of the world's most recognizable computer brands following its long-runningIntel Insidecampaign.[328]The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers,MicroAge.[329]
In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such asAMD,Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of Marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary – although indirect – driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips. However, Mion felt that the marketplace should decide which processors they wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for.[329]
Mion felt that the public did not really need to fully understand why Intel chips were better; they just needed to feel they were better. So Mion proposed a market test. Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in their stores in that area. To make the test easier to monitor, Mion decided to do the test in Boulder, Colorado, where it had a single store. Virtually overnight, the sales of personal computers in that store dramatically shifted to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide.[329] As is often the case with computer lore, other tidbits have been combined to explain how things evolved; "Intel Inside" has not escaped that tendency, and other "explanations" have been floating around.
Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the U.S. and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japan idea to become "Intel Inside" which eventually elevated to the worldwide branding campaign in 1991, by Intel marketing manager Dennis Carter.[330]A case study, "Inside Intel Inside", was put together by Harvard Business School.[331]The five-note jingle was introduced in 1994 and by its tenth anniversary was being heard in 130 countries around the world. The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising ofSalt Lake City.[332]The Intelswirllogo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove.[333][better source needed]
TheIntel Insideadvertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers.[334]Intel paid some of the advertiser's costs for an ad that used theIntel Insidelogo andxylo-marimbajingle.[335]
In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet.[336]Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing.[336]The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising with Intel Inside being part of that.[337]
In April 2025, chief marketing officer Brett Hannath announced a new marketing campaign—"That's the power of Intel Inside"—to highlight the usage of Intel products across different markets and industries.[338]
The D♭–D♭–G♭–D♭–A♭xylophone/marimbajingle, known as the "Intel bong",[339]used in Intel advertising was produced byMusikvergnuegenand written byWalter Werzowa, once a member of the Austrian 1980s sampling bandEdelweiss.[340]The Intel jingle was made in 1994 to coincide with the launch of the Pentium. It was modified in 1999 to coincide with the launch of thePentium III, although it overlapped with the 1994 version which was phased out in 2004. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (shining sound) after the final note.
The jingle was remade a second time in 2004 to coincide with the new logo change.[citation needed]Again, it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006, with the melody unchanged.
Another remake of the jingle debuted with Intel's new visual identity.[326]The company has made use of numerous variants since its rebranding in 2020 (while retaining the mainstream 2006 version).
In 2006, Intel expanded its promotion of open specification platforms beyondCentrino, to include theViivmedia center PC and the business desktopIntel vPro.
In mid-January 2006, Intel announced that it was dropping the long-running Pentium name from its processors. The Pentium name had first been used for the P5-core Intel processors and was adopted to comply with court rulings that prevented the trademarking of a string of numbers, so that competitors could not simply give their processors the same name, as had been done with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased the Pentium name out of mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good–better–best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company had to offer.[341]
According to spokesman Bill Calder, Intel has maintained only the Celeron brand, the Atom brand for netbooks and the vPro lineup for businesses. Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9, in order of performance from lowest to highest. First-generation Core products carry a three-digit name, such as i5-750; second-generation products carry a four-digit name, such as i5-2500; and from the 10th generation onwards, desktop processors carry a five-digit name, such as i9-10900K. In all cases, a 'K' suffix indicates an unlocked processor, enabling additional overclocking (for instance, 2500K). vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name.[342] In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide.[343]
Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies.[342]
In 2022, Intel announced that it was dropping the Pentium and Celeron naming schemes for its desktop and laptop entry-level processors. The "Intel Processor" branding replaced the old Pentium and Celeron names starting in 2023.[344][345]
In 2023, Intel announced that it would drop the 'i' from its future processor names; for example, products such as Core i7 would become Core 7. Higher-end processors additionally carry the "Ultra" designation, as in Core Ultra 7.[346][347]
Neo Sans Intel is a customized version of Neo Sans, based on the Neo Sans and Neo Tech typefaces designed by Sebastian Lester in 2004.[348] It was introduced alongside Intel's rebranding in 2006. Previously, Intel had used Helvetica as its standard typeface in corporate marketing.
Intel Clear is a global font announced in 2014 and designed to be used across all of the company's communications.[349][350] The font family was designed by Red Peek Branding and Dalton Maag.[351] Initially available in Latin, Greek and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface.[352][353] Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd.[354] Neo Sans Intel remained in the logo and continued to mark the processor type and socket on the packaging of Intel's processors.
In 2020, as part of the new visual identity, a new typeface, Intel One, was designed. It replaced Intel Clear as the font used in most of the company's branding, although it is still used alongside the Intel Clear typeface. In the logo it replaced Neo Sans Intel, which nevertheless continues to mark the processor type and socket on the packaging of Intel's processors.
Intel Brand Book is a book produced by Red Peak Branding as part of Intel's new brand identity campaign, celebrating the company's achievements while setting the new standard for what Intel looks, feels and sounds like.[356]
In November 2014, Intel designed aPaddington Bearstatue—themed "Little Bear Blue"—one of fifty statues created by various celebrities and companies which were located around London.[357]Created prior to the release of the filmPaddington, the Intel-designed statue was located outsideFramestoreinChancery Lane, London, a British visual-effects company which uses Intel technology for films includingPaddington.[358]The statues were then auctioned to raise funds for theNational Society for the Prevention of Cruelty to Children(NSPCC).[357][359]
Intel sponsors theIntel Extreme Masters, a series of internationalesportstournaments.[360]It was also a sponsor for the Formula 1 teamsBMW SauberandScuderia Ferraritogether withAMD,AT&T,Pernod Ricard,DiageoandVodafone.[361]In 2013, Intel became a sponsor ofFC Barcelona.[362]In 2017, Intel became a sponsor of theOlympic Games, lasting from the2018 Winter Olympicsto the2024 Summer Olympics.[363]In 2024, Intel andRiot Gameshad an annual sponsorship valued at US$5 million, and one withJD Gamingfor US$3.3 million. The company also had a sponsorship withGlobal Esports.[364]
In October 2006, aTransmeta lawsuitwas filed against Intel for patent infringement on computer architecture and power efficiency technologies.[365]The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop lawsuits against each other, while Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.[366]
In September 2005, Intel filed a response to an AMD lawsuit,[367]disputing AMD's claims, and claiming that Intel's business practices are fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries.[368]Legal analysts predicted the lawsuit would drag on for a number of years, since Intel's initial response indicated its unwillingness to settle with AMD.[369][370]In 2008, a court date was finally set.[371][372]
On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors.
On November 12, 2009, AMD agreed to drop the antitrust lawsuit against Intel in exchange for $1.25 billion.[372]A joint press release published by the two chip makers stated "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development."[373][374]
An antitrust lawsuit[375] and a class-action suit relating to cold calling employees of other companies have been settled.[376]
In 2005, the localFair Trade Commissionfound that Intel violated theJapanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order.[377][378][379][380]
In September 2007, South Korean regulators accused Intel of breaking antitrust law. The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales if found guilty.[381]In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition of not buying products from AMD.[382]
New York started an investigation of Intel in January 2008 on whether the company violated antitrust laws in pricing and sales of its microprocessors.[383]In June 2008, theFederal Trade Commissionalso began an antitrust investigation of the case.[384]In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010.[385][386][387][388]
In November 2009, following a two-year investigation,New York Attorney GeneralAndrew Cuomosued Intel, accusing them of bribery and coercion, claiming that Intel bribed computer makers to buy more of their chips than those of their rivals and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel has denied these claims.[389]
On July 22, 2010,Dellagreed to a settlement with theU.S. Securities and Exchange Commission(SEC) to pay $100 million in penalties resulting from charges that Dell did not accuratelydiscloseaccounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10% of Dell's operating income in FY 2003 to 38% in FY 2006, and peaked at 76% in the first quarter of FY 2007."[390]Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped their rebates, causing Dell's financial performance to fall.[391][392][393]
In July 2007, theEuropean Commissionaccused Intel ofanti-competitive practices, mostly againstAMD.[394]The allegations, going back to 2003, include giving preferential prices to computer makers buying most or all of theirchipsfrom Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions.[395]Intel responded that the allegations were unfounded and instead qualified its market behavior as consumer-friendly.[395]General counselBruce Sewellresponded that the commission had misunderstood some factual assumptions regarding pricing and manufacturing costs.[396]
In February 2008, Intel announced that its office in Munich had been raided byEuropean Unionregulators. Intel reported that it was cooperating with investigators.[397]Intel faced a fine of up to 10% of its annual revenue if found guilty of stifling competition.[398]AMD subsequently launched a website promoting these allegations.[399][400]In June 2008, the EU filed new charges against Intel.[401]In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, includingAcer,Dell,HP,LenovoandNEC,[402]to exclusively use Intel chips in their products, and therefore harmed other, less successful companies including AMD.[402][403][404]The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market and in doing so had made a "serious and sustained violation of the EU's antitrust rules".[402]In addition to the fine, Intel was ordered by the commission to immediately cease all illegal practices.[402]Intel has said that they will appeal against the commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal.[402]
In 2022, the €1.06 billion fine was dropped, but it was subsequently re-imposed in September 2023 as a €376.36 million fine.[405]
Intel has been accused by some residents ofRio Rancho, New Mexicoof allowingvolatile organic compounds(VOCs) to be released in excess of their pollution permit. One resident claimed that a release of 1.4 tons ofcarbon tetrachloridewas measured from one acid scrubber during the fourth quarter of 2003 but an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003.[406]
Another resident alleges that Intel was responsible for the release of other VOCs from their Rio Rancho site and that anecropsyof lung tissue from two deceased dogs in the area indicated trace amounts oftoluene,hexane,ethylbenzene, andxyleneisomers,[407]all of which aresolventsused in industrial settings but also commonly found ingasoline, retailpaint thinnersand retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented more than 1,580 pounds (720 kg) of VOCs were released in June and July 2006.[408]
Intel's environmental performance is published annually in their corporate responsibility report.[409]
In 2009, Intel announced that it planned to undertake an effort to removeconflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within theDemocratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from theEnough Projectand other organizations. During a keynote address atConsumer Electronics Show2014, Intel CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict free. In 2016, Intel stated that it had expected its entire supply chain to be conflict-free by the end of the year.[410][411][412]
In its 2012 rankings on the progress of consumer electronics companies relating toconflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress".[413]In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals.[414]
Intel has faced complaints ofage discriminationin firing and layoffs. Intel was sued in 1993 by nine former employees, over allegations that they were laid off because they were over the age of 40.[415]
A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90% of people who have been laid off or fired from Intel are over the age of 40. Upside magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any.[416] Intel has denied that age plays any role in its employment practices.[417] FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47.[416] Hamidi was blocked in a 1999 court decision from using Intel's email system to distribute criticism of the company to employees,[418] a decision that was overturned in 2003 in Intel Corp. v. Hamidi.
In August 2016, Indian officials of theBruhat Bengaluru Mahanagara Palike(BBMP) parked garbage trucks on Intel's campus and threatened to dump them for evading payment of property taxes between 2007 and 2008, to the tune of₹340 million(US$4.0 million). Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden. Previously, Intel had appealed the demand in theKarnatakahigh court in July, during which the court ordered Intel to pay BBMP half the owed amount of₹170 million(US$2.0 million) plus arrears by August 28 of that year.[419][420]
|
https://en.wikipedia.org/wiki/Intel
|
Inmodular arithmetic, theintegerscoprime(relatively prime) tonfrom the set{0,1,…,n−1}{\displaystyle \{0,1,\dots ,n-1\}}ofnnon-negative integers form agroupunder multiplicationmodulon, called themultiplicative group of integers modulon. Equivalently, the elements of this group can be thought of as thecongruence classes, also known asresiduesmodulon, that are coprime ton.
Hence another name is the group ofprimitive residue classes modulon.
In thetheory of rings, a branch ofabstract algebra, it is described as thegroup of unitsof thering of integers modulon. Hereunitsrefers to elements with amultiplicative inverse, which, in this ring, are exactly those coprime ton.
This group, usually denoted(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}, is fundamental innumber theory. It is used incryptography,integer factorization, andprimality testing. It is anabelian,finitegroup whoseorderis given byEuler's totient function:|(Z/nZ)×|=φ(n).{\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n).}For primenthe group iscyclic, and in general the structure is easy to describe, but no simple general formula for findinggeneratorsis known.
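The definitions above are easy to experiment with. The following Python sketch (illustrative only; the helper names units_mod and euler_phi are not from the article or any standard library) lists the elements of (Z/nZ)× for a few n and checks that the group's order equals φ(n) and that it is closed under multiplication modulo n.

```python
from math import gcd

def units_mod(n):
    """Return the elements of (Z/nZ)^x: residues in {1, ..., n-1} coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def euler_phi(n):
    """Euler's totient, computed directly from the definition."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

for n in (7, 12, 20):
    group = units_mod(n)
    # The order of the group equals phi(n).
    assert len(group) == euler_phi(n)
    # Closure under multiplication modulo n.
    assert all((a * b) % n in group for a in group for b in group)
    print(n, group)
```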
It is a straightforward exercise to show that, under multiplication, the set of congruence classes modulo n that are coprime to n satisfies the axioms for an abelian group.
Indeed,ais coprime tonif and only ifgcd(a,n) = 1. Integers in the same congruence classa≡b(modn)satisfygcd(a,n) = gcd(b,n); hence one is coprime tonif and only if the other is. Thus the notion of congruence classes modulonthat are coprime tonis well-defined.
Sincegcd(a,n) = 1andgcd(b,n) = 1impliesgcd(ab,n) = 1, the set of classes coprime tonis closed under multiplication.
Integer multiplication respects the congruence classes; that is,a≡a'andb≡b'(modn)impliesab≡a'b'(modn).
This implies that the multiplication is associative, commutative, and that the class of 1 is the unique multiplicative identity.
Finally, givena, themultiplicative inverseofamodulonis an integerxsatisfyingax≡ 1 (modn).
It exists precisely whenais coprime ton, because in that casegcd(a,n) = 1and byBézout's lemmathere are integersxandysatisfyingax+ny= 1. Notice that the equationax+ny= 1implies thatxis coprime ton, so the multiplicative inverse belongs to the group.
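The Bézout argument above is also how inverses are computed in practice. The sketch below (an illustration, not a prescribed algorithm) uses the extended Euclidean algorithm to find x and y with ax + ny = gcd(a, n) and returns x mod n as the inverse whenever that gcd is 1.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, n):
    """Multiplicative inverse of a modulo n; exists iff gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {n}")
    return x % n

# Example: 7 * 8 = 56 = 5*11 + 1, so 8 is the inverse of 7 modulo 11.
assert inverse_mod(7, 11) == 8
```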
The set of (congruence classes of) integers modulonwith the operations of addition and multiplication is aring.
It is denotedZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }orZ/(n){\displaystyle \mathbb {Z} /(n)}(the notation refers to taking thequotientof integers modulo theidealnZ{\displaystyle n\mathbb {Z} }or(n){\displaystyle (n)}consisting of the multiples ofn).
Outside of number theory the simpler notationZn{\displaystyle \mathbb {Z} _{n}}is often used, though it can be confused with thep-adic integerswhennis a prime number.
The multiplicative group of integers modulon, which is thegroup of unitsin this ring, may be written as (depending on the author)(Z/nZ)×,{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times },}(Z/nZ)∗,{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{*},}U(Z/nZ),{\displaystyle \mathrm {U} (\mathbb {Z} /n\mathbb {Z} ),}E(Z/nZ){\displaystyle \mathrm {E} (\mathbb {Z} /n\mathbb {Z} )}(for GermanEinheit, which translates asunit),Zn∗{\displaystyle \mathbb {Z} _{n}^{*}}, or similar notations. This article uses(Z/nZ)×.{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }.}
The notationCn{\displaystyle \mathrm {C} _{n}}refers to thecyclic groupof ordern.
It isisomorphicto the group of integers modulonunder addition.
Note thatZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }orZn{\displaystyle \mathbb {Z} _{n}}may also refer to the group under addition.
For example, the multiplicative group(Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }}for a primepis cyclic and hence isomorphic to the additive groupZ/(p−1)Z{\displaystyle \mathbb {Z} /(p-1)\mathbb {Z} }, but the isomorphism is not obvious.
The order of the multiplicative group of integers modulonis the number of integers in{0,1,…,n−1}{\displaystyle \{0,1,\dots ,n-1\}}coprime ton. It is given byEuler's totient function:|(Z/nZ)×|=φ(n){\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n)}(sequenceA000010in theOEIS).
For primep,φ(p)=p−1{\displaystyle \varphi (p)=p-1}.
The group(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}iscyclicif and only ifnis 1, 2, 4,pkor 2pk, wherepis an odd prime andk> 0. For all other values ofnthe group is not cyclic.[1][2][3]This was first proved byGauss.[4]
This means that for these n the group is cyclic of order φ(n); that is, {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }\cong \mathrm {C} _{\varphi (n)}.}
By definition, the group is cyclic if and only if it has ageneratorg; that is, the powersg0,g1,g2,…,{\displaystyle g^{0},g^{1},g^{2},\dots ,}give all possible residues moduloncoprime ton(the firstφ(n){\displaystyle \varphi (n)}powersg0,…,gφ(n)−1{\displaystyle g^{0},\dots ,g^{\varphi (n)-1}}give each exactly once).
A generator of(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is called aprimitive root modulon.[5]If there is any generator, then there areφ(φ(n)){\displaystyle \varphi (\varphi (n))}of them.
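For small n, a primitive root can be found by brute force, as in the following illustrative Python sketch: it tries each unit g and checks whether the powers of g exhaust the whole group.

```python
from math import gcd

def primitive_root(n):
    """Return the smallest primitive root modulo n, or None if the group is not cyclic."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    order = len(units)          # phi(n)
    target = set(units)
    for g in units:
        # Collect the powers of g modulo n.
        powers, x = set(), 1
        for _ in range(order):
            x = x * g % n
            powers.add(x)
        if powers == target:    # g generates the whole group
            return g
    return None

assert primitive_root(7) == 3      # 3, 2, 6, 4, 5, 1 run through all residues coprime to 7
assert primitive_root(8) is None   # (Z/8Z)^x is the Klein four-group, not cyclic
```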
Modulo 1 any two integers are congruent, i.e., there is only one congruence class, [0], coprime to 1. Therefore,(Z/1Z)×≅C1{\displaystyle (\mathbb {Z} /1\,\mathbb {Z} )^{\times }\cong \mathrm {C} _{1}}is the trivial group withφ(1) = 1element. Because of its trivial nature, the case of congruences modulo 1 is generally ignored and some authors choose not to include the case ofn= 1 in theorem statements.
Modulo 2 there is only one coprime congruence class, [1], so(Z/2Z)×≅C1{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{\times }\cong \mathrm {C} _{1}}is thetrivial group.
Modulo 4 there are two coprime congruence classes, [1] and [3], so(Z/4Z)×≅C2,{\displaystyle (\mathbb {Z} /4\mathbb {Z} )^{\times }\cong \mathrm {C} _{2},}the cyclic group with two elements.
Modulo 8 there are four coprime congruence classes, [1], [3], [5] and [7]. The square of each of these is 1, so(Z/8Z)×≅C2×C2,{\displaystyle (\mathbb {Z} /8\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}theKlein four-group.
Modulo 16 there are eight coprime congruence classes [1], [3], [5], [7], [9], [11], [13] and [15].{±1,±7}≅C2×C2,{\displaystyle \{\pm 1,\pm 7\}\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}is the 2-torsion subgroup(i.e., the square of each element is 1), so(Z/16Z)×{\displaystyle (\mathbb {Z} /16\mathbb {Z} )^{\times }}is not cyclic. The powers of 3,{1,3,9,11}{\displaystyle \{1,3,9,11\}}are a subgroup of order 4, as are the powers of 5,{1,5,9,13}.{\displaystyle \{1,5,9,13\}.}Thus(Z/16Z)×≅C2×C4.{\displaystyle (\mathbb {Z} /16\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{4}.}
The pattern shown by 8 and 16 holds[6]for higher powers 2k,k> 2:{±1,2k−1±1}≅C2×C2,{\displaystyle \{\pm 1,2^{k-1}\pm 1\}\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}is the 2-torsion subgroup, so(Z/2kZ)×{\displaystyle (\mathbb {Z} /2^{k}\mathbb {Z} )^{\times }}cannot be cyclic, and the powers of 3 are a cyclic subgroup of order2k− 2, so:
(Z/2kZ)×≅C2×C2k−2.{\displaystyle (\mathbb {Z} /2^{k}\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{2^{k-2}}.}
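This structure for powers of two can be checked computationally; the sketch below (illustrative only) verifies, for several k, that 3 has multiplicative order 2^(k−2) modulo 2^k and that −1 is not a power of 3, which is what forces the extra C2 factor.

```python
def multiplicative_order(a, n):
    """Smallest t > 0 with a**t congruent to 1 (mod n); assumes gcd(a, n) == 1."""
    t, x = 1, a % n
    while x != 1:
        x = x * a % n
        t += 1
    return t

for k in range(3, 12):
    n = 2 ** k
    # The powers of 3 form a cyclic subgroup of order 2^(k-2) ...
    assert multiplicative_order(3, n) == 2 ** (k - 2)
    # ... which does not contain -1, so together with {+1, -1} the group is C2 x C_{2^(k-2)}.
    powers_of_3 = {pow(3, e, n) for e in range(2 ** (k - 2))}
    assert (n - 1) not in powers_of_3
```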
By thefundamental theorem of finite abelian groups, the group(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is isomorphic to adirect productof cyclic groups of prime power orders.
More specifically, the Chinese remainder theorem[7] says that if n=p1k1p2k2p3k3…,{\displaystyle \;\;n=p_{1}^{k_{1}}p_{2}^{k_{2}}p_{3}^{k_{3}}\dots ,\;} then the ring Z/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} } is the direct product of the rings corresponding to each of its prime power factors: {\displaystyle \mathbb {Z} /n\mathbb {Z} \cong \mathbb {Z} /p_{1}^{k_{1}}\mathbb {Z} \times \mathbb {Z} /p_{2}^{k_{2}}\mathbb {Z} \times \mathbb {Z} /p_{3}^{k_{3}}\mathbb {Z} \times \cdots }
Similarly, the group of units (Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }} is the direct product of the groups corresponding to each of the prime power factors: {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }\cong (\mathbb {Z} /p_{1}^{k_{1}}\mathbb {Z} )^{\times }\times (\mathbb {Z} /p_{2}^{k_{2}}\mathbb {Z} )^{\times }\times (\mathbb {Z} /p_{3}^{k_{3}}\mathbb {Z} )^{\times }\times \cdots }
For each odd prime powerpk{\displaystyle p^{k}}the corresponding factor(Z/pkZ)×{\displaystyle (\mathbb {Z} /{p^{k}}\mathbb {Z} )^{\times }}is the cyclic group of orderφ(pk)=pk−pk−1{\displaystyle \varphi (p^{k})=p^{k}-p^{k-1}}, which may further factor into cyclic groups of prime-power orders.
For powers of 2 the factor(Z/2kZ)×{\displaystyle (\mathbb {Z} /{2^{k}}\mathbb {Z} )^{\times }}is not cyclic unlessk= 0, 1, 2, but factors into cyclic groups as described above.
The order of the groupφ(n){\displaystyle \varphi (n)}is the product of the orders of the cyclic groups in the direct product.
Theexponentof the group; that is, theleast common multipleof the orders in the cyclic groups, is given by theCarmichael functionλ(n){\displaystyle \lambda (n)}(sequenceA002322in theOEIS).
In other words,λ(n){\displaystyle \lambda (n)}is the smallest number such that for eachacoprime ton,aλ(n)≡1(modn){\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}}holds.
It dividesφ(n){\displaystyle \varphi (n)}and is equal to it if and only if the group is cyclic.
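The following sketch computes λ(n) directly from the element orders (a brute-force illustration, not an efficient formula) and shows that λ(8) = 2 is strictly smaller than φ(8) = 4, while λ(7) = φ(7) = 6 because the group modulo a prime is cyclic.

```python
from math import gcd, lcm   # math.lcm requires Python 3.9+

def carmichael_lambda(n):
    """Carmichael function: the exponent of (Z/nZ)^x, computed from element orders."""
    orders = []
    for a in range(1, n):
        if gcd(a, n) != 1:
            continue
        t, x = 1, a % n
        while x != 1:        # multiplicative order of a modulo n
            x = x * a % n
            t += 1
        orders.append(t)
    return lcm(*orders) if orders else 1

# lambda(8) = 2 while phi(8) = 4: the group modulo 8 is not cyclic.
assert carmichael_lambda(8) == 2
# lambda(p) = p - 1 for a prime p, where the group is cyclic.
assert carmichael_lambda(7) == 6
```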
Ifnis composite, there exists a possibly proper subgroup ofZn×{\displaystyle \mathbb {Z} _{n}^{\times }}, called the "group of false witnesses", comprising the solutions of the equationxn−1=1{\displaystyle x^{n-1}=1}, the elements which, raised to the powern− 1, are congruent to 1 modulon.[8]Fermat's Little Theoremstates that forn = pa prime, this group consists of allx∈Zp×{\displaystyle x\in \mathbb {Z} _{p}^{\times }}; thus forncomposite, such residuesxare "false positives" or "false witnesses" for the primality ofn. The numberx =2 is most often used in this basic primality check, andn =341 = 11 × 31is notable since2341−1≡1mod341{\displaystyle 2^{341-1}\equiv 1\mod 341}, andn =341 is the smallest composite number for whichx =2 is a false witness to primality. In fact, the false witnesses subgroup for 341 contains 100 elements, and is of index 3 inside the 300-element groupZ341×{\displaystyle \mathbb {Z} _{341}^{\times }}.
The smallest example with a nontrivial subgroup of false witnesses is9 = 3 × 3. There are 6 residues coprime to 9: 1, 2, 4, 5, 7, 8. Since 8 is congruent to−1 modulo 9, it follows that 88is congruent to 1 modulo 9. So 1 and 8 are false positives for the "primality" of 9 (since 9 is not actually prime). These are in fact the only ones, so the subgroup {1,8} is the subgroup of false witnesses. The same argument shows thatn− 1is a "false witness" for any odd compositen.
Forn= 91 (= 7 × 13), there areφ(91)=72{\displaystyle \varphi (91)=72}residues coprime to 91, half of them (i.e., 36 of them) are false witnesses of 91, namely 1, 3, 4, 9, 10, 12, 16, 17, 22, 23, 25, 27, 29, 30, 36, 38, 40, 43, 48, 51, 53, 55, 61, 62, 64, 66, 68, 69, 74, 75, 79, 81, 82, 87, 88, and 90, since for these values ofx,x90is congruent to 1 mod 91.
n= 561 (= 3 × 11 × 17) is aCarmichael number, thuss560is congruent to 1 modulo 561 for any integerscoprime to 561. The subgroup of false witnesses is, in this case, not proper; it is the entire group of multiplicative units modulo 561, which consists of 320 residues.
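These false-witness subgroups are easy to enumerate for small n; the following illustrative Python sketch reproduces the counts quoted above for 341, 9 and 561.

```python
from math import gcd

def false_witnesses(n):
    """Residues x coprime to n with x^(n-1) congruent to 1 (mod n), for composite n."""
    return [x for x in range(1, n) if gcd(x, n) == 1 and pow(x, n - 1, n) == 1]

# 2 is a false witness for 341 = 11 * 31, and the subgroup has 100 elements.
fw = false_witnesses(341)
assert 2 in fw and len(fw) == 100
# For 9, only 1 and 8 (= -1 mod 9) pass the test.
assert false_witnesses(9) == [1, 8]
# 561 is a Carmichael number: every residue coprime to 561 is a false witness.
assert len(false_witnesses(561)) == 320
```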
This table shows the cyclic decomposition of(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}and agenerating setforn≤ 128. The decomposition and generating sets are not unique; for example,
(Z/35Z)×≅(Z/5Z)××(Z/7Z)×≅C4×C6≅C4×C2×C3≅C2×C12≅(Z/4Z)××(Z/13Z)×≅(Z/52Z)×{\displaystyle \displaystyle {\begin{aligned}(\mathbb {Z} /35\mathbb {Z} )^{\times }&\cong (\mathbb {Z} /5\mathbb {Z} )^{\times }\times (\mathbb {Z} /7\mathbb {Z} )^{\times }\cong \mathrm {C} _{4}\times \mathrm {C} _{6}\cong \mathrm {C} _{4}\times \mathrm {C} _{2}\times \mathrm {C} _{3}\cong \mathrm {C} _{2}\times \mathrm {C} _{12}\cong (\mathbb {Z} /4\mathbb {Z} )^{\times }\times (\mathbb {Z} /13\mathbb {Z} )^{\times }\\&\cong (\mathbb {Z} /52\mathbb {Z} )^{\times }\end{aligned}}}
(but≇C24≅C8×C3{\displaystyle \not \cong \mathrm {C} _{24}\cong \mathrm {C} _{8}\times \mathrm {C} _{3}}). The table below lists the shortest decomposition (among those, the lexicographically first is chosen – this guarantees isomorphic groups are listed with the same decompositions). The generating set is also chosen to be as short as possible, and fornwith primitive root, the smallest primitive root modulonis listed.
For example, take(Z/20Z)×{\displaystyle (\mathbb {Z} /20\mathbb {Z} )^{\times }}. Thenφ(20)=8{\displaystyle \varphi (20)=8}means that the order of the group is 8 (i.e., there are 8 numbers less than 20 and coprime to it);λ(20)=4{\displaystyle \lambda (20)=4}means the order of each element divides 4; that is, the fourth power of any number coprime to 20 is congruent to 1 (mod 20). The set {3,19} generates the group, which means that every element of(Z/20Z)×{\displaystyle (\mathbb {Z} /20\mathbb {Z} )^{\times }}is of the form3a× 19b(whereais 0, 1, 2, or 3, because the element 3 has order 4, and similarlybis 0 or 1, because the element 19 has order 2).
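A quick computational check of this example (illustrative code, not from the article):

```python
from math import gcd

n = 20
group = {a for a in range(1, n) if gcd(a, n) == 1}        # 8 elements, phi(20) = 8
# Every element is 3^a * 19^b with a in {0,1,2,3} and b in {0,1}.
generated = {pow(3, a, n) * pow(19, b, n) % n for a in range(4) for b in range(2)}
assert generated == group                                  # {3, 19} generates the group
assert all(pow(a, 4, n) == 1 for a in group)               # lambda(20) = 4
```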
The smallest primitive root mod n (0 if no root exists) and the number of elements in a minimal generating set mod n are tabulated in the OEIS.
TheDisquisitiones Arithmeticaehas been translated from Gauss'sCiceronian LatinintoEnglishandGerman. The German edition includes all of his papers on number theory: all the proofs ofquadratic reciprocity, the determination of the sign of theGauss sum, the investigations intobiquadratic reciprocity, and unpublished notes.
|
https://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n
|
Inmathematics, theChinese remainder theoremstates that if one knows theremainders of the Euclidean divisionof anintegernby several integers, then one can determine uniquely the remainder of the division ofnby the product of these integers, under the condition that thedivisorsarepairwise coprime(no two divisors share a common factor other than 1).[1]
The theorem is sometimes calledSunzi's theorem. Both names of the theorem refer to its earliest known statement that appeared inSunzi Suanjing, a Chinese manuscript written during the 3rd to 5th century CE. This first statement was restricted to the following example:
If one knows that the remainder ofndivided by 3 is 2, the remainder ofndivided by 5 is 3, and the remainder ofndivided by 7 is 2, then with no other information, one can determine the remainder ofndivided by 105 (the product of 3, 5, and 7) without knowing the value ofn. In this example, the remainder is 23. Moreover, this remainder is the only possible positive value ofnthat is less than 105.
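A brute-force check of Sunzi's example (illustrative code only) confirms that 23 is the unique solution below 105:

```python
# The only 0 <= n < 105 with remainders 2 (mod 3), 3 (mod 5), 2 (mod 7) is 23.
solutions = [n for n in range(3 * 5 * 7) if n % 3 == 2 and n % 5 == 3 and n % 7 == 2]
assert solutions == [23]
```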
The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers.
The Chinese remainder theorem (expressed in terms ofcongruences) is true over everyprincipal ideal domain. It has been generalized to anyring, with a formulation involvingtwo-sided ideals.
The earliest known statement of the problem appears in the 5th-century bookSunzi Suanjingby the Chinese mathematician Sunzi:[2]
There are certain things whose number is unknown. If we count them by threes, we have two left over; by fives, we have three left over; and by sevens, two are left over. How many things are there?[3]
Sunzi's work would not be considered a theorem by modern standards; it only gives one particular problem, without showing how to solve it, much less any proof about the general case or a general algorithm for solving it.[4] An algorithm for solving this problem was described by Aryabhata (6th century).[5] Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century) and appear in Fibonacci's Liber Abaci (1202).[6] The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections,[7] which was translated into English in the early 19th century by the British missionary Alexander Wylie.[8]
The notion of congruences was first introduced and used byCarl Friedrich Gaussin hisDisquisitiones Arithmeticaeof 1801.[10]Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction."[11]Gauss introduces a procedure for solving the problem that had already been used byLeonhard Eulerbut was in fact an ancient method that had appeared several times.[12]
Letn1, ...,nkbe integers greater than 1, which are often calledmoduliordivisors. Let us denote byNthe product of theni.
The Chinese remainder theorem asserts that if theniarepairwise coprime, and ifa1, ...,akare integers such that 0 ≤ai<nifor everyi, then there is one and only one integerx, such that 0 ≤x<Nand the remainder of theEuclidean divisionofxbyniisaifor everyi.
This may be restated as follows in terms ofcongruences:
If the ni{\displaystyle n_{i}} are pairwise coprime, and if a1, ..., ak are any integers, then the system
{\displaystyle x\equiv a_{i}{\pmod {n_{i}}}\qquad {\text{for every }}i=1,\ldots ,k,}
has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1≡x2(modN).[13]
In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map
{\displaystyle x{\bmod {N}}\;\mapsto \;(x{\bmod {n_{1}}},\,\ldots ,\,x{\bmod {n_{k}}})}
defines a ring isomorphism[14]
{\displaystyle \mathbb {Z} /N\mathbb {Z} \cong \mathbb {Z} /n_{1}\mathbb {Z} \times \cdots \times \mathbb {Z} /n_{k}\mathbb {Z} .}
between theringofintegers moduloNand thedirect productof the rings of integers modulo theni. This means that for doing a sequence of arithmetic operations inZ/NZ,{\displaystyle \mathbb {Z} /N\mathbb {Z} ,}one may do the same computation independently in eachZ/niZ{\displaystyle \mathbb {Z} /n_{i}\mathbb {Z} }and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation ifNand the number of operations are large. This is widely used, under the namemulti-modular computation, forlinear algebraover the integers or therational numbers.
The theorem can also be restated in the language ofcombinatoricsas the fact that the infinitearithmetic progressionsof integers form aHelly family.[15]
The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness.
Suppose thatxandyare both solutions to all the congruences. Asxandygive the same remainder, when divided byni, their differencex−yis a multiple of eachni. As theniare pairwise coprime, their productNalso dividesx−y, and thusxandyare congruent moduloN. Ifxandyare supposed to be non-negative and less thanN(as in the first statement of the theorem), then their difference may be a multiple ofNonly ifx=y.
The map
{\displaystyle x{\bmod {N}}\;\mapsto \;(x{\bmod {n_{1}}},\,\ldots ,\,x{\bmod {n_{k}}})}
maps congruence classes modulo N to sequences of congruence classes modulo ni. The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution.
This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can.
Existence may be established by an explicit construction ofx.[16]This construction may be split into two steps, first solving the problem in the case of two moduli, and then extending this solution to the general case byinductionon the number of moduli.
We want to solve the system:
{\displaystyle x\equiv a_{1}{\pmod {n_{1}}},\qquad x\equiv a_{2}{\pmod {n_{2}}},}
where n1{\displaystyle n_{1}} and n2{\displaystyle n_{2}} are coprime.
Bézout's identity asserts the existence of two integers m1{\displaystyle m_{1}} and m2{\displaystyle m_{2}} such that
{\displaystyle m_{1}n_{1}+m_{2}n_{2}=1.}
The integers m1{\displaystyle m_{1}} and m2{\displaystyle m_{2}} may be computed by the extended Euclidean algorithm.
A solution is given by
{\displaystyle x=a_{1}m_{2}n_{2}+a_{2}m_{1}n_{1}.}
Indeed,
{\displaystyle x=a_{1}m_{2}n_{2}+a_{2}m_{1}n_{1}=a_{1}(1-m_{1}n_{1})+a_{2}m_{1}n_{1}=a_{1}+(a_{2}-a_{1})m_{1}n_{1},}
implying that x≡a1(modn1).{\displaystyle x\equiv a_{1}{\pmod {n_{1}}}.} The second congruence is proved similarly, by exchanging the subscripts 1 and 2.
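The two-moduli construction translates directly into code. The sketch below (function names are illustrative, not from the article) computes the Bézout coefficients with the extended Euclidean algorithm and returns a1m2n2 + a2m1n1 reduced modulo n1n2.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt_pair(a1, n1, a2, n2):
    """Solve x = a1 (mod n1), x = a2 (mod n2) for coprime n1, n2."""
    g, m1, m2 = extended_gcd(n1, n2)     # m1*n1 + m2*n2 = 1
    assert g == 1, "moduli must be coprime"
    return (a1 * m2 * n2 + a2 * m1 * n1) % (n1 * n2)

# x = 2 (mod 3) and x = 3 (mod 5)  ->  x = 8 (mod 15)
assert crt_pair(2, 3, 3, 5) == 8
```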
Consider a sequence of congruence equations:
{\displaystyle x\equiv a_{i}{\pmod {n_{i}}},\qquad i=1,\ldots ,k,}
where the ni{\displaystyle n_{i}} are pairwise coprime. The first two equations have a solution a1,2{\displaystyle a_{1,2}} provided by the method of the previous section. The set of the solutions of these first two equations is the set of all solutions of the equation
{\displaystyle x\equiv a_{1,2}{\pmod {n_{1}n_{2}}}.}
As the otherni{\displaystyle n_{i}}are coprime withn1n2,{\displaystyle n_{1}n_{2},}this reduces solving the initial problem ofkequations to a similar problem withk−1{\displaystyle k-1}equations. Iterating the process, one gets eventually the solutions of the initial problem.
For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless,Lagrange interpolationis a special case of this construction, applied topolynomialsinstead of integers.
Let Ni=N/ni{\displaystyle N_{i}=N/n_{i}} be the product of all moduli but one. As the ni{\displaystyle n_{i}} are pairwise coprime, Ni{\displaystyle N_{i}} and ni{\displaystyle n_{i}} are coprime. Thus Bézout's identity applies, and there exist integers Mi{\displaystyle M_{i}} and mi{\displaystyle m_{i}} such that
{\displaystyle M_{i}N_{i}+m_{i}n_{i}=1.}
A solution of the system of congruences is
{\displaystyle x=\sum _{i=1}^{k}a_{i}M_{i}N_{i}.}
In fact, as Nj{\displaystyle N_{j}} is a multiple of ni{\displaystyle n_{i}} for i≠j,{\displaystyle i\neq j,} we have
{\displaystyle x\equiv a_{i}M_{i}N_{i}\equiv a_{i}(1-m_{i}n_{i})\equiv a_{i}{\pmod {n_{i}}},}
for every i.{\displaystyle i.}
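The direct construction can be implemented as follows; this is an illustrative sketch in which Mi is obtained with Python's built-in modular inverse (pow with exponent −1, available since Python 3.8) rather than an explicit extended Euclidean algorithm.

```python
from math import prod

def crt(residues, moduli):
    """Direct construction: x = sum_i a_i * M_i * N_i (mod N), where N_i = N / n_i
    and M_i is the inverse of N_i modulo n_i (moduli assumed pairwise coprime)."""
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = N // n_i
        M_i = pow(N_i, -1, n_i)      # modular inverse, Python 3.8+
        x += a_i * M_i * N_i
    return x % N

# The running example: x = 0 (mod 3), x = 3 (mod 4), x = 4 (mod 5)  ->  x = 39
assert crt([0, 3, 4], [3, 4, 5]) == 39
```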
Consider a system of congruences:
{\displaystyle x\equiv a_{i}{\pmod {n_{i}}},\qquad i=1,\ldots ,k,}
where the ni{\displaystyle n_{i}} are pairwise coprime, and let N=n1n2⋯nk.{\displaystyle N=n_{1}n_{2}\cdots n_{k}.} In this section several methods are described for computing the unique solution for x{\displaystyle x}, such that 0≤x<N,{\displaystyle 0\leq x<N,} and these methods are applied to the example
{\displaystyle x\equiv 0{\pmod {3}},\qquad x\equiv 3{\pmod {4}},\qquad x\equiv 4{\pmod {5}}.}
Several methods of computation are presented. The first two are useful for small examples, but become very inefficient when the product n1⋯nk{\displaystyle n_{1}\cdots n_{k}} is large. The third one uses the existence proof given in § Existence (constructive proof). It is the most convenient when the product n1⋯nk{\displaystyle n_{1}\cdots n_{k}} is large, or for computer computation.
It is easy to check whether a value ofxis a solution: it suffices to compute the remainder of theEuclidean divisionofxby eachni. Thus, to find the solution, it suffices to check successively the integers from0toNuntil finding the solution.
Although very simple, this method is very inefficient. For the simple example considered here,40integers (including0) have to be checked for finding the solution, which is39. This is anexponential timealgorithm, as the size of the input is, up to a constant factor, the number of digits ofN, and the average number of operations is of the order ofN.
Therefore, this method is hardly ever used, whether for hand-written computation or on computers.
The search for the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that 0≤ai<ni{\displaystyle 0\leq a_{i}<n_{i}} (if it were not the case, it would suffice to replace each ai{\displaystyle a_{i}} by the remainder of its division by ni{\displaystyle n_{i}}). This implies that the solution belongs to the arithmetic progression
{\displaystyle a_{1},\;a_{1}+n_{1},\;a_{1}+2n_{1},\;\ldots }
By testing the values of these numbers modulo n2,{\displaystyle n_{2},} one eventually finds a solution x2{\displaystyle x_{2}} of the first two congruences. Then the solution belongs to the arithmetic progression
{\displaystyle x_{2},\;x_{2}+n_{1}n_{2},\;x_{2}+2n_{1}n_{2},\;\ldots }
Testing the values of these numbers modulon3,{\displaystyle n_{3},}and continuing until every modulus has been tested eventually yields the solution.
This method is faster if the moduli have been ordered by decreasing value, that is if n1>n2>⋯>nk.{\displaystyle n_{1}>n_{2}>\cdots >n_{k}.} For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9 = 4 + 5, 14 = 9 + 5, ... For each of them, compute the remainder by 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 20 = 5 × 4 at each step, and computing only the remainders by 3. This gives: 4, 9, 14, 19 have remainders 0, 1, 2, 3 modulo 4, so 19 satisfies the first two congruences; then 19 has remainder 1 modulo 3, and 39 = 19 + 20 has remainder 0, so 39 is the solution.
This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. Although dramatically faster than the systematic search, this method also has anexponential timecomplexity and is therefore not used on computers.
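For completeness, here is an illustrative sketch of the sieving method just described, processing the moduli from largest to smallest.

```python
def crt_by_sieving(residues, moduli):
    """Solve the system by sieving, processing the moduli from largest to smallest."""
    pairs = sorted(zip(moduli, residues), reverse=True)   # largest modulus first
    step, x = pairs[0][0], pairs[0][1]
    for n_i, a_i in pairs[1:]:
        while x % n_i != a_i:          # walk the arithmetic progression x, x+step, ...
            x += step
        step *= n_i                    # solutions now repeat with this larger period
    return x

# Same example as above: the sieve finds 39.
assert crt_by_sieving([0, 3, 4], [3, 4, 5]) == 39
```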
Theconstructive existence proofshows that, in thecase of two moduli, the solution may be obtained by the computation of theBézout coefficientsof the moduli, followed by a few multiplications, additions andreductions modulon1n2{\displaystyle n_{1}n_{2}}(for getting a result in theinterval(0,n1n2−1){\displaystyle (0,n_{1}n_{2}-1)}). As the Bézout's coefficients may be computed with theextended Euclidean algorithm, the whole computation, at most, has aquadratic timecomplexityofO((s1+s2)2),{\displaystyle O((s_{1}+s_{2})^{2}),}wheresi{\displaystyle s_{i}}denotes the number of digits ofni.{\displaystyle n_{i}.}
For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process eventually provides the solution with a complexity that is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the first two moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers.
Another strategy consists of partitioning the moduli into pairs whose products have comparable sizes (as far as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximately halved. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time.
On the current example (which has only three moduli), both strategies are identical and work as follows.
Bézout's identity for 3 and 4 is
{\displaystyle 1\times 4+(-1)\times 3=1.}
Putting this in the formula given for proving the existence gives
{\displaystyle 0\times 1\times 4+3\times (-1)\times 3=-9}
for a solution of the first two congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. One may continue with any of these solutions, but the solution 3 = −9 + 12 is smaller (in absolute value) and thus probably leads to an easier computation.
Bézout's identity for 5 and 3 × 4 = 12 is
{\displaystyle 5\times 5+(-2)\times 12=1.}
Applying the same formula again, we get a solution of the problem:
{\displaystyle x=5\times 5\times 3+(-2)\times 12\times 4=-21.}
The other solutions are obtained by adding any multiple of3 × 4 × 5 = 60, and the smallest positive solution is−21 + 60 = 39.
The system of congruences solved by the Chinese remainder theorem may be rewritten as asystem of linear Diophantine equations:
where the unknown integers arex{\displaystyle x}and thexi.{\displaystyle x_{i}.}Therefore, every general method for solving such systems may be used for finding the solution of Chinese remainder theorem, such as the reduction of thematrixof the system toSmith normal formorHermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use ofBézout's identity.
In § Statement, the Chinese remainder theorem has been stated in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain R: it suffices to replace "integer" by "element of the domain" and Z{\displaystyle \mathbb {Z} } by R. These two versions of the theorem are true in this context, because the proofs (except for the first existence proof) are based on Euclid's lemma and Bézout's identity, which are true over every principal ideal domain.
However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity.
The statement in terms of remainders given in § Theorem statement cannot be generalized to any principal ideal domain, but its generalization to Euclidean domains is straightforward. The ring of univariate polynomials over a field is the typical example of a Euclidean domain which is not the integers. Therefore, we state the theorem for the case of the ring R=K[X]{\displaystyle R=K[X]} for a field K.{\displaystyle K.} To get the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain.
The Chinese remainder theorem for polynomials is thus: Let Pi(X){\displaystyle P_{i}(X)} (the moduli) be, for i=1,…,k{\displaystyle i=1,\dots ,k}, pairwise coprime polynomials in R=K[X]{\displaystyle R=K[X]}. Let di=degPi{\displaystyle d_{i}=\deg P_{i}} be the degree of Pi(X){\displaystyle P_{i}(X)}, and D{\displaystyle D} be the sum of the di.{\displaystyle d_{i}.} If A1(X),…,Ak(X){\displaystyle A_{1}(X),\ldots ,A_{k}(X)} are polynomials such that Ai(X)=0{\displaystyle A_{i}(X)=0} or degAi<di{\displaystyle \deg A_{i}<d_{i}} for every i, then there is one and only one polynomial P(X){\displaystyle P(X)}, such that degP<D{\displaystyle \deg P<D} and the remainder of the Euclidean division of P(X){\displaystyle P(X)} by Pi(X){\displaystyle P_{i}(X)} is Ai(X){\displaystyle A_{i}(X)} for every i.
The construction of the solution may be done as in§ Existence (constructive proof)or§ Existence (direct proof). However, the latter construction may be simplified by using, as follows,partial fraction decompositioninstead of theextended Euclidean algorithm.
Thus, we want to find a polynomialP(X){\displaystyle P(X)}, which satisfies the congruences
fori=1,…,k.{\displaystyle i=1,\ldots ,k.}
Consider the polynomials
The partial fraction decomposition of1/Q(X){\displaystyle 1/Q(X)}giveskpolynomialsSi(X){\displaystyle S_{i}(X)}with degreesdegSi(X)<di,{\displaystyle \deg S_{i}(X)<d_{i},}such that
and thus
Then a solution of the simultaneous congruence system is given by the polynomial
In fact, we have
for1≤i≤k.{\displaystyle 1\leq i\leq k.}
This solution may have a degree larger thanD=∑i=1kdi.{\displaystyle D=\sum _{i=1}^{k}d_{i}.}The unique solution of degree less thanD{\displaystyle D}may be deduced by considering the remainderBi(X){\displaystyle B_{i}(X)}of the Euclidean division ofAi(X)Si(X){\displaystyle A_{i}(X)S_{i}(X)}byPi(X).{\displaystyle P_{i}(X).}This solution is
A special case of Chinese remainder theorem for polynomials isLagrange interpolation. For this, considerkmonic polynomialsof degree one:
They are pairwise coprime if thexi{\displaystyle x_{i}}are all different. The remainder of the division byPi(X){\displaystyle P_{i}(X)}of a polynomialP(X){\displaystyle P(X)}isP(xi){\displaystyle P(x_{i})}, by thepolynomial remainder theorem.
Now, letA1,…,Ak{\displaystyle A_{1},\ldots ,A_{k}}be constants (polynomials of degree 0) inK.{\displaystyle K.}Both Lagrange interpolation and Chinese remainder theorem assert the existence of a unique polynomialP(X),{\displaystyle P(X),}of degree less thank{\displaystyle k}such that
for everyi.{\displaystyle i.}
Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution. More precisely, let
Thepartial fraction decompositionof1Q(X){\displaystyle {\frac {1}{Q(X)}}}is
In fact, reducing the right-hand side to a common denominator one gets
and the numerator is equal to one, as being a polynomial of degree less thank,{\displaystyle k,}which takes the value one fork{\displaystyle k}different values ofX.{\displaystyle X.}
Using the above general formula, we get the Lagrange interpolation formula:
{\displaystyle P(X)=\sum _{i=1}^{k}A_{i}\prod _{j\neq i}{\frac {X-x_{j}}{x_{i}-x_{j}}}.}
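As a numerical illustration of the Lagrange formula (code not from the article; names are illustrative), the sketch below evaluates the interpolating polynomial through a few points using exact rational arithmetic.

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < k through the given (x_i, A_i) points,
    using P(X) = sum_i A_i * prod_{j != i} (X - x_j) / (x_i - x_j)."""
    total = Fraction(0)
    for i, (x_i, a_i) in enumerate(points):
        term = Fraction(a_i)
        for j, (x_j, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - x_j, x_i - x_j)
        total += term
    return total

# The parabola through (0, 1), (1, 2), (2, 5) is X^2 + 1; check its value at X = 3.
assert lagrange_interpolate([(0, 1), (1, 2), (2, 5)], 3) == 10
```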
Hermite interpolationis an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one).
The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its firstderivativestake given values at some fixed points.
More precisely, let x1,…,xk{\displaystyle x_{1},\ldots ,x_{k}} be k{\displaystyle k} elements of the ground field K,{\displaystyle K,} and, for i=1,…,k,{\displaystyle i=1,\ldots ,k,} let ai,0,ai,1,…,ai,ri−1{\displaystyle a_{i,0},a_{i,1},\ldots ,a_{i,r_{i}-1}} be the values of the first ri{\displaystyle r_{i}} derivatives of the sought polynomial at xi{\displaystyle x_{i}} (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial P(X){\displaystyle P(X)} such that its jth derivative takes the value ai,j{\displaystyle a_{i,j}} at xi,{\displaystyle x_{i},} for i=1,…,k{\displaystyle i=1,\ldots ,k} and j=0,…,ri−1.{\displaystyle j=0,\ldots ,r_{i}-1.}
Consider the polynomial
This is theTaylor polynomialof orderri−1{\displaystyle r_{i}-1}atxi{\displaystyle x_{i}}, of the unknown polynomialP(X).{\displaystyle P(X).}Therefore, we must have
Conversely, any polynomialP(X){\displaystyle P(X)}that satisfies thesek{\displaystyle k}congruences, in particular verifies, for anyi=1,…,k{\displaystyle i=1,\ldots ,k}
thereforePi(X){\displaystyle P_{i}(X)}is its Taylor polynomial of orderri−1{\displaystyle r_{i}-1}atxi{\displaystyle x_{i}}, that is,P(X){\displaystyle P(X)}solves the initial Hermite interpolation problem.
The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of theri,{\displaystyle r_{i},}which satisfies thesek{\displaystyle k}congruences.
There are several ways for computing the solutionP(X).{\displaystyle P(X).}One may use the method described at the beginning of§ Over univariate polynomial rings and Euclidean domains. One may also use the constructions given in§ Existence (constructive proof)or§ Existence (direct proof).
The Chinese remainder theorem can be generalized to non-coprime moduli. Let m,n,a,b{\displaystyle m,n,a,b} be any integers, let g=gcd(m,n){\displaystyle g=\gcd(m,n)}; M=lcm(m,n){\displaystyle M=\operatorname {lcm} (m,n)}, and consider the system of congruences:
{\displaystyle x\equiv a{\pmod {m}},\qquad x\equiv b{\pmod {n}}.}
Ifa≡b(modg){\displaystyle a\equiv b{\pmod {g}}}, then this system has a unique solution moduloM=mn/g{\displaystyle M=mn/g}. Otherwise, it has no solutions.
If one uses Bézout's identity to write g=um+vn{\displaystyle g=um+vn}, then the solution is given by
{\displaystyle x={\frac {avn+bum}{g}}.}
This defines an integer, asgdivides bothmandn. Otherwise, the proof is very similar to that for coprime moduli.[17]
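An illustrative sketch of this generalization (function names are assumptions, not a standard API): it checks the compatibility condition a ≡ b (mod g) and then applies the formula above, returning the solution modulo lcm(m, n).

```python
def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_general(a, m, b, n):
    """Solve x = a (mod m), x = b (mod n) for arbitrary moduli.
    Returns (x, lcm(m, n)), or None when a and b disagree modulo gcd(m, n)."""
    g, u, v = extended_gcd(m, n)          # g = u*m + v*n
    if (a - b) % g != 0:
        return None                        # no solution
    M = m // g * n                         # lcm(m, n)
    x = (a * v * n + b * u * m) // g       # the formula from the text
    return x % M, M

# x = 2 (mod 6) and x = 8 (mod 10): gcd = 2 divides 8 - 2; the solution is 8 modulo 30.
assert crt_general(2, 6, 8, 10) == (8, 30)
```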
The Chinese remainder theorem can be generalized to anyring, by usingcoprime ideals(also calledcomaximal ideals). TwoidealsIandJare coprime if there are elementsi∈I{\displaystyle i\in I}andj∈J{\displaystyle j\in J}such thati+j=1.{\displaystyle i+j=1.}This relation plays the role ofBézout's identityin the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows.[18][19]
Let I1, ..., Ik be two-sided ideals of a ring R{\displaystyle R} and let I be their intersection. If the ideals are pairwise coprime, we have the isomorphism:
{\displaystyle R/I\to (R/I_{1})\times \cdots \times (R/I_{k}),\qquad x{\bmod {I}}\;\mapsto \;(x{\bmod {I_{1}}},\,\ldots ,\,x{\bmod {I_{k}}}),}
between the quotient ring R/I{\displaystyle R/I} and the direct product of the R/Ii,{\displaystyle R/I_{i},} where "x mod I{\displaystyle x{\bmod {I}}}" denotes the image of the element x{\displaystyle x} in the quotient ring defined by the ideal I.{\displaystyle I.} Moreover, if R{\displaystyle R} is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product; that is
{\displaystyle I_{1}\cap I_{2}\cap \cdots \cap I_{k}=I_{1}I_{2}\cdots I_{k},}
ifIiandIjare coprime for alli≠j.
LetI1,I2,…,Ik{\displaystyle I_{1},I_{2},\dots ,I_{k}}be pairwise coprime two-sided ideals with⋂i=1kIi=0,{\displaystyle \bigcap _{i=1}^{k}I_{i}=0,}and
be the isomorphism defined above. Letfi=(0,…,1,…,0){\displaystyle f_{i}=(0,\ldots ,1,\ldots ,0)}be the element of(R/I1)×⋯×(R/Ik){\displaystyle (R/I_{1})\times \cdots \times (R/I_{k})}whose components are all0except theith which is1, andei=φ−1(fi).{\displaystyle e_{i}=\varphi ^{-1}(f_{i}).}
The ei{\displaystyle e_{i}} are central idempotents that are pairwise orthogonal; this means, in particular, that ei2=ei{\displaystyle e_{i}^{2}=e_{i}} and eiej=ejei=0{\displaystyle e_{i}e_{j}=e_{j}e_{i}=0} for every i and j. Moreover, one has e1+⋯+ek=1,{\textstyle e_{1}+\cdots +e_{k}=1,} and Ii=R(1−ei).{\displaystyle I_{i}=R(1-e_{i}).}
In summary, this generalized Chinese remainder theorem is the equivalence between giving pairwise coprime two-sided ideals with a zero intersection, and giving central and pairwise orthogonal idempotents that sum to1.[20]
The Chinese remainder theorem has been used to construct aGödel numbering for sequences, which is involved in the proof ofGödel's incompleteness theorems.
Theprime-factor FFT algorithm(also called Good-Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of afast Fourier transformof sizen1n2{\displaystyle n_{1}n_{2}}to the computation of two fast Fourier transforms of smaller sizesn1{\displaystyle n_{1}}andn2{\displaystyle n_{2}}(providing thatn1{\displaystyle n_{1}}andn2{\displaystyle n_{2}}are coprime).
Mostimplementations of RSA use the Chinese remainder theoremduring signing ofHTTPScertificates and during decryption.
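A minimal sketch of the idea, using deliberately tiny textbook parameters rather than real key sizes: decryption is performed modulo p and modulo q separately and the two partial results are recombined with the Chinese remainder theorem.

```python
# Requires Python 3.8+ for pow(x, -1, m) (modular inverse).
p, q = 61, 53
n = p * q                                  # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent

def decrypt_plain(c):
    return pow(c, d, n)                    # one big exponentiation mod n

def decrypt_crt(c):
    # Work modulo p and q separately, then recombine via the CRT.
    dp, dq = d % (p - 1), d % (q - 1)
    mp, mq = pow(c, dp, p), pow(c, dq, q)
    q_inv = pow(q, -1, p)
    h = (q_inv * (mp - mq)) % p
    return mq + h * q

m = 65
c = pow(m, e, n)
assert decrypt_plain(c) == decrypt_crt(c) == m
```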
The Chinese remainder theorem can also be used insecret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented in a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered.Secret sharing using the Chinese remainder theoremuses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certaincardinality.
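The following sketch illustrates the idea with a small, hypothetical 3-out-of-4 example; the moduli are chosen by hand so that the product of any three of them exceeds the secret while the product of any two does not, which is the property that real schemes such as Mignotte's enforce systematically.

```python
from math import prod

def crt(residues, moduli):
    """Combine x ≡ r_i (mod m_i) for pairwise coprime moduli (standard CRT, Python 3.8+)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Hypothetical 3-out-of-4 scheme: any 3 moduli multiply to more than the secret,
# any 2 multiply to less, so only 3 or more shares determine it uniquely.
moduli = [11, 13, 17, 19]
secret = 1000
shares = [(secret % m, m) for m in moduli]

r, m = zip(*shares[:3])        # any three shares...
assert crt(r, m) == secret     # ...recover the secret exactly
# Two shares only determine the secret modulo at most 17*19 = 323 < 1000.
```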
Therange ambiguity resolutiontechniques used withmedium pulse repetition frequencyradar can be seen as a special case of the Chinese remainder theorem.
Given asurjectionZ/n→Z/m{\displaystyle \mathbb {Z} /n\to \mathbb {Z} /m}offiniteabelian groups, we can use the Chinese remainder theorem to give a complete description of any such map. First of all, the theorem gives isomorphisms
where{pm1,…,pmj}⊆{pn1,…,pni}{\displaystyle \{p_{m_{1}},\ldots ,p_{m_{j}}\}\subseteq \{p_{n_{1}},\ldots ,p_{n_{i}}\}}. In addition, for any induced map
from the original surjection, we haveak≥bl{\displaystyle a_{k}\geq b_{l}}andpnk=pml,{\displaystyle p_{n_{k}}=p_{m_{l}},}since for a pair ofprimesp,q{\displaystyle p,q}, the only non-zero surjections
can be defined ifp=q{\displaystyle p=q}anda≥b{\displaystyle a\geq b}.
These observations are pivotal for constructing the ring ofprofinite integers, which is given as aninverse limitof all such maps.
Dedekind's theorem on the linear independence of characters.LetMbe amonoidandkanintegral domain, viewed as a monoid by considering the multiplication onk. Then any finite family(fi)i∈Iof distinctmonoid homomorphismsfi:M→kislinearly independent. In other words, every family(αi)i∈Iof elementsαi∈ksatisfying
∑i∈Iαifi=0{\displaystyle \sum _{i\in I}\alpha _{i}f_{i}=0}
must be equal to the family(0)i∈I.
Proof.First assume thatkis afield; otherwise, replace the integral domainkby itsquotient field, and nothing will change. We can linearly extend the monoid homomorphismsfi:M→ktok-algebra homomorphismsFi:k[M] →k, wherek[M]is themonoid ringofMoverk. Then, by linearity, the condition
∑i∈Iαifi=0{\displaystyle \sum _{i\in I}\alpha _{i}f_{i}=0}
yields
∑i∈IαiFi=0.{\displaystyle \sum _{i\in I}\alpha _{i}F_{i}=0.}
Next, fori,j∈I;i≠jthe twok-linear mapsFi:k[M] →kandFj:k[M] →kare not proportional to each other. Otherwisefiandfjwould also be proportional, and thus equal since as monoid homomorphisms they satisfy:fi(1) = 1 =fj(1), which contradicts the assumption that they are distinct.
Therefore, thekernelsKerFiandKerFjare distinct. Sincek[M]/KerFi≅Fi(k[M]) =kis a field,KerFiis amaximal idealofk[M]for everyiinI. Because they are distinct and maximal the idealsKerFiandKerFjare coprime wheneveri≠j. The Chinese Remainder Theorem (for general rings) yields an isomorphism:
where
Consequently, the map
is surjective. Under the isomorphismsk[M]/KerFi→Fi(k[M]) =k,the mapΦcorresponds to:
Now,
yields
for every vector(ui)i∈Iin theimageof the mapψ. Sinceψis surjective, this means that
for every vector
Consequently,(αi)i∈I= (0)i∈I. QED.
|
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
|
Inmathematics, anisomorphismis a structure-preservingmappingormorphismbetween twostructuresof the same type that can be reversed by aninverse mapping. Two mathematical structures areisomorphicif an isomorphism exists between them. The word is derived fromAncient Greekἴσος(isos)'equal'andμορφή(morphe)'form, shape'.
The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may often be identified. Inmathematical jargon, one says that two objects are the sameup toan isomorphism. A common example where isomorphic structures cannot be identified is when the structures are substructures of a larger one. For example, all subspaces of dimension one of avector spaceare isomorphic and cannot be identified.
Anautomorphismis an isomorphism from a structure to itself. An isomorphism between two structures is acanonical isomorphism(acanonical mapthat is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of auniversal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for everyprime numberp, allfieldswithpelements are canonically isomorphic, with a unique isomorphism. Theisomorphism theoremsprovide canonical isomorphisms that are not unique.
The termisomorphismis mainly used foralgebraic structuresandcategories. In the case of algebraic structures, mappings are calledhomomorphisms, and a homomorphism is an isomorphismif and only ifit isbijective.
In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example:
Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea.
LetR+{\displaystyle \mathbb {R} ^{+}}be themultiplicative groupofpositive real numbers, and letR{\displaystyle \mathbb {R} }be the additive group of real numbers.
Thelogarithm functionlog:R+→R{\displaystyle \log :\mathbb {R} ^{+}\to \mathbb {R} }satisfieslog(xy)=logx+logy{\displaystyle \log(xy)=\log x+\log y}for allx,y∈R+,{\displaystyle x,y\in \mathbb {R} ^{+},}so it is agroup homomorphism. Theexponential functionexp:R→R+{\displaystyle \exp :\mathbb {R} \to \mathbb {R} ^{+}}satisfiesexp(x+y)=(expx)(expy){\displaystyle \exp(x+y)=(\exp x)(\exp y)}for allx,y∈R,{\displaystyle x,y\in \mathbb {R} ,}so it too is a homomorphism.
The identitieslogexpx=x{\displaystyle \log \exp x=x}andexplogy=y{\displaystyle \exp \log y=y}show thatlog{\displaystyle \log }andexp{\displaystyle \exp }areinversesof each other. Sincelog{\displaystyle \log }is a homomorphism that has an inverse that is also a homomorphism,log{\displaystyle \log }is anisomorphism of groups, i.e.,R+≅R{\displaystyle \mathbb {R} ^{+}\cong \mathbb {R} }via the isomorphismlogx{\displaystyle \log x}.
Thelog{\displaystyle \log }function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using arulerand atable of logarithms, or using aslide rulewith a logarithmic scale.
Consider the group(Z6,+),{\displaystyle (\mathbb {Z} _{6},+),}the integers from 0 to 5 with additionmodulo6. Also consider the group(Z2×Z3,+),{\displaystyle \left(\mathbb {Z} _{2}\times \mathbb {Z} _{3},+\right),}the ordered pairs where thexcoordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in thex-coordinate is modulo 2 and addition in they-coordinate is modulo 3.
These structures are isomorphic under addition, under the following scheme:(0,0)↦0(1,1)↦1(0,2)↦2(1,0)↦3(0,1)↦4(1,2)↦5{\displaystyle {\begin{alignedat}{4}(0,0)&\mapsto 0\\(1,1)&\mapsto 1\\(0,2)&\mapsto 2\\(1,0)&\mapsto 3\\(0,1)&\mapsto 4\\(1,2)&\mapsto 5\\\end{alignedat}}}or in general(a,b)↦(3a+4b)mod6.{\displaystyle (a,b)\mapsto (3a+4b)\mod 6.}
For example,(1,1)+(1,0)=(0,1),{\displaystyle (1,1)+(1,0)=(0,1),}which translates in the other system as1+3=4.{\displaystyle 1+3=4.}
Even though these two groups "look" different in that the sets contain different elements, they are indeedisomorphic: their structures are exactly the same. More generally, thedirect productof twocyclic groupsZm{\displaystyle \mathbb {Z} _{m}}andZn{\displaystyle \mathbb {Z} _{n}}is isomorphic to(Zmn,+){\displaystyle (\mathbb {Z} _{mn},+)}if and only ifmandnarecoprime, per theChinese remainder theorem.
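This isomorphism can be verified mechanically; the short Python check below (an illustrative sketch, not taken from any library) confirms that (a, b) ↦ (3a + 4b) mod 6 is a bijection that preserves addition.

```python
from itertools import product

# phi maps Z_2 x Z_3 to Z_6 via (a, b) -> (3a + 4b) mod 6.
phi = {(a, b): (3 * a + 4 * b) % 6 for a, b in product(range(2), range(3))}

# Bijective: the six pairs hit each residue 0..5 exactly once.
assert sorted(phi.values()) == list(range(6))

# Homomorphism: phi(x + y) == phi(x) + phi(y), with componentwise addition on the left.
for (a1, b1), (a2, b2) in product(phi, repeat=2):
    s = ((a1 + a2) % 2, (b1 + b2) % 3)
    assert phi[s] == (phi[(a1, b1)] + phi[(a2, b2)]) % 6
```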
If one object consists of a setXwith abinary relationR and the other object consists of a setYwith a binary relation S then an isomorphism fromXtoYis a bijective functionf:X→Y{\displaystyle f:X\to Y}such that:[1]S(f(u),f(v))if and only ifR(u,v){\displaystyle \operatorname {S} (f(u),f(v))\quad {\text{ if and only if }}\quad \operatorname {R} (u,v)}
S isreflexive,irreflexive,symmetric,antisymmetric,asymmetric,transitive,total,trichotomous, apartial order,total order,well-order,strict weak order,total preorder(weak order), anequivalence relation, or a relation with any other special properties, if and only if R is.
For example, if R is anordering≤ and S an ordering⊑,{\displaystyle \scriptstyle \sqsubseteq ,}then an isomorphism fromXtoYis a bijective functionf:X→Y{\displaystyle f:X\to Y}such thatf(u)⊑f(v)if and only ifu≤v.{\displaystyle f(u)\sqsubseteq f(v)\quad {\text{ if and only if }}\quad u\leq v.}Such an isomorphism is called anorder isomorphismor (less commonly) anisotone isomorphism.
IfX=Y,{\displaystyle X=Y,}then this is a relation-preservingautomorphism.
Inalgebra, isomorphisms are defined for allalgebraic structures. Some are more specifically studied; for example:
Just as theautomorphismsof analgebraic structureform agroup, the isomorphisms between two algebras sharing a common structure form aheap. Letting a particular isomorphism identify the two structures turns this heap into a group.
Inmathematical analysis, theLaplace transformis an isomorphism mapping harddifferential equationsinto easieralgebraicequations.
Ingraph theory, an isomorphism between two graphsGandHis abijectivemapffrom the vertices ofGto the vertices ofHthat preserves the "edge structure" in the sense that there is an edge fromvertexuto vertexvinGif and only if there is an edge fromf(u){\displaystyle f(u)}tof(v){\displaystyle f(v)}inH. Seegraph isomorphism.
Inorder theory, an isomorphism between two partially ordered setsPandQis abijectivemapf{\displaystyle f}fromPtoQthat preserves the order structure in the sense that for any elementsx{\displaystyle x}andy{\displaystyle y}ofPwe havex{\displaystyle x}less thany{\displaystyle y}inPif and only iff(x){\displaystyle f(x)}is less thanf(y){\displaystyle f(y)}inQ. As an example, the set {1,2,3,6} of whole numbers ordered by theis-a-factor-ofrelation is isomorphic to the set {O,A,B,AB} ofblood typesordered by thecan-donate-torelation. Seeorder isomorphism.
In mathematical analysis, an isomorphism between twoHilbert spacesis a bijection preserving addition, scalar multiplication, and inner product.
In early theories oflogical atomism, the formal relationship between facts and true propositions was theorized byBertrand RussellandLudwig Wittgensteinto be isomorphic. An example of this line of thinking can be found in Russell'sIntroduction to Mathematical Philosophy.
Incybernetics, thegood regulator theoremor Conant–Ashby theorem is stated as "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system.
Incategory theory, given acategoryC, an isomorphism is a morphismf:a→b{\displaystyle f:a\to b}that has an inverse morphismg:b→a,{\displaystyle g:b\to a,}that is,fg=1b{\displaystyle fg=1_{b}}andgf=1a.{\displaystyle gf=1_{a}.}
Two categoriesCandDareisomorphicif there existfunctorsF:C→D{\displaystyle F:C\to D}andG:D→C{\displaystyle G:D\to C}which are mutually inverse to each other, that is,FG=1D{\displaystyle FG=1_{D}}(the identity functor onD) andGF=1C{\displaystyle GF=1_{C}}(the identity functor onC).
In aconcrete category(roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as thecategory of topological spacesor categories of algebraic objects (like thecategory of groups, thecategory of rings, and thecategory of modules), an isomorphism must be bijective on theunderlying sets. In algebraic categories (specifically, categories ofvarieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces).
Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is anequivalence relation. Anequivalence classgiven by isomorphisms is commonly called anisomorphism class.[2]
Examples of isomorphism classes are plentiful in mathematics.
However, there are circumstances in which the isomorphism class of an object conceals vital information about it.
Although there are cases where isomorphic objects can be considered equal, one must distinguishequalityandisomorphism.[3]Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure.
For example, the setsA={x∈Z∣x2<2}andB={−1,0,1}{\displaystyle A=\left\{x\in \mathbb {Z} \mid x^{2}<2\right\}\quad {\text{ and }}\quad B=\{-1,0,1\}}areequal; they are merely different representations—the first anintensionalone (inset builder notation), and the secondextensional(by explicit enumeration)—of the same subset of the integers. By contrast, the sets{A,B,C}{\displaystyle \{A,B,C\}}and{1,2,3}{\displaystyle \{1,2,3\}}are notequalsince they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is
A↦1,B↦2,C↦3,{\displaystyle {\text{A}}\mapsto 1,\quad {\text{B}}\mapsto 2,\quad {\text{C}}\mapsto 3,}
while another is
A↦3,B↦2,C↦1,{\displaystyle {\text{A}}\mapsto 3,\quad {\text{B}}\mapsto 2,\quad {\text{C}}\mapsto 1,}
and no one isomorphism is intrinsically better than any other.[note 1]On this view and in this sense, these two sets are not equal because one cannot consider themidentical: one can choose an isomorphism between them, but that is a weaker claim than identity and valid only in the context of the chosen isomorphism.
Also,integersandeven numbersare isomorphic asordered setsandabelian groups(for addition), but cannot be considered equal sets, since one is aproper subsetof the other.
On the other hand, when sets (or othermathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions ofuniversal properties.
For example, therational numbersare formally defined asequivalence classesof pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form afieldthat contains the integers and does not contain any proper subfield. Given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. Thereal numbersthat can be expressed as a quotient of integers form the smallest subfield of the reals. There is thus a unique isomorphism from this subfield of the reals to the rational numbers defined by equivalence classes.
|
https://en.wikipedia.org/wiki/Isomorphism
|
Incryptography, asaltisrandomdata fed as an additional input to aone-way functionthathashesdata, apasswordorpassphrase.[1]Salting helps defend against attacks that use precomputed tables (e.g.rainbow tables), by vastly growing the size of table needed for a successful attack.[2][3][4]It also helps protect passwords that occur multiple times in a database, as a new salt is used for each password instance.[5]Additionally, salting does not place any burden on users.
Typically, a unique salt is randomly generated for each password. The salt and the password (or its version afterkey stretching) areconcatenatedand fed to acryptographic hash function, and the outputhash valueis then stored with the salt in a database. The salt does not need to be encrypted, because knowing the salt would not help the attacker.[5]
Salting is broadly used in cybersecurity, fromUnixsystem credentials toInternet security.
Salts are related tocryptographic nonces.
Without a salt, identical passwords will map to identical hash values, which could make it easier for a hacker to guess the passwords from their hash value.
Instead, a salt is generated and appended to each password, which causes the resultant hash to output different values for the same original password.
The salt and hash are then stored in the database. To later test if a password a user enters is correct, the same process can be performed on it (appending that user's salt to the password and calculating the resultant hash): if the result does not match the stored hash, it could not have been the correct password that was entered.
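A minimal sketch of this store-and-verify flow, using Python's standard library (the function names are illustrative; a production system would use a deliberately slow key derivation function rather than a single SHA-256 call):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str):
    salt = secrets.token_bytes(16)                       # 128-bit random salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                                  # store both in the database

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)        # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```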
In practice, a salt is usually generated using aCryptographically Secure PseudoRandom Number Generator. CSPRNGs are designed to produce unpredictable values, which may then be encoded as alphanumeric strings. Some systems instead use timestamps or simple counters as a source of salt, although this is generally discouraged because it weakens security. Sometimes, a salt may be generated by combining a random value with additional information, such as a timestamp or user-specific data, to ensure uniqueness across different systems or time periods.
Using the same salt for all passwords is dangerous because a precomputed table which simply accounts for the salt will render the salt useless.
Generation of precomputed tables for databases with unique salts for every password is not viable because of the computational cost of doing so. But, if a common salt is used for all the entries, creating such a table (that accounts for the salt) then becomes a viable and possibly successful attack.[6]
Because salt re-use can cause users with the same password to have the same hash, cracking a single hash can result in other passwords being compromised too.
If a salt is too short, an attacker may precompute a table of every possible salt appended to every likely password. Using a long salt ensures such a table would be prohibitively large.[7][8]16 bytes (128 bits) or more is generally sufficient to provide a large enough space of possible values, minimizing the risk of collisions (i.e., two different passwords ending up with the same salt).
To understand the difference between cracking a single password and a set of them, consider a file with users and their hashed passwords. Say the file is unsalted. Then an attacker could pick a string, call itattempt[0], and then computehash(attempt[0]). A user whose hash stored in the file ishash(attempt[0])may or may not have passwordattempt[0]. However, even ifattempt[0]isnotthe user's actual password, it will be accepted as if it were, because the system can only check passwords by computing the hash of the password entered and comparing it to the hash stored in the file. Thus, each match cracks a user password, and the chance of a match rises with the number of passwords in the file. In contrast, if salts are used, the attacker would have to computehash(attempt[0] || salt[a]), compare against entry A, thenhash(attempt[0] || salt[b]), compare against entry B, and so on. This prevents any one attempt from cracking multiple passwords, given that salt re-use is avoided.[9]
Salts also combat the use of precomputed tables for cracking passwords.[10]Such a table might simply map common passwords to their hashes, or it might do something more complex, like store the start and end points of a set ofprecomputed hash chains. In either case, salting can defend against the use of precomputed tables by lengthening hashes and having them draw from larger character sets, making it less likely that the table covers the resulting hashes. In particular, a precomputed table would need to cover the string[salt + hash]rather than simply[hash].
The modernshadow passwordsystem, in which password hashes and other security data are stored in a non-public file, somewhat mitigates these concerns. However, they remain relevant in multi-server installations which use centralized password management systems to push passwords or password hashes to multiple systems. In such installations, therootaccount on each individual system may be treated as less trusted than the administrators of the centralized password system, so it remains worthwhile to ensure that the security of the password hashing algorithm, including the generation of unique salt values, is adequate.[citation needed]
Another (lesser) benefit of a salt is as follows: two users might choose the same string as their password. Without a salt, this password would be stored as the same hash string in the password file. This would disclose the fact that the two accounts have the same password, allowing anyone who knows one of the account's passwords to access the other account. By salting the passwords with two random characters, even if two accounts use the same password, no one can discover this just by reading hashes. Salting also makes it extremely difficult to determine if a person has used the same password for multiple systems.[11]
Earlier versions ofUnixused apassword file/etc/passwdto store the hashes of salted passwords (passwords prefixed with two-character random salts). In these older versions of Unix, the salt was also stored in the passwd file (as cleartext) together with the hash of the salted password. The password file was publicly readable for all users of the system. This was necessary so that user-privileged software tools could find user names and other information. The security of passwords is therefore protected only by the one-way functions (enciphering or hashing) used for the purpose. Early Unix implementations limited passwords to eight characters and used a 12-bit salt, which allowed for 4,096 possible salt values.[12]This was an appropriate balance for 1970s computational and storage costs.[13]
Theshadow passwordsystem is used to limit access to hashes and salt. The salt is eight characters, the hash is 86 characters, and the password length is effectively unlimited, barring stack overflow errors.
It is common for a web application to store in a database the hash value of a user's password. Without a salt, a successfulSQL injectionattack may yield easily crackable passwords. Because many users re-use passwords for multiple sites, the use of a salt is an important component of overallweb application security.[14]Some additional references for using a salt to secure password hashes in specific languages or libraries (PHP, the .NET libraries, etc.) can be found in theexternal linkssection below.
|
https://en.wikipedia.org/wiki/Salt_(cryptography)
|
Incryptography, akey derivation function(KDF) is a cryptographic algorithm that derives one or moresecret keysfrom a secret value such as a master key, apassword, or apassphraseusing apseudorandom function(which typically uses acryptographic hash functionorblock cipher).[1][2][3]KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of aDiffie–Hellman key exchangeinto a symmetric key for use withAES.Keyed cryptographic hash functionsare popular examples of pseudorandom functions used for key derivation.[4]
The first[citation needed]deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after itsman page), and was invented byRobert Morrisin 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modifiedDESencryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in theUnixpassword file.[5]While it was a great advance at the time, increases in processor speeds since thePDP-11era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strongpassphrasesimpossible.[citation needed]
Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications in which defending against brute-force cracking is a primary concern. The growing use of massively-parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computational cost on CPUs but also resist the cost/performance advantages of modern massively-parallel platforms for such tasks. Various algorithms have been designed specifically for this purpose, includingbcrypt,scryptand, more recently,Lyra2andArgon2(the latter being the winner of thePassword Hashing Competition). The large-scaleAshley Madison data breachin which roughly 36 million password hashes were stolen by attackers illustrated the importance of algorithm selection in securing passwords. Although bcrypt was employed to protect the hashes (making large scale brute-force cracking expensive and time-consuming), a significant portion of the accounts in the compromised data also contained a password hash based on the fast general-purposeMD5algorithm, which made it possible for over 11 million of the passwords to be cracked in a matter of weeks.[6]
In June 2017, The U.S. National Institute of Standards and Technology (NIST) issued a new revision of their digital authentication guidelines, NIST SP 800-63B-3,[7]: 5.1.1.2stating that: "Verifiers SHALL store memorized secrets [i.e. passwords] in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function. Key derivation functions take a password, a salt, and a cost factor as inputs then generate a password hash. Their purpose is to make each password guessing trial by an attacker who has obtained a password hash file expensive and therefore the cost of a guessing attack high or prohibitive."
Modern password-based key derivation functions, such asPBKDF2,[2]are based on a recognized cryptographic hash, such asSHA-2, use more salt (at least 64 bits and chosen randomly) and a high iteration count. NIST recommends a minimum iteration count of 10,000.[7]: 5.1.1.2"For especially critical keys, or for very powerful systems or systems where user-perceived performance is not critical, an iteration count of 10,000,000 may be appropriate.”[8]: 5.2
The original use for a KDF is key derivation, the generation of keys from secret passwords or passphrases. Variations on this theme include:
Key derivation functions are also used in applications to derive keys from secret passwords or passphrases, which typically do not have the desired properties to be used directly as cryptographic keys. In such applications, it is generally recommended that the key derivation function be made deliberately slow so as to frustratebrute-force attackordictionary attackon the password or passphrase input value.
Such use may be expressed asDK = KDF(key, salt, iterations), whereDKis the derived key,KDFis the key derivationfunction,keyis the original key or password,saltis a random number which acts ascryptographic salt, anditerationsrefers to the number ofiterationsof a sub-function. The derived key is used instead of the original key or password as the key to the system. The values of the salt and the number of iterations (if it is not fixed) are stored with the hashed password or sent ascleartext(unencrypted) with an encrypted message.[10]
The difficulty of a brute force attack is increased with the number of iterations. A practical limit on the iteration count is the unwillingness of users to tolerate a perceptible delay in logging into a computer or seeing a decrypted message. The use ofsaltprevents the attackers from precomputing a dictionary of derived keys.[10]
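A minimal example of the DK = KDF(key, salt, iterations) pattern, using PBKDF2-HMAC-SHA256 from Python's standard library; the iteration count shown is only illustrative and should be tuned to the deployment.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)                # random 128-bit salt, stored with the hash
iterations = 600_000                 # illustrative cost factor

# Derived key DK = PBKDF2(password, salt, iterations); stored as the password hash.
dk = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(dk.hex())
```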
An alternative approach, calledkey strengthening, extends the key with a random salt, but then (unlike in key stretching) securely deletes the salt.[11]This forces both the attacker and legitimate users to perform a brute-force search for the salt value.[12]Although the paper that introduced key stretching[13]referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.
Despite their original use for key derivation, KDFs are possibly better known for their use inpassword hashing(password verification by hash comparison), as used by thepasswdfile orshadow passwordfile. Password hash functions should be relatively expensive to calculate in case of brute-force attacks, and thekey stretchingof KDFs happen to provide this characteristic.[citation needed]The non-secret parameters are called "salt" in this context.
In 2013 aPassword Hashing Competitionwas announced to choose a new, standard algorithm for password hashing. On 20 July 2015 the competition ended andArgon2was announced as the final winner. Four other algorithms received special recognition: Catena,Lyra2, Makwa andyescrypt.[14]
As of May 2023, theOpen Worldwide Application Security Project(OWASP) recommends the following KDFs for password hashing, listed in order of priority:[15]
|
https://en.wikipedia.org/wiki/Password_hashing#Salting
|
Inprobability theory, thebirthday problemasks for the probability that, in a set ofnrandomlychosen people, at least two will share the samebirthday. Thebirthday paradoxis the counterintuitive fact that only 23 people are needed for that probability to exceed 50%.
The birthday paradox is averidical paradox: it seems wrong at first glance but is, in fact, true. While it may seem surprising that only 23 individuals are required to reach a 50% probability of a shared birthday, this result is made more intuitive by considering that the birthday comparisons will be made between every possible pair of individuals. With 23 individuals, there are23 × 22/2= 253 pairs to consider.
Real-world applications for the birthday problem include a cryptographic attack called thebirthday attack, which uses this probabilistic model to reduce the complexity of finding acollisionfor ahash function, as well as calculating the approximate risk of a hash collision existing within the hashes of a given size of population.
The problem is generally attributed toHarold Davenportin about 1927, though he did not publish it at the time. Davenport did not claim to be its discoverer "because he could not believe that it had not been stated earlier".[1][2]The first publication of a version of the birthday problem was byRichard von Misesin 1939.[3]
From apermutationsperspective, let the eventAbe the probability of finding a group of 23 people without any repeated birthdays. And let the eventBbe the probability of finding a group of 23 people with at least two people sharing same birthday,P(B) = 1 −P(A). This is such thatP(A)is the ratio of the total number of birthdays,Vnr{\displaystyle V_{nr}}, without repetitions and order matters (e.g. for a group of 2 people, mm/dd birthday format, one possible outcome is{{01/02,05/20},{05/20,01/02},{10/02,08/04},...}{\displaystyle \left\{\left\{01/02,05/20\right\},\left\{05/20,01/02\right\},\left\{10/02,08/04\right\},...\right\}}) divided by the total number of birthdays with repetition and order matters,Vt{\displaystyle V_{t}}, as it is the total space of outcomes from the experiment (e.g. 2 people, one possible outcome is{{01/02,01/02},{10/02,08/04},...}{\displaystyle \left\{\left\{01/02,01/02\right\},\left\{10/02,08/04\right\},...\right\}}). ThereforeVnr{\displaystyle V_{nr}}andVt{\displaystyle V_{t}}arepermutations.
Another way the birthday problem can be solved is by asking for an approximate probability that in a group ofnpeople at least two have the same birthday. For simplicity,leap years,twins,selection bias, and seasonal and weekly variations in birth rates[4]are generally disregarded, and instead it is assumed that there are 365 possible birthdays, and that each person's birthday is equally likely to be any of these days, independent of the other people in the group.
For independent birthdays, auniform distributionof birthdays minimizes the probability of two people in a group having the same birthday. Any unevenness increases the likelihood of two people sharing a birthday.[5][6]However real-world birthdays are not sufficiently uneven to make much change: the real-world group size necessary to have a greater than 50% chance of a shared birthday is 23, as in the theoretical uniform distribution.[7]
The goal is to computeP(B), the probability that at least two people in the room have the same birthday. However, it is simpler to calculateP(A′), the probability that no two people in the room have the same birthday. Then, becauseBandA′are the only two possibilities and are alsomutually exclusive,P(B) = 1 −P(A′).
Here is the calculation ofP(B)for 23 people. Let the 23 people be numbered 1 to 23. Theeventthat all 23 people have different birthdays is the same as the event that person 2 does not have the same birthday as person 1, and that person 3 does not have the same birthday as either person 1 or person 2, and so on, and finally that person 23 does not have the same birthday as any of persons 1 through 22. Let these events be called Event 2, Event 3, and so on. Event 1 is the event of person 1 having a birthday, which occurs with probability 1. This conjunction of events may be computed usingconditional probability: the probability of Event 2 is364/365, as person 2 may have any birthday other than the birthday of person 1. Similarly, the probability of Event 3 given that Event 2 occurred is363/365, as person 3 may have any of the birthdays not already taken by persons 1 and 2. This continues until finally the probability of Event 23 given that all preceding events occurred is343/365. Finally, the principle of conditional probability implies thatP(A′)is equal to the product of these individual probabilities:
P(A′)=365365×364365×363365×⋯×343365(1){\displaystyle P(A')={\frac {365}{365}}\times {\frac {364}{365}}\times {\frac {363}{365}}\times \cdots \times {\frac {343}{365}}\qquad (1)}
The terms of equation (1) can be collected to arrive at:
P(A′)=(1365)23×(365×364×363×⋯×343)(2){\displaystyle P(A')=\left({\frac {1}{365}}\right)^{23}\times (365\times 364\times 363\times \cdots \times 343)\qquad (2)}
Evaluating equation (2) givesP(A′) ≈ 0.492703
Therefore,P(B) ≈ 1 − 0.492703 = 0.507297(50.7297%).
This process can be generalized to a group ofnpeople, wherep(n)is the probability of at least two of thenpeople sharing a birthday. It is easier to first calculate the probabilityp̄(n)that allnbirthdays aredifferent. According to thepigeonhole principle,p̄(n)is zero whenn> 365. Whenn≤ 365:
p¯(n)=1×(1−1365)×(1−2365)×⋯×(1−n−1365)=365!365n(365−n)!=n!⋅(365n)365n=365Pn365n{\displaystyle {\bar {p}}(n)=1\times \left(1-{\frac {1}{365}}\right)\times \left(1-{\frac {2}{365}}\right)\times \cdots \times \left(1-{\frac {n-1}{365}}\right)={\frac {365!}{365^{n}(365-n)!}}={\frac {n!\cdot {\binom {365}{n}}}{365^{n}}}={\frac {_{365}P_{n}}{365^{n}}}}
where!is thefactorialoperator,(365n){\displaystyle {\tbinom {365}{n}}}is thebinomial coefficientandkPr{\displaystyle _{k}P_{r}}denotespermutation.
The equation expresses the fact that the first person has no one to share a birthday, the second person cannot have the same birthday as the first(364/365), the third cannot have the same birthday as either of the first two(363/365), and in general thenth birthday cannot be the same as any of then− 1preceding birthdays.
Theeventof at least two of thenpersons having the same birthday iscomplementaryto allnbirthdays being different. Therefore, its probabilityp(n)is
p(n)=1−p¯(n).{\displaystyle p(n)=1-{\bar {p}}(n).}
The following table shows the probability for some other values ofn(for this table, the existence of leap years is ignored, and each birthday is assumed to be equally likely):
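These values can be computed directly; the following short sketch evaluates p(n) exactly using rational arithmetic (an illustrative computation, assuming 365 equally likely birthdays).

```python
from fractions import Fraction

def p_shared(n: int, d: int = 365) -> float:
    """Probability of at least one shared birthday among n people, d possible days."""
    no_match = Fraction(1)
    for k in range(n):
        no_match *= Fraction(d - k, d)   # product (1 - k/d) for k = 0 .. n-1
    return float(1 - no_match)

for n in (10, 23, 50, 70):
    print(n, round(p_shared(n), 6))      # 23 people already exceed 0.5
```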
TheTaylor seriesexpansion of theexponential function(the constante≈2.718281828)
ex=1+x+x22!+x33!+⋯{\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots }
provides a first-order approximation forexfor|x|≪1{\displaystyle |x|\ll 1}:
ex≈1+x.{\displaystyle e^{x}\approx 1+x.}
To apply this approximation to the first expression derived forp̄(n), setx= −a/365. Thus,
e−a/365≈1−a365.{\displaystyle e^{-a/365}\approx 1-{\frac {a}{365}}.}
Then, replaceawith non-negative integers for each term in the formula ofp̄(n)untila=n− 1, for example, whena= 1,
e−1/365≈1−1365.{\displaystyle e^{-1/365}\approx 1-{\frac {1}{365}}.}
The first expression derived forp̄(n)can be approximated as
p¯(n)≈1⋅e−1/365⋅e−2/365⋯e−(n−1)/365=e−(1+2+⋯+(n−1))/365=e−n(n−1)/(2×365).{\displaystyle {\bar {p}}(n)\approx 1\cdot e^{-1/365}\cdot e^{-2/365}\cdots e^{-(n-1)/365}=e^{-(1+2+\cdots +(n-1))/365}=e^{-n(n-1)/(2\times 365)}.}
Therefore,
p(n)=1−p¯(n)≈1−e−n(n−1)/(2×365).{\displaystyle p(n)=1-{\bar {p}}(n)\approx 1-e^{-n(n-1)/(2\times 365)}.}
An even coarser approximation is given by
p(n)≈1−e−n2/(2×365),{\displaystyle p(n)\approx 1-e^{-n^{2}/(2\times 365)},}
which is still fairly accurate.
According to the approximation, the same approach can be applied to any number of "people" and "days". If rather than 365 days there ared, if there arenpersons, and ifn≪d, then using the same approach as above we achieve the result that ifp(n,d)is the probability that at least two out ofnpeople share the same birthday from a set ofdavailable days, then:
p(n,d)≈1−e−n(n−1)/(2d).{\displaystyle p(n,d)\approx 1-e^{-n(n-1)/(2d)}.}
The probability of any two people not having the same birthday is364/365. In a room containingnpeople, there are(n2)=n(n− 1)/2pairs of people, i.e.(n2)events. The probability of no two people sharing the same birthday can be approximated by assuming that these events are independent and hence by multiplying their probability together. Being independent would be equivalent to pickingwith replacement, any pair of people in the world, not just in a room. In short364/365can be multiplied by itself(n2)times, which gives us
p¯(n)≈(364365)(n2).{\displaystyle {\bar {p}}(n)\approx \left({\frac {364}{365}}\right)^{\binom {n}{2}}.}
Since this is the probability of no one having the same birthday, then the probability of someone sharing a birthday is
p(n)≈1−(364365)(n2).{\displaystyle p(n)\approx 1-\left({\frac {364}{365}}\right)^{\binom {n}{2}}.}
And for the group of 23 people, the probability of sharing is
p(23)≈1−(364365)253≈0.5005.{\displaystyle p(23)\approx 1-\left({\frac {364}{365}}\right)^{253}\approx 0.5005.}
Applying thePoissonapproximation for the binomial on the group of 23 people, the numberXof matching pairs is approximately Poisson distributed with mean
λ=(232)1365=253365≈0.6932,{\displaystyle \lambda ={\binom {23}{2}}{\frac {1}{365}}={\frac {253}{365}}\approx 0.6932,}
so
Pr(X>0)=1−e−253/365≈1−e−0.6932≈0.500.{\displaystyle \Pr(X>0)=1-e^{-253/365}\approx 1-e^{-0.6932}\approx 0.500.}
The result is over 50%, as in the previous calculations. This approximation is the same as the one above based on the Taylor expansion that usesex≈ 1 +x.
A goodrule of thumbwhich can be used formental calculationis the relation
p(n)≈n22d{\displaystyle p(n)\approx {\frac {n^{2}}{2d}}}
which can also be written as
n≈2d×p(n){\displaystyle n\approx {\sqrt {2d\times p(n)}}}
which works well for probabilities less than or equal to1/2. In these equations,dis the number of days in a year.
For instance, to estimate the number of people required for a1/2chance of a shared birthday, we get
n≈2×365×12=365≈19.1,{\displaystyle n\approx {\sqrt {2\times 365\times {\tfrac {1}{2}}}}={\sqrt {365}}\approx 19.1,}
which is not too far from the correct answer of 23.
This can also be approximated using the following formula for thenumberof people necessary to have at least a1/2chance of matching:
n≈12+14+2×365×ln⁡2≈23.0.{\displaystyle n\approx {\tfrac {1}{2}}+{\sqrt {{\tfrac {1}{4}}+2\times 365\times \ln 2}}\approx 23.0.}
This is a result of the good approximation that an event with1/kprobability will have a1/2chance of occurring at least once if it is repeatedkln 2times.[8]
The lighter fields in this table show the number of hashes needed to achieve the given probability of collision (column) given a hash space of a certain size in bits (row). Using the birthday analogy: the "hash space size" resembles the "available days", the "probability of collision" resembles the "probability of shared birthday", and the "required number of hashed elements" resembles the "required number of people in a group". One could also use this chart to determine the minimum hash size required (given upper bounds on the hashes and probability of error), or the probability of collision (for fixed number of hashes and probability of error).
For comparison, 10^−18 to 10^−15 is the uncorrectablebit error rateof a typical hard disk.[9]In theory, 128-bit hash functions, such asMD5, should stay within that range until about 8.2 × 10^11 documents, even if their possible outputs are many more.
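The figures in such a table follow from the birthday bound; a small sketch of the estimate n ≈ √(2 · 2^N · ln(1/(1 − p))) for N-bit hashes (an approximation, valid when p is not too close to 1):

```python
from math import log, sqrt

def hashes_for_collision(bits: int, p: float) -> float:
    """Approximate number of random hashes needed before a collision with probability p."""
    return sqrt(2 * (2 ** bits) * log(1 / (1 - p)))

print(f"{hashes_for_collision(128, 1e-15):.3e}")   # ~8e11 hashes for p = 1e-15
print(f"{hashes_for_collision(128, 0.5):.3e}")     # ~2.2e19, about 2**64
```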
The argument below is adapted from an argument ofPaul Halmos.[nb 1]
As stated above, the probability that no two birthdays coincide is
p¯(n)=1⋅(1−1365)(1−2365)⋯(1−n−1365)=∏k=1n−1(1−k365).{\displaystyle {\bar {p}}(n)=1\cdot \left(1-{\frac {1}{365}}\right)\left(1-{\frac {2}{365}}\right)\cdots \left(1-{\frac {n-1}{365}}\right)=\prod _{k=1}^{n-1}\left(1-{\frac {k}{365}}\right).}
As in earlier paragraphs, interest lies in the smallestnsuch thatp(n) >1/2; or equivalently, the smallestnsuch thatp̄(n) <1/2.
Using the inequality1 −x<e−xin the above expression we replace1 −k/365withe−k⁄365. This yields
p¯(n)=∏k=1n−1(1−k365)<∏k=1n−1e−k/365=e−n(n−1)/730.{\displaystyle {\bar {p}}(n)=\prod _{k=1}^{n-1}\left(1-{\frac {k}{365}}\right)<\prod _{k=1}^{n-1}e^{-k/365}=e^{-n(n-1)/730}.}
Therefore, the expression above is not only an approximation, but also anupper boundofp̄(n). The inequality
e−n(n−1)/730<12{\displaystyle e^{-n(n-1)/730}<{\frac {1}{2}}}
impliesp̄(n) <1/2. Solving forngives
n2−n>730ln⁡2.{\displaystyle n^{2}-n>730\ln 2.}
Now,730 ln 2is approximately 505.997, which is barely below 506, the value ofn2−n{\displaystyle n^{2}-n}attained whenn= 23. Therefore, 23 people suffice. Incidentally, solvingn2−n= 730 ln 2forngives the approximate formula of Frank H. Mathis cited above.
This derivation only shows thatat most23 people are needed to ensure the chances of a birthday match are at least even; it leaves open the possibility that 22 or fewer people could also suffice.
Given a year withddays, thegeneralized birthday problemasks for the minimal numbern(d)such that, in a set ofnrandomly chosen people, the probability of a birthday coincidence is at least 50%. In other words,n(d)is the minimal integernsuch that
The classical birthday problem thus corresponds to determiningn(365). The first 99 values ofn(d)are given here (sequenceA033810in theOEIS):
A similar calculation shows thatn(d)= 23 whendis in the range 341–372.
A number of bounds and formulas forn(d)have been published.[10]For anyd≥ 1, the numbern(d)satisfies[11]
These bounds are optimal in the sense that the sequencen(d) −√2dln 2gets arbitrarily close to
while it has
as its maximum, taken ford= 43.
The bounds are sufficiently tight to give the exact value ofn(d)in most of the cases. For example, ford=365 these bounds imply that22.7633 <n(365) < 23.7736and 23 is the onlyintegerin that range. In general, it follows from these bounds thatn(d)always equals either
where⌈ · ⌉denotes theceiling function.
The formula
holds for 73% of all integersd.[12]The formula
holds foralmost alld, i.e., for a set of integersdwithasymptotic density1.[12]
The formula
holds for alld≤ 10^18, but it is conjectured that there are infinitely many counterexamples to this formula.[13]
The formula
holds for alld≤ 10^18, and it is conjectured that this formula holds for alld.[13]
It is possible to extend the problem to ask how many people in a group are necessary for there to be a greater than 50% probability that at least 3, 4, 5, etc. of the group share the same birthday.
The first few values are as follows: >50% probability of 3 people sharing a birthday - 88 people; >50% probability of 4 people sharing a birthday - 187 people (sequenceA014088in theOEIS).[14]
The strong birthday problem asks for the number of people that need to be gathered together before there is a 50% chance thateveryonein the gathering shares their birthday with at least one other person. For d=365 days the answer is 3,064 people.[15][16]
The number of people needed for arbitrary number of days is given by (sequenceA380129in theOEIS)
The birthday problem can be generalized as follows:
The generic results can be derived using the same arguments given above.
Conversely, ifn(p;d)denotes the number of random integers drawn from[1,d]to obtain a probabilitypthat at least two numbers are the same, then
n(p;d)≈2dln⁡(11−p).{\displaystyle n(p;d)\approx {\sqrt {2d\ln \left({\frac {1}{1-p}}\right)}}.}
The birthday problem in this more generic sense applies tohash functions: the expected number ofN-bithashes that can be generated before getting a collision is not2N, but rather only2N⁄2. This is exploited bybirthday attacksoncryptographic hash functionsand is the reason why a small number of collisions in ahash tableare, for all practical purposes, inevitable.
The theory behind the birthday problem was used by Zoe Schnabel[18]under the name ofcapture-recapturestatistics to estimate the size of fish population in lakes. The birthday problem and its generalizations are also useful tools for modelling coincidences.[19]
The classic birthday problem allows for more than two people to share a particular birthday or for there to be matches on multiple days. The probability that amongnpeople there is exactly one pair of individuals with a matching birthday givendpossible days is[19]
Unlike the standard birthday problem, asnincreases the probability reaches a maximum value before decreasing. For example, ford= 365, the probability of a unique match has a maximum value of 0.3864 occurring whenn= 28.
The basic problem considers all trials to be of one "type". The birthday problem has been generalized to consider an arbitrary number of types.[20]In the simplest extension there are two types of people, saymmen andnwomen, and the problem becomes characterizing the probability of a shared birthday between at least one man and one woman. (Shared birthdays between two men or two women do not count.) The probability of no shared birthdays here is
whered= 365andS2areStirling numbers of the second kind. Consequently, the desired probability is1 −p0.
This variation of the birthday problem is interesting because there is not a unique solution for the total number of peoplem+n. For example, the usual 50% probability value is realized for both a 32-member group of 16 men and 16 women and a 49-member group of 43 women and 6 men.
A related question is, as people enter a room one at a time, which one is most likely to be the first to have the same birthday as someone already in the room? That is, for whatnisp(n) −p(n− 1)maximum? The answer is 20—if there is a prize for first match, the best position in line is 20th.[citation needed]
In the birthday problem, neither of the two people is chosen in advance. By contrast, the probabilityq(n)thatat least one other personin a room ofnother people has the same birthday as aparticularperson (for example, you) is given by
q(n)=1−(364365)n{\displaystyle q(n)=1-\left({\frac {364}{365}}\right)^{n}}
and for generaldby
q(n;d)=1−(d−1d)n.{\displaystyle q(n;d)=1-\left({\frac {d-1}{d}}\right)^{n}.}
In the standard case ofd= 365, substitutingn= 23gives about 6.1%, which is less than 1 chance in 16. For a greater than 50% chance thatat leastone other person in a roomful ofnpeople has the same birthday asyou,nwould need to be at least 253. This number is significantly higher than365/2= 182.5: the reason is that it is likely that there are some birthday matches among the other people in the room.
For any one person in a group ofnpeople the probability that he or she shares his or her birthday with someone else isq(n−1;d){\displaystyle q(n-1;d)}, as explained above. The expected number of people with a shared (non-unique) birthday can now be calculated easily by multiplying that probability by the number of people (n), so it is:
n(1−(d−1d)n−1){\displaystyle n\left(1-\left({\frac {d-1}{d}}\right)^{n-1}\right)}
(This multiplication can be done this way because of the linearity of theexpected valueof indicator variables). This implies that the expected number of people with a non-shared (unique) birthday is:
n(d−1d)n−1{\displaystyle n\left({\frac {d-1}{d}}\right)^{n-1}}
Similar formulas can be derived for the expected number of people who share with three, four, etc. other people.
The expected number of people needed until every birthday is achieved is called theCoupon collector's problem. It can be calculated bynHn, whereHnis thenthharmonic number. For 365 possible dates (the birthday problem), the answer is 2365.
Another generalization is to ask for the probability of finding at least one pair in a group ofnpeople with birthdays withinkcalendar days of each other, if there aredequally likely birthdays.[21]
The number of people required so that the probability that some pair will have a birthday separated bykdays or fewer will be higher than 50% is given in the following table:
Thus in a group of just seven random people, it is more likely than not that two of them will have a birthday within a week of each other.[21]
The expected number of different birthdays, i.e. the number of days that are at least one person's birthday, is:
d−d(d−1d)n{\displaystyle d-d\left({\frac {d-1}{d}}\right)^{n}}
This follows from the expected number of days that are no one's birthday:
d(d−1d)n{\displaystyle d\left({\frac {d-1}{d}}\right)^{n}}
which follows from the probability that a particular day is no one's birthday,((d−1)/d)n{\displaystyle ((d-1)/d)^{n}}, easily summed because of the linearity of the expected value.
For instance, withd= 365, you should expect about 21 different birthdays when there are 22 people, or 46 different birthdays when there are 50 people. When there are 1000 people, there will be around 341 different birthdays (24 unclaimed birthdays).
The above can be generalized from the distribution of the number of people with their birthday on any particular day, which is aBinomial distributionwith probability1/d. Multiplying the relevant probability bydwill then give the expected number of days. For example, the expected number of days which are shared; i.e. which are at least two (i.e. not zero and not one) people's birthday is:
d−d(d−1d)n−d⋅(n1)(1d)1(d−1d)n−1=d−d(d−1d)n−n(d−1d)n−1{\displaystyle d-d\left({\frac {d-1}{d}}\right)^{n}-d\cdot {\binom {n}{1}}\left({\frac {1}{d}}\right)^{1}\left({\frac {d-1}{d}}\right)^{n-1}=d-d\left({\frac {d-1}{d}}\right)^{n}-n\left({\frac {d-1}{d}}\right)^{n-1}}
The probability that thekth integer randomly chosen from[1,d]will repeat at least one previous choice equalsq(k− 1;d)above. The expected total number of times a selection will repeat a previous selection asnsuch integers are chosen equals[22]
∑k=1nq(k−1;d)=n−d+d(d−1d)n.{\displaystyle \sum _{k=1}^{n}q(k-1;d)=n-d+d\left({\frac {d-1}{d}}\right)^{n}.}
This can be seen to equal the number of people minus the expected number of different birthdays.
In an alternative formulation of the birthday problem, one asks theaveragenumber of people required to find a pair with the same birthday. If we consider the probability function Pr[npeople have at least one shared birthday], thisaverageis determining themeanof the distribution, as opposed to the customary formulation, which asks for themedian. The problem is relevant to severalhashing algorithmsanalyzed byDonald Knuthin his bookThe Art of Computer Programming. It may be shown[23][24]that if one samples uniformly, with replacement, from a population of sizeM, the number of trials required for the first repeated sampling ofsomeindividual hasexpected valuen= 1 +Q(M), where
Q(M)=∑k=1MM!(M−k)!Mk.{\displaystyle Q(M)=\sum _{k=1}^{M}{\frac {M!}{(M-k)!\,M^{k}}}.}
The function
has been studied bySrinivasa Ramanujanand hasasymptotic expansion:
WithM= 365days in a year, the average number of people required to find a pair with the same birthday isn= 1 +Q(M) ≈ 24.61659, somewhat more than 23, the number required for a 50% chance. In the best case, two people will suffice; at worst, the maximum possible number ofM+ 1 = 366people is needed; but on average, only 25 people are required.
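The expectation n = 1 + Q(M) can be evaluated exactly with a short computation; a minimal sketch using rational arithmetic, assuming M = 365 equally likely birthdays:

```python
from fractions import Fraction

def expected_first_repeat(M: int = 365) -> float:
    """Expected number of people sampled until the first repeated birthday: 1 + Q(M)."""
    q = Fraction(0)
    term = Fraction(1)
    for k in range(1, M + 1):
        term *= Fraction(M - k + 1, M)   # term = M! / ((M-k)! * M**k), built incrementally
        q += term
    return float(1 + q)

print(expected_first_repeat())           # ≈ 24.61659
```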
An analysis using indicator random variables can provide a simpler but approximate analysis of this problem.[25]For each pair (i,j) for k people in a room, we define the indicator random variableXij, for1≤i≤j≤k{\displaystyle 1\leq i\leq j\leq k}, by
Xij=I{personiand personjhave the same birthday}={1,if personiand personjhave the same birthday;0,otherwise.{\displaystyle {\begin{alignedat}{2}X_{ij}&=I\{{\text{person }}i{\text{ and person }}j{\text{ have the same birthday}}\}\\[10pt]&={\begin{cases}1,&{\text{if person }}i{\text{ and person }}j{\text{ have the same birthday;}}\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}}
E[Xij]=Pr{personiand personjhave the same birthday}=1n.{\displaystyle {\begin{alignedat}{2}E[X_{ij}]&=\Pr\{{\text{person }}i{\text{ and person }}j{\text{ have the same birthday}}\}={\frac {1}{n}}.\end{alignedat}}}
LetXbe a random variable counting the pairs of individuals with the same birthday.
X=∑i=1k∑j=i+1kXij{\displaystyle X=\sum _{i=1}^{k}\sum _{j=i+1}^{k}X_{ij}}
E[X]=∑i=1k∑j=i+1kE[Xij]=(k2)1n=k(k−1)2n{\displaystyle {\begin{alignedat}{3}E[X]&=\sum _{i=1}^{k}\sum _{j=i+1}^{k}E[X_{ij}]\\[8pt]&={\binom {k}{2}}{\frac {1}{n}}\\[8pt]&={\frac {k(k-1)}{2n}}\end{alignedat}}}
Forn= 365, ifk= 28, the expected number of pairs of individuals with the same birthday is28 × 27/2 × 365≈ 1.0356. Therefore, we can expect at least one matching pair with at least 28 people.
In the2014 FIFA World Cup, each of the 32 squads had 23 players. An analysis of the official squad lists suggested that 16 squads had pairs of players sharing birthdays, and of these 5 squads had two pairs: Argentina, France, Iran, South Korea and Switzerland each had two pairs, and Australia, Bosnia and Herzegovina, Brazil, Cameroon, Colombia, Honduras, Netherlands, Nigeria, Russia, Spain and USA each with one pair.[26]
Voracek, Tran andFormannshowed that the majority of people markedly overestimate the number of people that is necessary to achieve a given probability of people having the same birthday, and markedly underestimate the probability of people having the same birthday when a specific sample size is given.[27]Further results showed that psychology students and women did better on the task than casino visitors/personnel or men, but were less confident about their estimates.
The reverse problem is to find, for a fixed probabilityp,
the greatestnfor which the probabilityp(n)is smaller than the givenp, or the smallestnfor which the probabilityp(n)is greater than the givenp.[citation needed]
Taking the above formula ford= 365, one has
The following table gives some sample calculations.
Some values falling outside the bounds have beencoloredto show that the approximation is not always exact.
A related problem is thepartition problem, a variant of theknapsack problemfromoperations research. Some weights are put on abalance scale; each weight is an integer number of grams randomly chosen between one gram and one million grams (onetonne). The question is whether one can usually (that is, with probability close to 1) transfer the weights between the left and right arms to balance the scale. (In case the sum of all the weights is an odd number of grams, a discrepancy of one gram is allowed.) If there are only two or three weights, the answer is very clearly no; although there are some combinations which work, the majority of randomly selected combinations of three weights do not. If there are very many weights, the answer is clearly yes. The question is, how many are just sufficient? That is, what is the number of weights such that it is equally likely for it to be possible to balance them as it is to be impossible?
Often, people's intuition is that the answer is above100000. Most people's intuition is that it is in the thousands or tens of thousands, while others feel it should at least be in the hundreds. The correct answer is 23.[citation needed]
The reason is that the correct comparison is to the number of partitions of the weights into left and right. There are 2^(N−1) different partitions forNweights, and the left sum minus the right sum can be thought of as a new random quantity for each partition. The distribution of the sum of weights is approximatelyGaussian, with a peak at500000Nand width1000000√N, so that when 2^(N−1) is approximately equal to1000000√Nthe transition occurs. 2^(23−1) = 2^22 is about 4 million, while the width of the distribution is only 5 million.[28]
Arthur C. Clarke's 1961 novelA Fall of Moondustcontains a section where the main characters, trapped underground for an indefinite amount of time, are celebrating a birthday and find themselves discussing the validity of the birthday problem. As stated by a physicist passenger: "If you have a group of more than twenty-four people, the odds are better than even that two of them have the same birthday." Eventually, out of 22 present, it is revealed that two characters share the same birthday, May 23.
The reasoning is based on important tools that all students of mathematics should have ready access to. The birthday problem used to be a splendid illustration of the advantages of pure thought over mechanical manipulation; the inequalities can be obtained in a minute or two, whereas the multiplications would take much longer, and be much more subject to error, whether the instrument is a pencil or an old-fashioned desk computer. Whatcalculatorsdo not yield is understanding, or mathematical facility, or a solid basis for more advanced, generalized theories.
|
https://en.wikipedia.org/wiki/Birthday_paradox
|
Ahash functionis anyfunctionthat can be used to mapdataof arbitrary size to fixed-size values, though there are some hash functions that support variable-length output.[1]The values returned by a hash function are calledhash values,hash codes, (hash/message)digests,[2]or simplyhashes. The values are usually used to index a fixed-size table called ahash table. Use of a hash function to index a hash table is calledhashingorscatter-storage addressing.
Hash functions and their associated hash tables are used in data storage and retrieval applications to access data in a small and nearly constant time per retrieval. They require an amount of storage space only fractionally greater than the total space required for the data or records themselves. Hashing is a computationally- and storage-space-efficient form of data access that avoids the non-constant access time of ordered and unordered lists and structured trees, and the often-exponential storage requirements of direct access of state spaces of large or variable-length keys.
Use of hash functions relies on statistical properties of key and function interaction: worst-case behavior is intolerably bad but rare, and average-case behavior can be nearly optimal (minimalcollision).[3]: 527
Hash functions are related to (and often confused with)checksums,check digits,fingerprints,lossy compression,randomization functions,error-correcting codes, andciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently. The hash function differs from these concepts mainly in terms ofdata integrity. Hash tables may usenon-cryptographic hash functions, whilecryptographic hash functionsare used in cybersecurity to secure sensitive data such as passwords.
In a hash table, a hash function takes a key as an input, which is associated with a datum or record and used to identify it to the data storage and retrieval application. The keys may be fixed-length, like an integer, or variable-length, like a name. In some cases, the key is the datum itself. The output is a hash code used to index a hash table holding the data or records, or pointers to them.
A hash function may be considered to perform three functions:
A good hash function satisfies two basic properties: it should be very fast to compute, and it should minimize duplication of output values (collisions). Hash functions rely on generating favorableprobability distributionsfor their effectiveness, reducing access time to nearly constant. High table loading factors,pathologicalkey sets, and poorly designed hash functions can result in access times approaching linear in the number of items in the table. Hash functions can be designed to give the best worst-case performance,[Notes 1]good performance under high table loading factors, and in special cases, perfect (collisionless) mapping of keys into hash codes. Implementation is based on parity-preserving bit operations (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a collision-resolution method that employs an auxiliary data structure likelinked lists, or systematic probing of the table to find an empty slot.
Hash functions are used in conjunction withhash tablesto store and retrieve data items or data records. The hash function translates the key associated with each datum or record into a hash code, which is used to index the hash table. When an item is to be added to the table, the hash code may index an empty slot (also called a bucket), in which case the item is added to the table there. If the hash code indexes a full slot, then some kind of collision resolution is required: the new item may be omitted (not added to the table), or replace the old item, or be added to the table in some other location by a specified procedure. That procedure depends on the structure of the hash table. Inchained hashing, each slot is the head of a linked list or chain, and items that collide at the slot are added to the chain. Chains may be kept in random order and searched linearly, or in serial order, or as a self-ordering list by frequency to speed up access. Inopen address hashing, the table is probed starting from the occupied slot in a specified manner, usually bylinear probing,quadratic probing, ordouble hashinguntil an open slot is located or the entire table is probed (overflow). Searching for the item follows the same procedure until the item is located, an open slot is found, or the entire table has been searched (item not in table).
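As a rough illustration of chained hashing, the following C sketch stores string keys in a fixed table of 256 slots, each heading a linked list of colliding entries; the names, the table size, and the simple string hash used here are illustrative only and are not taken from any particular library.

#include <stdlib.h>
#include <string.h>

/* Minimal sketch of chained hashing: each slot heads a linked list of
   entries whose keys collide at that slot.  Resizing, deletion and
   allocation-failure handling are omitted. */
#define NSLOTS 256

struct entry {
    char *key;
    int value;
    struct entry *next;              /* chain of colliding entries */
};

static struct entry *table[NSLOTS];

/* Any reasonable string hash will do for the illustration. */
static unsigned slot_of(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31u + (unsigned char)*s++;
    return h % NSLOTS;
}

void put(const char *key, int value) {
    unsigned i = slot_of(key);
    for (struct entry *e = table[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }  /* update */
    struct entry *e = malloc(sizeof *e);     /* otherwise prepend to the chain */
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = table[i];
    table[i] = e;
}

int get(const char *key, int *value) {
    for (struct entry *e = table[slot_of(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *value = e->value; return 1; }
    return 0;                                /* not found */
}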
Hash functions are also used to buildcachesfor large data sets stored in slow media. A cache is generally simpler than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two colliding items.[4]
Hash functions are an essential ingredient of theBloom filter, a space-efficientprobabilisticdata structurethat is used to test whether anelementis a member of aset.
A special case of hashing is known asgeometric hashingor thegrid method. In these applications, the set of all inputs is some sort ofmetric space, and the hashing function can be interpreted as apartitionof that space into a grid ofcells. The table is often an array with two or more indices (called agrid file,grid index,bucket grid, and similar names), and the hash function returns an indextuple. This principle is widely used incomputer graphics,computational geometry, and many other disciplines, to solve manyproximity problemsin theplaneor inthree-dimensional space, such as findingclosest pairsin a set of points, similar shapes in a list of shapes, similarimagesin animage database, and so on.
Hash tables are also used to implementassociative arraysanddynamic sets.[5]
A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash value in the output range should be generated with roughly the sameprobability. The reason for this last requirement is that the cost of hashing-based methods goes up sharply as the number ofcollisions—pairs of inputs that are mapped to the same hash value—increases. If some hash values are more likely to occur than others, then a larger fraction of the lookup operations will have to search through a larger set of colliding table entries.
This criterion only requires the value to beuniformly distributed, notrandomin any sense. A good randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but the converse need not be true.
Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set of all possible entries.
In other words, if a typical set ofmrecords is hashed tontable slots, then the probability of a bucket receiving many more thanm/nrecords should be vanishingly small. In particular, ifm<n, then very few buckets should have more than one or two records. A small number of collisions is virtually inevitable, even ifnis much larger thanm—see thebirthday problem.
In special cases when the keys are known in advance and the key set is static, a hash function can be found that achieves absolute (or collisionless) uniformity. Such a hash function is said to beperfect. There is no algorithmic way of constructing such a function—searching for one is afactorialfunction of the number of keys to be mapped versus the number of table slots that they are mapped into. Finding a perfect hash function over more than a very small set of keys is usually computationally infeasible; the resulting function is likely to be more computationally complex than a standard hash function and provides only a marginal advantage over a function with good statistical properties that yields a minimum number of collisions. Seeuniversal hash function.
When testing a hash function, the uniformity of the distribution of hash values can be evaluated by thechi-squared test. This test is a goodness-of-fit measure: it is the actual distribution of items in buckets versus the expected (or uniform) distribution of items. The formula is
∑j=0m−1(bj)(bj+1)/2(n/2m)(n+2m−1),{\displaystyle {\frac {\sum _{j=0}^{m-1}(b_{j})(b_{j}+1)/2}{(n/2m)(n+2m-1)}},}
wherenis the number of keys,mis the number of buckets, andbjis the number of items in bucketj.
A ratio close to 1 (for example, within a confidence interval such as 0.95 to 1.05) indicates that the evaluated hash function distributes items approximately uniformly.
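As an illustration, the ratio above can be computed directly from the observed bucket occupancies; the following C sketch (the function and variable names are illustrative) returns a value that should be close to 1 for a well-behaved hash function.

#include <stdio.h>

/* Sketch of the ratio described above: the observed sum of b_j(b_j + 1)/2
   over all buckets, divided by the value expected for a uniform
   distribution, (n/2m)(n + 2m - 1). */
double uniformity_ratio(const unsigned *b, unsigned m, unsigned n) {
    double observed = 0.0;
    for (unsigned j = 0; j < m; j++)
        observed += b[j] * (b[j] + 1.0) / 2.0;
    double expected = ((double)n / (2.0 * m)) * (n + 2.0 * m - 1.0);
    return observed / expected;
}

int main(void) {
    unsigned counts[4] = {3, 2, 3, 2};   /* hypothetical occupancies: n = 10 keys, m = 4 buckets */
    printf("ratio = %f\n", uniformity_ratio(counts, 4, 10));
    return 0;
}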
Hash functions can have some technical properties that make it more likely that they will have a uniform distribution when applied. One is the strict avalanche criterion: whenever a single input bit is complemented, each of the output bits changes with a 50% probability. The reason for this property is that selected subsets of the keyspace may have low variability. For the output to be uniformly distributed, a low amount of variability, even one bit, should translate into a high amount of variability (i.e. distribution over the table space) in the output. Each bit should change with a probability of 50% because, if some bits are reluctant to change, the keys become clustered around those values; if bits change too readily, the mapping approaches a fixed XOR function of a single bit. Standard tests for this property have been described in the literature,[6] and the relevance of the criterion to multiplicative hash functions has also been assessed.[7]
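A strict-avalanche check can be sketched empirically: flip each input bit over many keys and count how often each output bit changes, expecting counts near 50% of the trials. In the C sketch below, hash32 is only a placeholder for whatever function is under test.

#include <stdio.h>
#include <stdlib.h>

/* Empirical avalanche check for a 32-bit hash of 32-bit keys: flip each
   input bit over many random keys and count how often each output bit
   changes.  Under the strict avalanche criterion every count should be
   close to 50% of the trials. */
static unsigned hash32(unsigned k) {
    return (k * 2654435761u) ^ (k >> 16);   /* placeholder hash, not endorsed */
}

int main(void) {
    enum { TRIALS = 100000 };
    static unsigned flips[32][32];          /* [input bit][output bit] */

    for (int t = 0; t < TRIALS; t++) {
        unsigned key = (unsigned)rand() ^ ((unsigned)rand() << 16);
        unsigned h = hash32(key);
        for (int i = 0; i < 32; i++) {
            unsigned d = h ^ hash32(key ^ (1u << i));   /* flip input bit i */
            for (int j = 0; j < 32; j++)
                flips[i][j] += (d >> j) & 1u;
        }
    }
    for (int i = 0; i < 32; i++)            /* report output bit 0 only */
        printf("input bit %2d -> output bit 0 changed %.1f%% of the time\n",
               i, 100.0 * flips[i][0] / TRIALS);
    return 0;
}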
In data storage and retrieval applications, the use of a hash function is a trade-off between search time and data storage space. If search time were unbounded, then a very compact unordered linear list would be the best medium; if storage space were unbounded, then a randomly accessible structure indexable by the key-value would be very large and very sparse, but very fast. A hash function takes a finite amount of time to map a potentially large keyspace to a feasible amount of storage space searchable in a bounded amount of time regardless of the number of keys. In most applications, the hash function should be computable with minimum latency and secondarily in a minimum number of instructions.
Computational complexity varies with the number of instructions required and latency of individual instructions, with the simplest being the bitwise methods (folding), followed by the multiplicative methods, and the most complex (slowest) are the division-based methods.
Because collisions should be infrequent, and cause a marginal delay but are otherwise harmless, it is usually preferable to choose a faster hash function over one that needs more computation but saves a few collisions.
Division-based implementations can be of particular concern because a division requires multiple cycles on nearly all processormicroarchitectures. Division (modulo) by a constant can be inverted to become a multiplication by the word-size multiplicative-inverse of that constant. This can be done by the programmer, or by the compiler. Division can also be reduced directly into a series of shift-subtracts and shift-adds, though minimizing the number of such operations required is a daunting problem; the number of machine-language instructions resulting may be more than a dozen and swamp the pipeline. If the microarchitecture hashardware multiplyfunctional units, then the multiply-by-inverse is likely a better approach.
We can allow the table size n to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let n be significantly less than 2^b. Consider a pseudorandom number generator function P(key) that is uniform on the interval [0, 2^b − 1]. A hash function uniform on the interval [0, n − 1] is nP(key) / 2^b. We can replace the division by a (possibly faster) right bit shift: nP(key) >> b.
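For example, with b = 32 the reduction is a 64-bit multiply followed by a 32-bit right shift; the following C sketch assumes the input value is already roughly uniform on [0, 2^32 − 1].

#include <stdint.h>
#include <stdio.h>

/* Range reduction without division for b = 32: multiply the (roughly
   uniform) 32-bit value by the table size n and keep the high 32 bits,
   i.e. n*P(key) / 2^32 computed as a shift. */
static uint32_t reduce(uint32_t p, uint32_t n) {
    return (uint32_t)(((uint64_t)p * n) >> 32);
}

int main(void) {
    uint32_t n = 1000;                       /* table size, not a power of 2 */
    printf("%u\n", reduce(0x9E3779B9u, n));  /* some 32-bit hash/PRNG output */
    return 0;
}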
If keys are being hashed repeatedly, and the hash function is costly, then computing time can be saved by precomputing the hash codes and storing them with the keys. Matching hash codes almost certainly means that the keys are identical. This technique is used for the transposition table in game-playing programs, which stores a 64-bit hashed representation of the board position.
Auniversal hashingscheme is arandomized algorithmthat selects a hash functionhamong a family of such functions, in such a way that the probability of a collision of any two distinct keys is1/m, wheremis the number of distinct hash values desired—independently of the two keys. Universal hashing ensures (in a probabilistic sense) that the hash function application will behave as well as if it were using a random function, for any distribution of the input data. It will, however, have more collisions than perfect hashing and may require more operations than a special-purpose hash function.
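A minimal sketch of one textbook universal family, h_{a,b}(x) = ((ax + b) mod p) mod m with p a prime larger than any key and a, b chosen at random for each table, might look as follows in C; the prime and the crude random-number handling are illustrative choices.

#include <stdint.h>
#include <stdlib.h>

/* Sketch of a textbook universal family for 32-bit keys.  Any two distinct
   keys collide with probability about 1/m over the random choice of a and b. */
#define P 4294967311ULL                 /* a prime just above 2^32 */

struct uhash { uint64_t a, b; };

static uint64_t rand64(void) {          /* crude randomness, fine for a demo */
    return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
}

struct uhash uhash_new(void) {
    struct uhash h;
    /* a is kept below 2^32 so that a*x + b fits in 64 bits; this trims a few
       values from [1, p-1] and costs only a negligible deviation from 1/m. */
    h.a = 1 + rand64() % 0xFFFFFFFEULL;
    h.b = rand64() % P;
    return h;
}

uint32_t uhash_apply(const struct uhash *h, uint32_t x, uint32_t m) {
    return (uint32_t)(((h->a * x + h->b) % P) % m);
}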
A hash function that allows only certain table sizes or strings only up to a certain length, or cannot accept a seed (i.e. allow double hashing) is less useful than one that does.[citation needed]
A hash function is applicable in a variety of situations. Particularly within cryptography, notable applications include:[8]
A hash procedure must bedeterministic—for a given input value, it must always generate the same hash value. In other words, it must be afunctionof the data to be hashed, in the mathematical sense of the term. This requirement excludes hash functions that depend on external variable parameters, such aspseudo-random number generatorsor the time of day. It also excludes functions that depend on the memory address of the object being hashed, because the address may change during execution (as may happen on systems that use certain methods ofgarbage collection), although sometimes rehashing of the item is possible.
The determinism is in the context of the reuse of the function. For example,Pythonadds the feature that hash functions make use of a randomized seed that is generated once when the Python process starts in addition to the input to be hashed.[9]The Python hash (SipHash) is still a valid hash function when used within a single run, but if the values are persisted (for example, written to disk), they can no longer be treated as valid hash values, since in the next run the random value might differ.
It is often desirable that the output of a hash function have fixed size (but see below). If, for example, the output is constrained to 32-bit integer values, then the hash values can be used to index into an array. Such hashing is commonly used to accelerate data searches.[10]Producing fixed-length output from variable-length input can be accomplished by breaking the input data into chunks of specific size. Hash functions used for data searches use some arithmetic expression that iteratively processes chunks of the input (such as the characters in a string) to produce the hash value.[10]
In many applications, the range of hash values may be different for each run of the program or may change along the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function which takes two parameters—the input dataz, and the numbernof allowed hash values.
A common solution is to compute a fixed hash function with a very large range (say,0to232− 1), divide the result byn, and use the division'sremainder. Ifnis itself a power of2, this can be done bybit maskingandbit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between0andn− 1, for any value ofnthat may occur in the application. Depending on the function, the remainder may be uniform only for certain values ofn, e.g.oddorprime numbers.
When the hash function is used to store values in a hash table that outlives the run of the program, and the hash table needs to be expanded or shrunk, the hash table is referred to as a dynamic hash table.
A hash function that will relocate the minimum number of records when the table is resized is desirable. What is needed is a hash functionH(z,n)(wherezis the key being hashed andnis the number of allowed hash values) such thatH(z,n+ 1) =H(z,n)with probability close ton/(n+ 1).
Linear hashingandspiral hashingare examples of dynamic hash functions that execute in constant time but relax the property of uniformity to achieve the minimal movement property.Extendible hashinguses a dynamic hash function that requires space proportional tonto compute the hash function, and it becomes a function of the previous keys that have been inserted. Several algorithms that preserve the uniformity property but require time proportional tonto compute the value ofH(z,n)have been invented.[clarification needed]
A hash function with minimal movement is especially useful indistributed hash tables.
In some applications, the input data may contain features that are irrelevant for comparison purposes. For example, when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters. For such data, one must use a hash function that is compatible with the dataequivalencecriterion being used: that is, any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by normalizing the input before hashing it, as by upper-casing all letters.
There are several common algorithms for hashing integers. The method giving the best distribution is data-dependent. One of the simplest and most common methods in practice is the modulo division method.
If the data to be hashed is small enough, then one can use the data itself (reinterpreted as an integer) as the hashed value. The cost of computing thisidentityhash function is effectively zero. This hash function isperfect, as it maps each input to a distinct hash value.
The meaning of "small enough" depends on the size of the type that is used as the hashed value. For example, inJava, the hash code is a 32-bit integer. Thus the 32-bit integerIntegerand 32-bit floating-pointFloatobjects can simply use the value directly, whereas the 64-bit integerLongand 64-bit floating-pointDoublecannot.
Other types of data can also use this hashing scheme. For example, when mapping character strings between upper and lower case, one can use the binary encoding of each character, interpreted as an integer, to index a table that gives the alternative form of that character ("A" for "a", "8" for "8", etc.). If each character is stored in 8 bits (as in extended ASCII[Notes 2] or ISO Latin 1), the table has only 2^8 = 256 entries; in the case of Unicode characters, the table would have 17 × 2^16 = 1114112 entries.
The same technique can be used to map two-letter country codes like "us" or "za" to country names (26^2 = 676 table entries), 5-digit ZIP codes like 13083 to city names (100000 entries), etc. Invalid data values (such as the country code "xx" or the ZIP code 00000) may be left undefined in the table or mapped to some appropriate "null" value.
If the keys are uniformly or sufficiently uniformly distributed over the key space, so that the key values are essentially random, then they may be considered to be already "hashed". In this case, any number of any bits in the key may be extracted and collated as an index into the hash table. For example, a simple hash function might mask off the m least significant bits and use the result as an index into a hash table of size 2^m.
A mid-squares hash code is produced by squaring the input and extracting an appropriate number of middle digits or bits. For example, if the input is 123456789 and the hash table size 10000, then squaring the key produces 15241578750190521, so the hash code is taken as the middle 4 digits of the 17-digit number (ignoring the high digit), 8750. The mid-squares method produces a reasonable hash code if there are not a lot of leading or trailing zeros in the key. This is a variant of multiplicative hashing, but not as good because an arbitrary key is not a good multiplier.
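The binary analogue can be sketched in C by squaring a 32-bit key into a 64-bit product and keeping m bits from the middle of that product; the function below is illustrative and assumes 1 ≤ m ≤ 31.

#include <stdint.h>
#include <stdio.h>

/* Mid-squares sketch: square the key into a 64-bit product and extract m
   bits centred roughly on bit position 32 of that product. */
uint32_t midsquare_hash(uint32_t key, unsigned m) {
    uint64_t sq = (uint64_t)key * key;
    return (uint32_t)(sq >> (32 - m / 2)) & ((1u << m) - 1u);
}

int main(void) {
    printf("%u\n", midsquare_hash(123456789u, 10));   /* a 10-bit hash code */
    return 0;
}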
A standard technique is to use a modulo function on the key, by selecting a divisor M which is a prime number close to the table size, so h(K) ≡ K (mod M). The table size is usually a power of 2. This gives a distribution over {0, …, M − 1}. This gives good results over a large number of key sets. A significant drawback of division hashing is that division requires multiple cycles on most modern architectures (including x86) and can be 10 times slower than multiplication. A second drawback is that it will not break up clustered keys. For example, the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This technique works well in practice because many key sets are sufficiently random already, and the probability that a key set will be cyclical by a large prime number is small.
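A small C illustration of the clustering point: the keys above all collide modulo the round number 1000, but spread out modulo a nearby prime such as 997.

#include <stdio.h>

/* Division method h(K) = K mod M: a round divisor preserves the clustering
   of the keys, while a nearby prime breaks it up. */
int main(void) {
    unsigned keys[] = {123000u, 456000u, 789000u};
    for (int i = 0; i < 3; i++)
        printf("key %u: mod 1000 -> %u, mod 997 -> %u\n",
               keys[i], keys[i] % 1000u, keys[i] % 997u);
    return 0;
}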
Algebraic coding is a variant of the division method of hashing which uses division by a polynomial modulo 2 instead of an integer to map n bits to m bits.[3]: 512–513 In this approach, M = 2^m, and we postulate an mth-degree polynomial Z(x) = x^m + ζ_{m−1}x^{m−1} + ⋯ + ζ_0. A key K = (k_{n−1}…k_1k_0)_2 can be regarded as the polynomial K(x) = k_{n−1}x^{n−1} + ⋯ + k_1x + k_0. The remainder using polynomial arithmetic modulo 2 is K(x) mod Z(x) = h_{m−1}x^{m−1} + ⋯ + h_1x + h_0. Then h(K) = (h_{m−1}…h_1h_0)_2. If Z(x) is constructed to have t or fewer non-zero coefficients, then keys which share fewer than t bits are guaranteed to not collide.
Z is a function of k, t, and n (the last of which is a divisor of 2^k − 1) and is constructed from the finite field GF(2^k). Knuth gives an example: taking (n, m, t) = (15, 10, 7) yields Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. The derivation is as follows:
Let S be the smallest set of integers such that {1, 2, …, t} ⊆ S and (2j mod n) ∈ S for all j ∈ S.[Notes 3]
Define P(x) = ∏j∈S(x − α^j){\displaystyle P(x)=\prod _{j\in S}(x-\alpha ^{j})}, where α ∈ GF(2^k) is an element of order n and where the coefficients of P(x) are computed in this field. Then the degree of P(x) = |S|. Since α^{2j} is a root of P(x) whenever α^j is a root, it follows that the coefficients p_i of P(x) satisfy p_i^2 = p_i, so they are all 0 or 1. If R(x) = r_{n−1}x^{n−1} + ⋯ + r_1x + r_0 is any nonzero polynomial modulo 2 with at most t nonzero coefficients, then R(x) is not a multiple of P(x) modulo 2.[Notes 4] It follows that the corresponding hash function will map keys with fewer than t bits in common to unique indices.[3]: 542–543
The usual outcome is that eithernwill get large, ortwill get large, or both, for the scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode implementation.[3]: 542–543
Unique permutation hashing has a guaranteed best worst-case insertion time.[11]
Standard multiplicative hashing uses the formula h_a(K) = ⌊(aK mod W) / (W/M)⌋, which produces a hash value in {0, …, M − 1}. The value a is an appropriately chosen value that should be relatively prime to W; it should be large,[clarification needed] and its binary representation a random mix[clarification needed] of 1s and 0s. An important practical special case occurs when W = 2^w and M = 2^m are powers of 2 and w is the machine word size. In this case, this formula becomes h_a(K) = ⌊(aK mod 2^w) / 2^{w−m}⌋. This is special because arithmetic modulo 2^w is done by default in low-level programming languages and integer division by a power of 2 is simply a right-shift, so, in C, for example, this function becomes
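(The following is an illustrative sketch rather than a canonical implementation; it assumes w = 32, m = 16, and an example odd multiplier a, where a real implementation would substitute its own constants.)

#include <stdint.h>

/* Illustrative constants: w = 32 (word size), m = 16 (table of 2^16 slots),
   and an example odd multiplier a.  Arithmetic on uint32_t is already
   modulo 2^w, so the hash is a multiply followed by a shift of w - m bits. */
enum { w = 32, m = 16 };
static const uint32_t a = 2654435769u;

uint32_t mult_hash(uint32_t K) {
    return (a * K) >> (w - m);          /* floor((aK mod 2^w) / 2^(w-m)) */
}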
and for fixedmandwthis translates into a single integer multiplication and right-shift, making it one of the fastest hash functions to compute.
Multiplicative hashing is susceptible to a "common mistake" that leads to poor diffusion—higher-value input bits do not affect lower-value output bits.[12]A transmutation on the input which shifts the span of retained top bits down and XORs or ADDs them to the key before the multiplication step corrects for this. The resulting function looks like:[7]
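(Again an illustrative sketch, reusing the example constants from the previous sketch.)

#include <stdint.h>

/* The retained top bits are folded down and XORed into the key before the
   multiplication, so that high-order input bits can influence the low-order
   bits of the result. */
enum { w = 32, m = 16 };
static const uint32_t a = 2654435769u;

uint32_t mult_hash_mixed(uint32_t K) {
    K ^= K >> (w - m);                  /* fold the top m bits down */
    return (a * K) >> (w - m);
}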
Fibonacci hashing is a form of multiplicative hashing in which the multiplier is 2^w / φ, where w is the machine word length and φ (phi) is the golden ratio (approximately 1.618). A property of this multiplier is that it uniformly distributes over the table space blocks of consecutive keys with respect to any block of bits in the key. Consecutive keys within the high bits or low bits of the key (or some other field) are relatively common. The multipliers for various word lengths are:
The multiplier should be odd, so the least significant bit of the output is invertible modulo 2^w. The last two values given above are rounded (up and down, respectively) by more than 1/2 of a least-significant bit to achieve this.
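For example, with w = 64 the multiplier is approximately 0x9E3779B97F4A7C15 (an odd value), and Fibonacci hashing of a 64-bit key into a table of 2^m slots can be sketched in C as follows.

#include <stdint.h>

/* Fibonacci hashing sketch for w = 64: multiply by 2^64/phi rounded to an
   odd integer and keep the top m bits as the index (for 1 <= m <= 64). */
uint64_t fib_hash(uint64_t key, unsigned m) {
    return (key * 0x9E3779B97F4A7C15ull) >> (64 - m);
}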
Tabulation hashing, more generally known asZobrist hashingafterAlbert Zobrist, is a method for constructing universal families of hash functions by combining table lookup with XOR operations. This algorithm has proven to be very fast and of high quality for hashing purposes (especially hashing of integer-number keys).[13]
Zobrist hashing was originally introduced as a means of compactly representing chess positions in computer game-playing programs. A unique random number was assigned to represent each type of piece (six each for black and white) on each space of the board. Thus a table of 64×12 such numbers is initialized at the start of the program. The random numbers could be any length, but 64 bits was natural due to the 64 squares on the board. A position was transcribed by cycling through the pieces in a position, indexing the corresponding random numbers (vacant spaces were not included in the calculation) and XORing them together (the starting value could be 0 (the identity value for XOR) or a random seed). The resulting value was reduced by modulo, folding, or some other operation to produce a hash table index. The original Zobrist hash was stored in the table as the representation of the position.
Later, the method was extended to hashing integers by representing each byte in each of 4 possible positions in the word by a unique 32-bit random number. Thus, a table of 2^8 × 4 random numbers is constructed. A 32-bit hashed integer is transcribed by successively indexing the table with the value of each byte of the plain text integer and XORing the loaded values together (again, the starting value can be the identity value or a random seed). The natural extension to 64-bit integers is by use of a table of 2^8 × 8 64-bit random numbers.
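A C sketch of the byte-table method for 32-bit keys follows; the table would be filled once per run, and the simple random-number calls here are illustrative only.

#include <stdint.h>
#include <stdlib.h>

/* Tabulation (Zobrist-style) hashing of 32-bit keys: a 256 x 4 table of
   random 32-bit words is filled once; hashing a key XORs together the
   table entry selected by each of its four bytes. */
static uint32_t T[4][256];

void tab_init(void) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 256; j++)
            T[i][j] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();  /* demo RNG */
}

uint32_t tab_hash(uint32_t key) {
    return T[0][key & 0xFF] ^
           T[1][(key >> 8) & 0xFF] ^
           T[2][(key >> 16) & 0xFF] ^
           T[3][(key >> 24) & 0xFF];
}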
This kind of function has some nice theoretical properties, one of which is called3-tuple independence, meaning that every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values.
A hash function can be designed to exploit existing entropy in the keys. If the keys have leading or trailing zeros, or particular fields that are unused, always zero or some other constant, or generally vary little, then masking out only the volatile bits and hashing on those will provide a better and possibly faster hash function. Selected divisors or multipliers in the division and multiplicative schemes may make more uniform hash functions if the keys are cyclic or have other redundancies.
When the data values are long (or variable-length)character strings—such as personal names,web page addresses, or mail messages—their distribution is usually very uneven, with complicated dependencies. For example, text in anynatural languagehas highly non-uniform distributions ofcharacters, andcharacter pairs, characteristic of the language. For such data, it is prudent to use a hash function that depends on all characters of the string—and depends on each character in a different way.[clarification needed]
Simplistic hash functions may add the first and lastncharacters of a string along with the length, or form a word-size hash from the middle 4 characters of a string. This saves iterating over the (potentially long) string, but hash functions that do not hash on all characters of a string can readily become linear due to redundancies, clustering, or other pathologies in the key set. Such strategies may be effective as a custom hash function if the structure of the keys is such that either the middle, ends, or other fields are zero or some other invariant constant that does not differentiate the keys; then the invariant parts of the keys can be ignored.
The paradigmatic example of folding by characters is to add up the integer values of all the characters in the string. A better idea is to multiply the hash total by a constant, typically a sizable prime number, before adding in the next character, ignoring overflow. Using exclusive-or instead of addition is also a plausible alternative. The final operation would be a modulo, mask, or other function to reduce the word value to an index the size of the table. The weakness of this procedure is that information may cluster in the upper or lower bits of the bytes; this clustering will remain in the hashed result and cause more collisions than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0, and printable strings do not use the last byte code or most of the first 32 byte codes, so the information, which uses the remaining byte codes, is clustered in the remaining bits in an unobvious manner.
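A C sketch of this folding, using 31 as an example prime multiplier and leaving the final reduction to the table size to the caller:

#include <stdint.h>

/* Fold a string by characters: multiply the running total by a small prime
   and add each character, letting the word wrap on overflow.  The caller
   reduces the result modulo the table size or masks it down to an index;
   an XOR could replace the addition. */
uint32_t fold_hash(const char *s) {
    uint32_t h = 0;
    while (*s)
        h = h * 31u + (unsigned char)*s++;
    return h;
}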
The classic approach, dubbed thePJW hashbased on the work ofPeter J. WeinbergeratBell Labsin the 1970s, was originally designed for hashing identifiers into compiler symbol tables as given in the"Dragon Book".[14]This hash function offsets the bytes 4 bits before adding them together. When the quantity wraps, the high 4 bits are shifted out and if non-zero,xoredback into the low byte of the cumulative quantity. The result is a word-size hash code to which a modulo or other reducing operation can be applied to produce the final hash index.
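One common rendering of this hash for a 32-bit accumulator is sketched below; the exact shift amounts vary between published versions.

#include <stdint.h>

/* PJW-style hash: each byte is added into the accumulator shifted left by
   4 bits; when the top 4 bits become non-zero they are folded back in with
   XOR and then cleared.  A final modulo reduces the result to a table index. */
uint32_t pjw_hash(const char *s) {
    uint32_t h = 0, high;
    while (*s) {
        h = (h << 4) + (unsigned char)*s++;
        if ((high = h & 0xF0000000u) != 0)
            h ^= high >> 24;
        h &= ~high;
    }
    return h;
}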
Today, especially with the advent of 64-bit word sizes, much more efficient variable-length string hashing by word chunks is available.
Modern microprocessors will allow for much faster processing if 8-bit character strings are not hashed by processing one character at a time, but by interpreting the string as an array of 32-bit or 64-bit integers and hashing/accumulating these "wide word" integer values by means of arithmetic operations (e.g. multiplication by constant and bit-shifting). The final word, which may have unoccupied byte positions, is filled with zeros or a specified randomizing value before being folded into the hash. The accumulated hash code is reduced by a final modulo or other operation to yield an index into the table.
Analogous to the way an ASCII or EBCDIC character string representing a decimal number is converted to a numeric quantity for computing, a variable-length string can be converted as x_{k−1}a^{k−1} + x_{k−2}a^{k−2} + ⋯ + x_1a + x_0. This is simply a polynomial in a radix a > 1 that takes the components (x_0, x_1, ..., x_{k−1}) as the characters of the input string of length k. It can be used directly as the hash code, or a hash function applied to it to map the potentially large value to the hash table size. The value of a is usually a prime number large enough to hold the number of different characters in the character set of potential keys. Radix conversion hashing of strings minimizes the number of collisions.[15] Available data sizes may restrict the maximum length of string that can be hashed with this method. For example, a 128-bit word will hash only a 26-character alphabetic string (ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using radix 97 and a 64-bit word. However, alphabetic keys are usually of modest length, because keys must be stored in the hash table. Numeric character strings are usually not a problem; 64 bits can count up to 10^19, or 19 decimal digits with radix 10.
In some applications, such assubstring search, one can compute a hash functionhfor everyk-charactersubstringof a givenn-character string by advancing a window of widthkcharacters along the string, wherekis a fixed integer, andn>k. The straightforward solution, which is to extract such a substring at every character position in the text and computehseparately, requires a number of operations proportional tok·n. However, with the proper choice ofh, one can use the technique of rolling hash to compute all those hashes with an effort proportional tomk+nwheremis the number of occurrences of the substring.[16][what is the choice of h?]
The most familiar algorithm of this type isRabin-Karpwith best and average case performanceO(n+mk)and worst caseO(n·k)(in all fairness, the worst case here is gravely pathological: both the text string and substring are composed of a repeated single character, such ast="AAAAAAAAAAA", ands="AAA"). The hash function used for the algorithm is usually theRabin fingerprint, designed to avoid collisions in 8-bit character strings, but other suitable hash functions are also used.
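A C sketch of a Rabin-Karp-style search using a rolling polynomial hash modulo a prime (a common substitute for the Rabin fingerprint proper); the radix and modulus are illustrative choices, and hash matches are confirmed with memcmp to rule out collisions.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Sliding the window one character costs O(1): subtract the outgoing
   character's contribution, multiply by the radix, add the incoming one. */
#define B 257ULL              /* radix */
#define Q 1000000007ULL       /* modulus (prime) */

long rk_search(const char *text, const char *pat) {
    size_t n = strlen(text), k = strlen(pat);
    if (k == 0 || k > n) return -1;

    uint64_t hp = 0, ht = 0, bk = 1;         /* bk = B^(k-1) mod Q */
    for (size_t i = 0; i + 1 < k; i++) bk = bk * B % Q;
    for (size_t i = 0; i < k; i++) {
        hp = (hp * B + (unsigned char)pat[i]) % Q;
        ht = (ht * B + (unsigned char)text[i]) % Q;
    }
    for (size_t i = 0; ; i++) {
        if (ht == hp && memcmp(text + i, pat, k) == 0)
            return (long)i;                  /* verified match at position i */
        if (i + k >= n) return -1;
        ht = (ht + Q - (unsigned char)text[i] * bk % Q) % Q;  /* drop text[i] */
        ht = (ht * B + (unsigned char)text[i + k]) % Q;       /* add next char */
    }
}

int main(void) {
    printf("%ld\n", rk_search("the quick brown fox", "brown"));  /* prints 10 */
    return 0;
}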
Worst case results for a hash function can be assessed two ways: theoretical and practical. The theoretical worst case is the probability that all keys map to a single slot. The practical worst case is the expected longest probe sequence (hash function + collision resolution method). This analysis considers uniform hashing, that is, any key will map to any particular slot with probability1/m, a characteristic of universal hash functions.
While Knuth worries about adversarial attack on real-time systems,[24] Gonnet has shown that the probability of such a case is "ridiculously small". His representation was that the probability of k of n keys mapping to a single slot is α^k / (e^α k!), where α is the load factor, n/m.[25]
The termhashoffers a natural analogy with its non-technical meaning (to chop up or make a mess out of something), given how hash functions scramble their input data to derive their output.[26]: 514In his research for the precise origin of the term,Donald Knuthnotes that, whileHans Peter LuhnofIBMappears to have been the first to use the concept of a hash function in a memo dated January 1953, the term itself did not appear in published literature until the late 1960s, in Herbert Hellerman'sDigital Computer System Principles, even though it was already widespread jargon by then.[26]: 547–548
|
https://en.wikipedia.org/wiki/Hash_function
|
Incryptography,Triple DES(3DESorTDES), officially theTriple Data Encryption Algorithm(TDEAorTriple DEA), is asymmetric-keyblock cipher, which applies theDEScipher algorithm three times to each data block. The 56-bit key of the Data Encryption Standard (DES) is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power; Triple DES increases the effective security to 112 bits. ACVEreleased in 2016,CVE-2016-2183, disclosed a major security vulnerability in the DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of 3DES, led toNISTdeprecating 3DES in 2019 and disallowing all uses (except processing already encrypted data) by the end of 2023.[1]It has been replaced with the more secure, more robustAES.
While US government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm),[2]RFC 1851 referred to it as 3DES from the time it first promulgated the idea, and this namesake has since come into wide use by most vendors, users, and cryptographers.[3][4][5][6]
In 1978, a triple encryption method using DES with two 56-bit keys was proposed byWalter Tuchman; in 1981,MerkleandHellmanproposed a more secure triple-key version of 3DES with 112 bits of security.[7]
The Triple Data Encryption Algorithm is variously defined in several standards documents:
The original DES cipher'skey sizeof 56 bits was considered generally sufficient when it was designed, but the availability of increasing computational power madebrute-force attacksfeasible. Triple DES provides a relatively simple method of increasing the key size of DES to protect against such attacks, without the need to design a completely new block cipher algorithm.
A naive approach to increase the strength of a block encryption algorithm with a short key length (like DES) would be to use two keys(K1,K2){\displaystyle (K1,K2)}instead of one, and encrypt each block twice:EK2(EK1(plaintext)){\displaystyle E_{K2}(E_{K1}({\textrm {plaintext}}))}. If the original key length isn{\displaystyle n}bits, one would hope this scheme provides security equivalent to using a key2n{\displaystyle 2n}bits long. Unfortunately, this approach is vulnerable to themeet-in-the-middle attack: given aknown plaintextpair(x,y){\displaystyle (x,y)}, such thaty=EK2(EK1(x)){\displaystyle y=E_{K2}(E_{K1}(x))}, one can recover the key pair(K1,K2){\displaystyle (K1,K2)}in2n+1{\displaystyle 2^{n+1}}steps, instead of the22n{\displaystyle 2^{2n}}steps one would expect from an ideally secure algorithm with2n{\displaystyle 2n}bits of key.
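The attack can be illustrated with a toy cipher small enough to run: the sketch below uses a 32-bit block cipher with 16-bit keys (not DES) and recovers a double-encryption key pair by tabulating the forward half and meeting it from the backward half; the toy cipher and all names are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Meet-in-the-middle on double encryption with 16-bit keys: tabulate
   E_k1(x) for every k1, then for every k2 test whether D_k2(y) appears in
   the table.  This costs about 2 * 2^16 cipher operations instead of the
   2^32 of a naive search over both keys. */
static uint32_t expand(uint16_t k) { return (uint32_t)k * 2654435761u; }
static uint32_t rotl13(uint32_t x) { return (x << 13) | (x >> 19); }

static uint32_t toy_encrypt(uint16_t k, uint32_t x) {
    uint32_t K = expand(k);
    return (x + K) ^ rotl13(K);
}
static uint32_t toy_decrypt(uint16_t k, uint32_t y) {
    uint32_t K = expand(k);
    return (y ^ rotl13(K)) - K;
}

struct mid { uint32_t v; uint16_t k1; };
static int cmp(const void *a, const void *b) {
    uint32_t x = ((const struct mid *)a)->v, y = ((const struct mid *)b)->v;
    return (x > y) - (x < y);
}

int main(void) {
    uint16_t k1 = 0x1234, k2 = 0xBEEF;                 /* the secret keys */
    uint32_t x1 = 0xCAFEBABEu, x2 = 0x12345678u;       /* known plaintexts */
    uint32_t y1 = toy_encrypt(k2, toy_encrypt(k1, x1));
    uint32_t y2 = toy_encrypt(k2, toy_encrypt(k1, x2));

    /* Forward half: one table entry per candidate first key. */
    struct mid *tab = malloc(65536 * sizeof *tab);
    for (uint32_t k = 0; k < 65536; k++)
        tab[k] = (struct mid){ toy_encrypt((uint16_t)k, x1), (uint16_t)k };
    qsort(tab, 65536, sizeof *tab, cmp);

    /* Backward half: meet in the middle, then confirm on the second pair. */
    for (uint32_t k = 0; k < 65536; k++) {
        struct mid probe = { toy_decrypt((uint16_t)k, y1), 0 };
        struct mid *hit = bsearch(&probe, tab, 65536, sizeof *tab, cmp);
        if (!hit) continue;
        while (hit > tab && (hit - 1)->v == probe.v) hit--;   /* first duplicate */
        for (; hit < tab + 65536 && hit->v == probe.v; hit++)
            if (toy_encrypt((uint16_t)k, toy_encrypt(hit->k1, x2)) == y2)
                printf("recovered k1=0x%04X k2=0x%04X\n", hit->k1, (unsigned)k);
    }
    free(tab);
    return 0;
}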
Therefore, Triple DES uses a "key bundle" that comprises three DESkeys,K1{\displaystyle K1},K2{\displaystyle K2}andK3{\displaystyle K3}, each of 56 bits (excludingparity bits). The encryption algorithm is:
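{\displaystyle {\textrm {ciphertext}}=E_{K3}(D_{K2}(E_{K1}({\textrm {plaintext}})))}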
That is, encrypt withK1{\displaystyle K1},decryptwithK2{\displaystyle K2}, then encrypt withK3{\displaystyle K3}.
Decryption is the reverse:
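{\displaystyle {\textrm {plaintext}}=D_{K1}(E_{K2}(D_{K3}({\textrm {ciphertext}})))}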
That is, decrypt withK3{\displaystyle K3},encryptwithK2{\displaystyle K2}, then decrypt withK1{\displaystyle K1}.
Each triple encryption encrypts oneblockof 64 bits of data.
In each case, the middle operation is the reverse of the first and last. This improves the strength of the algorithm when usingkeying option2 and providesbackward compatibilitywith DES with keying option 3.
The standards define three keying options:
Keying option 1 (all three keys independent): This is the strongest option, with 3 × 56 = 168 independent key bits. It is still vulnerable to the meet-in-the-middle attack, but the attack requires 2^(2 × 56) = 2^112 steps.
Keying option 2 (K1 and K2 independent, K3 = K1): This provides a shorter key length of 56 × 2 = 112 bits and a reasonable compromise between DES and keying option 1, with the same caveat as above.[18] This is an improvement over "double DES", which only requires 2^56 steps to attack. NIST disallowed this option in 2015.[16]
Keying option 3 (all three keys identical, K1 = K2 = K3): This is backward-compatible with DES, since two of the operations cancel out. ISO/IEC 18033-3 never allowed this option, and NIST no longer allows K1 = K2 or K2 = K3.[16][13]
Each DES key is 8odd-paritybytes, with 56 bits of key and 8 bits of error-detection.[9]A key bundle requires 24 bytes for option 1, 16 for option 2, or 8 for option 3.
NIST (and the current TCG specifications version 2.0 of approved algorithms forTrusted Platform Module) also disallows using any one of the 64 following 64-bit values in any keys (note that 32 of them are the binary complement of the 32 others; and that 32 of these keys are also the reverse permutation of bytes of the 32 others), listed here in hexadecimal (in each byte, the least significant bit is an odd-parity generated bit, which is discarded when forming the effectively 56-bit key):
With these restrictions on allowed keys, Triple DES was reapproved with keying options 1 and 2 only. Generally, the three keys are generated by taking 24 bytes from a strong random generator, and only keying option 1 should be used (option 2 needs only 16 random bytes, but strong random generators are hard to assert and it is considered best practice to use only option 1).
As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety ofmodes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A,[19]that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that forcipher block chaining, theinitialization vectorshall be different each time, whereas ISO/IEC 10116[20]does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single-block algorithm, and do not place any restrictions on the modes of operation for multiple blocks.
In general, Triple DES with three independent keys (keying option1) has a key length of 168 bits (three 56-bit DES keys), but due to themeet-in-the-middle attack, the effective security it provides is only 112 bits.[16]Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certainchosen-plaintextorknown-plaintextattacks,[21][22]and thus it is designated by NIST to have only 80bits of security.[16]This can be considered insecure; as a consequence, Triple DES's planned deprecation was announced by NIST in 2017.[23]
The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN.[24]Practical Sweet32 attack on 3DES-based cipher-suites in TLS required236.6{\displaystyle 2^{36.6}}blocks (785 GB) for a full attack, but researchers were lucky to get a collision just after around220{\displaystyle 2^{20}}blocks, which took only 25 minutes.
The security of TDEA is affected by the number of blocks processed with one key bundle. One key bundle shall not be used to apply cryptographic protection (e.g., encrypt) more than220{\displaystyle 2^{20}}64-bit data blocks.
OpenSSLdoes not include 3DES by default since version 1.1.0 (August 2016) and considers it a "weak cipher".[25]
As of 2008, theelectronic paymentindustry uses Triple DES and continues to develop and promulgate standards based upon it, such asEMV.[26]
Earlier versions ofMicrosoft OneNote,[27]Microsoft Outlook2007[28]and MicrosoftSystem Center Configuration Manager2012[29]use Triple DES to password-protect user content and system data. However, in December 2018, Microsoft announced the retirement of 3DES throughout their Office 365 service.[30]
FirefoxandMozilla Thunderbirduse Triple DES inCBC modeto encrypt website authentication login credentials when using a master password.[31]
Below is a list of cryptography libraries that support Triple DES:
Some implementations above may not include 3DES in the default build, in later or more recent versions, but may still support decryption in order to handle existing data.
|
https://en.wikipedia.org/wiki/Triple_DES
|
TheData Encryption Standard(DES/ˌdiːˌiːˈɛs,dɛz/) is asymmetric-key algorithmfor theencryptionof digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement ofcryptography.
Developed in the early 1970s atIBMand based on an earlier design byHorst Feistel, the algorithm was submitted to theNational Bureau of Standards(NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with theNational Security Agency(NSA), the NBS selected a slightly modified version (strengthened againstdifferential cryptanalysis, but weakened againstbrute-force attacks), which was published as an officialFederal Information Processing Standard(FIPS) for the United States in 1977.[2]
The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose fromclassifieddesign elements, a relatively shortkey lengthof thesymmetric-keyblock cipherdesign, and the involvement of the NSA, raising suspicions about abackdoor. TheS-boxesthat had prompted those suspicions were designed by the NSA to address a vulnerability they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack.[2][failed verification]The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and theircryptanalysis.
DES is insecure due to the relatively short56-bit key size. In January 1999,distributed.netand theElectronic Frontier Foundationcollaborated to publicly break a DES key in 22 hours and 15 minutes (see§ Chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice[citation needed]. DES has been withdrawn as a standard by theNIST.[3]Later, the variantTriple DESwas developed to increase the security level, but it is considered insecure today as well. DES has been superseded by theAdvanced Encryption Standard(AES).
Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as theDEA(Data Encryption Algorithm).
The origins of DES date to 1972, when aNational Bureau of Standardsstudy of US governmentcomputer securityidentified a need for a government-wide standard for encrypting unclassified, sensitive information.[4]
Around the same time, engineerMohamed Atallain 1972 foundedAtalla Corporationand developed the firsthardware security module(HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a securePINgenerating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard.[3]Atalla was an early competitor toIBMin the banking market, and was cited as an influence by IBM employees who worked on the DES standard.[5]TheIBM 3624later adopted a similar PIN verification system to the earlier Atalla system.[6]
On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. This time,IBMsubmitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm,Horst Feistel'sLucifercipher. The team at IBM involved in cipher design and analysis included Feistel,Walter Tuchman,Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas,Roy Adler,Edna Grossman, Bill Notz, Lynn Smith, andBryant Tuckerman.
On 17 March 1975, the proposed DES was published in theFederal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was criticism received frompublic-key cryptographypioneersMartin HellmanandWhitfield Diffie,[1]citing a shortenedkey lengthand the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages.[7]Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."[8]TheUnited States Senate Select Committee on Intelligencereviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote:
In the development of DES, NSA convincedIBMthat a reduced key size was sufficient; indirectly assisted in the development of the S-box structures; and certified that the final DES algorithm was, to the best of their knowledge, free from any statistical or mathematical weakness.[9]
However, it also found that
NSA did not tamper with the design of the algorithm in any way. IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed upon key size was more than adequate for all commercial applications for which the DES was intended.[10]
Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"[11]In contrast, a declassified NSA book on cryptologic history states:
In 1973 NBS solicited private industry for a data encryption standard (DES). The first offerings were disappointing, so NSA began working on its own algorithm. Then Howard Rosenblum, deputy director for research and engineering, discovered that Walter Tuchman of IBM was working on a modification to Lucifer for general use. NSA gave Tuchman a clearance and brought him in to work jointly with the Agency on his Lucifer modification."[12]
and
NSA worked closely with IBM to strengthen the algorithm against all except brute-force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately they compromised on a 56-bit key.[13][14]
Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication byEli BihamandAdi Shamirofdifferential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes.[15]According toSteven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret.[16]Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it".[16]Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."[17]
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 asFIPSPUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), followinga public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, butNISThas approvedTriple DESthrough the year 2030 for sensitive government information.[18]
The algorithm is also specified inANSIX3.92 (Today X3 is known asINCITSand ANSI X3.92 as ANSIINCITS92),[19]NIST SP 800-67[18]and ISO/IEC 18033-3[20](as a component ofTDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was theElectronic Frontier Foundation'sDES crackerin 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods ofcryptanalysisare discussed in more detail later in this article.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES,
DES is the archetypalblock cipher—analgorithmthat takes a fixed-length string ofplaintextbits and transforms it through a series of complicated operations into anotherciphertextbitstring of the same length. In the case of DES, theblock sizeis 64 bits. DES also uses akeyto customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checkingparity, and are thereafter discarded. Hence the effectivekey lengthis 56 bits.
The key is nominally stored or transmitted as 8bytes, each with odd parity. According to ANSI X3.92-1981 (Now, known as ANSIINCITS92–1981), section 3.5:
One bit in each 8-bit byte of theKEYmay be utilized for error detection in key generation, distribution, and storage. Bits 8, 16,..., 64 are for use in ensuring that each byte is of odd parity.
Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in amode of operation. FIPS-81 specifies several modes for use with DES.[27]Further comments on the usage of DES are contained in FIPS-74.[28]
Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)
The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termedrounds. There is also an initial and finalpermutation, termedIPandFP, which areinverses(IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware.[29]
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as theFeistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms.
The ⊕ symbol denotes theexclusive-OR(XOR) operation. TheF-functionscrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.
The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages:
The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified byClaude Shannonin the 1940s as a necessary condition for a secure yet practical cipher.
Figure 3 illustrates thekey schedulefor encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 byPermuted Choice 1(PC-1)—the remaining eight bits are either discarded or used asparitycheck bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected byPermuted Choice 2(PC-2)—24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.
The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes.
Pseudocodefor the DES algorithm follows.
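The full pseudocode, with its permutation tables and S-boxes, is lengthy; the structural C sketch below shows only the 16-round Feistel skeleton, with the standard's tables and key schedule abstracted behind placeholder helpers (their definitions are given in FIPS PUB 46-3).

#include <stdint.h>

/* Placeholder helpers standing in for the tables of FIPS PUB 46-3. */
uint64_t initial_permutation(uint64_t block);        /* IP */
uint64_t final_permutation(uint64_t block);          /* FP = inverse of IP */
uint32_t feistel(uint32_t half, uint64_t subkey);    /* E-expansion, XOR with the
                                                        48-bit subkey, S-boxes, P */
void     key_schedule(uint64_t key, uint64_t subkeys[16]);  /* PC-1, rotations, PC-2 */

uint64_t des_encrypt_block(uint64_t block, uint64_t key) {
    uint64_t subkeys[16];
    key_schedule(key, subkeys);

    block = initial_permutation(block);
    uint32_t left  = (uint32_t)(block >> 32);
    uint32_t right = (uint32_t)block;

    for (int round = 0; round < 16; round++) {       /* 16 identical rounds */
        uint32_t tmp = right;
        right = left ^ feistel(right, subkeys[round]);
        left  = tmp;
    }
    /* The halves are swapped after the last round, so decryption is the same
       procedure with the subkeys applied in reverse order. */
    return final_permutation(((uint64_t)right << 32) | left);
}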
Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number ofknownorchosen plaintextsto carry out, and are not a concern in practice.
For any cipher, the most basic method of attack is brute force—trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip.[30]
In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day.[1][31]By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s.[32]In 1997,RSA Securitysponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by theDESCHALL Project, led by Rocke Verser,Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by theElectronic Frontier Foundation(EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (seeEFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching.
The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of theUniversities of BochumandKiel, both inGermany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of thesefield-programmable gate arrays(FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well.[33]One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000.[34]The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement ofdigital hardware—seeMoore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007,SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs.[35]Their 256 Spartan-6 LX150 model has further lowered this time.
In 2012, David Hulton andMoxie Marlinspikeannounced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours and this service is offered for a fee online.[36][37]However, the service has been offline since the year 2024, supposedly for maintenance but probably permanently switched off.[38]
There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search:differential cryptanalysis(DC),[39]linear cryptanalysis(LC),[40]andDavies' attack.[41]However, the attacks are theoretical and are generally considered infeasible to mount in practice;[42]these types of attack are sometimes termed certificational weaknesses.
There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains.
Differential-linear cryptanalysiswas proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack.[47]An enhanced version of the attack can break 9-round DES with 2^{15.8} chosen plaintexts and has a 2^{29.2} time complexity (Biham and others, 2002).[48]
DES exhibits the complementation property, namely that{\displaystyle E_{\overline {K}}({\overline {P}})={\overline {E_{K}(P)}}={\overline {C}},}
wherex¯{\displaystyle {\overline {x}}}is thebitwise complementofx.{\displaystyle x.}EK{\displaystyle E_{K}}denotes encryption with keyK.{\displaystyle K.}P{\displaystyle P}andC{\displaystyle C}denote plaintext and ciphertext blocks respectively. The complementation property means that the work for abrute-force attackcould be reduced by a factor of 2 (or a single bit) under achosen-plaintextassumption. By definition, this property also applies to TDES cipher.[49]
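The property can be checked experimentally. The sketch below assumes the third-party PyCryptodome package provides the DES primitive (the key and plaintext values are arbitrary); it encrypts a block and verifies that complementing both key and plaintext complements the ciphertext.

```python
# Illustrative check of the DES complementation property (assumes PyCryptodome).
from Crypto.Cipher import DES

def bitwise_not(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)

key = bytes.fromhex("133457799BBCDFF1")        # arbitrary 8-byte key
plaintext = bytes.fromhex("0123456789ABCDEF")  # one 64-bit block

c = DES.new(key, DES.MODE_ECB).encrypt(plaintext)
c_comp = DES.new(bitwise_not(key), DES.MODE_ECB).encrypt(bitwise_not(plaintext))

assert c_comp == bitwise_not(c)   # E_~K(~P) == ~E_K(P)
print("complementation property verified for this key/plaintext pair")
```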
DES also has four so-calledweak keys. Encryption (E) and decryption (D) under a weak key have the same effect (seeinvolution):{\displaystyle E_{K}(E_{K}(P))=P{\text{ for all }}P,{\text{ or equivalently, }}E_{K}=D_{K}.}
There are also six pairs ofsemi-weak keys. Encryption with one of the pair of semiweak keys,K1{\displaystyle K_{1}}, operates identically to decryption with the other,K2{\displaystyle K_{2}}:{\displaystyle E_{K_{1}}(E_{K_{2}}(P))=P,{\text{ or equivalently, }}E_{K_{1}}=D_{K_{2}}.}
It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage.
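The four weak keys are 0101010101010101, FEFEFEFEFEFEFEFE, E0E0E0E0F1F1F1F1 and 1F1F1F1F0E0E0E0E (in hexadecimal, including parity bits). The sketch below checks the involution property for each of them; it again assumes PyCryptodome, and note that some DES implementations refuse to accept weak keys at all, in which case the check cannot be run as written.

```python
# Encrypting twice under a DES weak key returns the original plaintext.
# Assumes PyCryptodome; some DES implementations reject weak keys outright.
from Crypto.Cipher import DES

WEAK_KEYS = [
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("FEFEFEFEFEFEFEFE"),
    bytes.fromhex("E0E0E0E0F1F1F1F1"),
    bytes.fromhex("1F1F1F1F0E0E0E0E"),
]
plaintext = b"8 bytes!"   # exactly one 64-bit block

for key in WEAK_KEYS:
    cipher = DES.new(key, DES.MODE_ECB)
    assert cipher.encrypt(cipher.encrypt(plaintext)) == plaintext  # E_K is its own inverse
print("all four weak keys act as involutions on this block")
```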
DES has also been proved not to be agroup, or more precisely, the set{EK}{\displaystyle \{E_{K}\}}(for all possible keysK{\displaystyle K}) underfunctional compositionis not a group, nor "close" to being a group.[50]This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such asTriple DESwould not increase the security, because repeated encryption (and decryptions) under different keys would be equivalent to encryption under another, single key.[51]
Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques.
SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper.
Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them.[52][53][54][55][56][57][58][59][60]
Concerns about security and the relatively slow operation of DES insoftwaremotivated researchers to propose a variety of alternativeblock cipherdesigns, which started to appear in the late 1980s and early 1990s: examples includeRC5,Blowfish,IDEA,NewDES,SAFER,CAST5andFEAL. Most of these designs kept the 64-bitblock sizeof DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In theSoviet UniontheGOST 28147-89algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used inRussialater.
Another approach to strengthening DES was the development ofTriple DES (3DES), which applies the DES algorithm three times to each data block to increase security. However, 3DES was later deprecated by NIST due to its inefficiencies and susceptibility to certain cryptographic attacks.
To address these security concerns, modern cryptographic systems have moved to stronger symmetric ciphers, above all AES, complemented by asymmetric techniques such as RSA, elliptic-curve cryptography (ECC) and post-quantum cryptography. These replacements aim to provide stronger resistance against both classical and quantum computing attacks.
A crucial aspect of DES involves itspermutations and key scheduling, which play a significant role in its encryption process. Analyzing these permutations helps in understanding DES's security limitations and the need for replacement algorithms; a detailed breakdown of the DES permutations and their role in encryption is given in the cited analysis.[61]
DES itself can be adapted and reused in a more secure scheme. Many former DES users now useTriple DES(TDES) which was described and analysed by one of DES's patentees (seeFIPSPub 46–3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative isDES-X, which increases the key size by XORing extra key material before and after DES.GDESwas a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.
On January 2, 1997, NIST announced that they wished to choose a successor to DES.[62]In 2001, after an international competition, NIST selected a new cipher, theAdvanced Encryption Standard(AES), as a replacement.[63]The algorithm which was selected as the AES was submitted by its designers under the nameRijndael. Other finalists in the NISTAES competitionincludedRC6,Serpent,MARS, andTwofish.
https://en.wikipedia.org/wiki/Data_Encryption_Standard
Wi-Fi Protected Access(WPA),Wi-Fi Protected Access 2(WPA2), andWi-Fi Protected Access 3(WPA3) are the three security certification programs developed after 2000 by theWi-Fi Allianceto secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system,Wired Equivalent Privacy(WEP).[1]
WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (orIEEE 802.11i-2004) standard.
In January 2018, the Wi-Fi Alliance announced the release of WPA3, which has several security improvements over WPA2.[2]
As of 2023, most computers that connect to a wireless network have support for using WPA, WPA2, or WPA3. All versions thereof, at least as implemented through May, 2021, are vulnerable to compromise.[3]
WEP (Wired Equivalent Privacy) is an early encryption protocol for wireless networks, designed to secure WLAN connections. It supports 64-bit and 128-bit keys, combining user-configurable and factory-set bits. WEP uses the RC4 algorithm for encrypting data, creating a unique per-packet key by combining a new Initialization Vector (IV) with a shared key (in the 64-bit variant, 40 bits of shared key plus a 24-bit IV). Decryption involves reversing this process, using the IV and the shared key to generate a key stream and decrypt the payload. Despite its initial use, WEP's significant vulnerabilities led to the adoption of more secure protocols.[4]
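The per-packet keying just described can be sketched in a few lines. The RC4 implementation below is a minimal illustration rather than production code, and the 3-byte IV plus 5-byte shared key corresponds to the 64-bit WEP variant discussed above; the concrete values are made up.

```python
# Minimal sketch of WEP-style per-packet encryption: keystream = RC4(IV || shared_key).
def rc4(key: bytes, length: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x01\x02\x03"                  # 24-bit IV, transmitted in the clear with each frame
shared_key = b"\xaa\xbb\xcc\xdd\xee"  # 40-bit shared WEP key
payload = b"hello, wireless world"

keystream = rc4(iv + shared_key, len(payload))
ciphertext = bytes(p ^ k for p, k in zip(payload, keystream))

# The receiver, knowing the shared key and reading the IV from the frame,
# regenerates the same keystream and XORs again to recover the payload.
assert bytes(c ^ k for c, k in zip(ciphertext, keystream)) == payload
```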
The Wi-Fi Alliance intended WPA as an intermediate measure to take the place ofWEPpending the availability of the fullIEEE 802.11istandard. WPA could be implemented throughfirmware upgradesonwireless network interface cardsdesigned for WEP that began shipping as far back as 1999. However, since the changes required in thewireless access points(APs) were more extensive than those needed on the network cards, most pre-2003 APs were not upgradable by vendor-provided methods to support WPA.
The WPA protocol implements theTemporal Key Integrity Protocol(TKIP). WEP uses a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromise WEP.[5]
WPA also includes aMessage Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces thecyclic redundancy check(CRC) that was used by the WEP standard. CRC's main flaw is that it does not provide a sufficiently strongdata integrityguarantee for the packets it handles.[6]Well-testedmessage authentication codesexisted to solve these problems, but they require too much computation to be used on old network cards. WPA's TKIP uses a message integrity check algorithm calledMichaelto verify the integrity of the packets. Michael is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of the message integrity code hash function, namedMichael, to retrieve the keystream from short packets to use for re-injection andspoofing.[7][8]
Ratified in 2004, WPA2 replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it includes support forCCMP, anAES-based encryption mode.[9][10][11]Certification began in September, 2004. From March 13, 2006, to June 30, 2020, WPA2 certification was mandatory for all new devices to bear the Wi-Fi trademark.[12]In WPA2-protected WLANs, secure communication is established through a multi-step process. Initially, devices associate with the Access Point (AP) via an association request. This is followed by a 4-way handshake, a crucial step for ensuring that both the client and AP have the correctPre-Shared Key(PSK) without actually transmitting it. During this handshake, aPairwise Transient Key(PTK) is generated for secure data exchange.
WPA2 employs the Advanced Encryption Standard (AES) with a 128-bit key, enhancing security through the Counter Mode/CBC-MAC Protocol (CCMP). This protocol ensures robust encryption and data integrity, using different Initialization Vectors (IVs) for encryption and authentication purposes.[13]
The 4-way handshake involves: (1) the AP sending a random nonce (ANonce) to the client; (2) the client deriving the PTK from the PSK, both nonces and both MAC addresses, and replying with its own nonce (SNonce) plus a Message Integrity Code (MIC); (3) the AP deriving the same PTK, verifying the MIC, and sending the encrypted Group Temporal Key (GTK); and (4) the client acknowledging the exchange and installing the keys.
Post-handshake, the established PTK is used for encrypting unicast traffic, and theGroup Temporal Key(GTK) is used for broadcast traffic. This comprehensive authentication and encryption mechanism is what makes WPA2 a robust security standard for wireless networks.[14]
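The PTK derivation at the heart of this handshake is commonly described as a PRF built from HMAC-SHA1, keyed by the PMK and fed the two MAC addresses and the two nonces. The sketch below follows that common description and should be read as illustrative rather than normative; all concrete values are made up.

```python
# Illustrative sketch of 802.11i-style PTK derivation (not a reference implementation).
import hashlib
import hmac

def prf(key: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    """Concatenate counter-separated HMAC-SHA1 blocks and truncate to nbytes."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(key, label + b"\x00" + data + bytes([counter]), hashlib.sha1).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk: bytes, ap_mac: bytes, sta_mac: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac) +
            min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, 48)  # 48 bytes for CCMP

# Made-up values: the PMK comes from the PSK (see WPA-Personal below), the nonces
# from the first two handshake messages, the MAC addresses from the frames.
ptk = derive_ptk(b"\x00" * 32, b"\x02" * 6, b"\x04" * 6, b"\xaa" * 32, b"\xbb" * 32)
kck, kek, tk = ptk[:16], ptk[16:32], ptk[32:48]  # confirmation, encryption and temporal keys
```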
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2.[15][16]Certification began in June 2018,[17]and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.[18]
The new standard uses an equivalent 192-bit cryptographic strength in WPA3-Enterprise mode[19](AES-256inGCM modewithSHA-384asHMAC), and still mandates the use ofCCMP-128(AES-128inCCM mode) as the minimum encryption algorithm in WPA3-Personal mode.TKIPis not allowed in WPA3.
The WPA3 standard also replaces thepre-shared key(PSK) exchange withSimultaneous Authentication of Equals(SAE) exchange, a method originally introduced withIEEE 802.11s, resulting in a more secure initial key exchange in personal mode[20][21]andforward secrecy.[22]The Wi-Fi Alliance also says that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.[2][23]WPA3 also supportsOpportunistic Wireless Encryption (OWE)for open Wi-Fi networks that do not have passwords.
Protection of management frames as specified in theIEEE 802.11wamendment is also enforced by the WPA3 specifications.
WPA has been designed specifically to work with wireless hardware produced prior to the introduction of WPA protocol,[24]which provides inadequate security throughWEP. Some of these devices support WPA only after applying firmware upgrades, which are not available for some legacy devices.[24]
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security protocols. WPA3 is required since July 1, 2020.[18]
Different WPA versions and protection mechanisms can be distinguished based on the target end-user (such as WEP, WPA, WPA2, WPA3) and the method of authentication key distribution, as well as the encryption protocol used. As of July 2020, WPA3 is the latest iteration of the WPA standard, bringing enhanced security features and addressing vulnerabilities found in WPA2. WPA3 improves authentication methods and employs stronger encryption protocols, making it the recommended choice for securing Wi-Fi networks.[23]
Also referred to asWPA-PSK(pre-shared key) mode, this is designed for home, small office and basic uses and does not require an authentication server.[25]Each wireless network device encrypts the network traffic by deriving its 128-bit encryption key from a 256-bit sharedkey. This key may be entered either as a string of 64hexadecimaldigits, or as apassphraseof 8 to 63printable ASCII characters.[26]This pass-phrase-to-PSK mapping is nevertheless not binding, as Annex J is informative in the latest 802.11 standard.[27]If ASCII characters are used, the 256-bit key is calculated by applying thePBKDF2key derivation functionto the passphrase, using theSSIDas thesaltand 4096 iterations ofHMAC-SHA1.[28]WPA-Personal mode is available on all three WPA versions.
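The passphrase-to-PSK mapping described above corresponds to a single call to PBKDF2 with HMAC-SHA1, the SSID as salt, 4096 iterations and a 256-bit output, and can be reproduced with Python's standard library; the passphrase and SSID below are arbitrary examples.

```python
# WPA-Personal PSK derivation: PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes).
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

psk = wpa_psk("correct horse battery staple", "ExampleSSID")
print(psk.hex())  # 64 hexadecimal digits; this value could also be entered directly as the key
```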
This enterprise mode uses an802.1Xserver for authentication, offering higher security control by replacing the vulnerable WEP with the more advanced TKIP encryption. TKIP ensures continuous renewal of encryption keys, reducing security risks. Authentication is conducted through aRADIUSserver, providing robust security, especially vital in corporate settings. This setup allows integration with Windows login processes and supports various authentication methods likeExtensible Authentication Protocol, which uses certificates for secure authentication, and PEAP, creating a protected environment for authentication without requiring client certificates.[29]
Originally, only EAP-TLS (Extensible Authentication Protocol-Transport Layer Security) was certified by the Wi-Fi alliance. In April 2010, theWi-Fi Allianceannounced the inclusion of additional EAP[31]types to its WPA- and WPA2-Enterprise certification programs.[32]This was to ensure that WPA-Enterprise certified products can interoperate with one another.
As of 2010, the certification program includes the following EAP types:
802.1X clients and servers developed by specific firms may support other EAP types. This certification is an attempt for popular EAP types to interoperate; their failure to do so as of 2013 is one of the major issues preventing rollout of 802.1X on heterogeneous networks.
Commercial 802.1X servers include MicrosoftNetwork Policy ServerandJuniper NetworksSteelbelted RADIUS as well as Aradial Radius server.[34]FreeRADIUSis an open source 802.1X server.
WPA-Personal and WPA2-Personal remain vulnerable topassword crackingattacks if users rely on aweak password or passphrase. WPA passphrase hashes are seeded from the SSID name and its length;rainbow tablesexist for the top 1,000 network SSIDs and a multitude of common passwords, requiring only a quick lookup to speed up cracking WPA-PSK.[35]
Brute forcing of simple passwords can be attempted using theAircrack Suitestarting from the four-way authentication handshake exchanged during association or periodic re-authentication.[36][37][38][39][40]
WPA3 replaces cryptographic protocols susceptible to off-line analysis with protocols that require interaction with the infrastructure for each guessed password, supposedly placing temporal limits on the number of guesses.[15]However, design flaws in WPA3 enable attackers to plausibly launch brute-force attacks (see§ Dragonblood).
WPA and WPA2 do not provideforward secrecy, meaning that once an adversary discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK transmitted in the future and even in the past, which could be passively and silently collected by the attacker. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared with anyone in that place. In other words, WPA only protects from attackers who do not have access to the password. Because of that, it's safer to useTransport Layer Security(TLS) or similar on top of that for the transfer of any sensitive data. However, starting with WPA3, this issue has been addressed.[22]
In 2013, Mathy Vanhoef and Frank Piessens[41]significantly improved upon theWPA-TKIPattacks of Erik Tews and Martin Beck.[42][43]They demonstrated how to inject an arbitrary number of packets, with each packet containing at most 112 bytes of payload. This was demonstrated by implementing aport scanner, which can be executed against any client usingWPA-TKIP. Additionally, they showed how to decrypt arbitrary packets sent to a client. They mentioned this can be used to hijack aTCP connection, allowing an attacker to inject maliciousJavaScriptwhen the victim visits a website.
In contrast, the Beck-Tews attack could only decrypt short packets with mostly known content, such asARPmessages, and only allowed injection of 3 to 7 packets of at most 28 bytes. The Beck-Tews attack also requiresquality of service(as defined in802.11e) to be enabled, while the Vanhoef-Piessens attack does not. Neither attack leads to recovery of the shared session key between the client andAccess Point. The authors say using a short rekeying interval can prevent some attacks but not all, and strongly recommend switching fromTKIPto AES-basedCCMP.
Halvorsen and others show how to modify the Beck-Tews attack to allow injection of 3 to 7 packets having a size of at most 596 bytes.[44]The downside is that their attack requires substantially more time to execute: approximately 18 minutes and 25 seconds. In other work Vanhoef and Piessens showed that, when WPA is used to encrypt broadcast packets, their original attack can also be executed.[45]This is an important extension, as substantially more networks use WPA to protectbroadcast packets, than to protectunicast packets. The execution time of this attack is on average around 7 minutes, compared to the 14 minutes of the original Vanhoef-Piessens and Beck-Tews attack.
The vulnerabilities of TKIP are significant because WPA-TKIP had previously been held to be an extremely safe combination; indeed, WPA-TKIP is still a configuration option on a wide variety of wireless routing devices provided by many hardware vendors. A survey in 2013 showed that 71% still allow usage of TKIP, and 19% exclusively support TKIP.[41]
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with theWi-Fi Protected Setup(WPS) feature, regardless of which encryption method they use. Most recent models have this feature and enable it by default. Many consumer Wi-Fi device manufacturers had taken steps to eliminate the potential of weak passphrase choices by promoting alternative methods of automatically generating and distributing strong keys when users add a new wireless adapter or appliance to a network. These methods include pushing buttons on the devices or entering an 8-digitPIN.
The Wi-Fi Alliance standardized these methods as Wi-Fi Protected Setup; however, the PIN feature as widely implemented introduced a major new security flaw. The flaw allows a remote attacker to recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few hours.[46]Users have been urged to turn off the WPS feature,[47]although this may not be possible on some router models. Also, the PIN is written on a label on most Wi-Fi routers with WPS, which cannot be changed if compromised.
In 2018, the Wi-Fi Alliance introduced Wi-Fi Easy Connect[48]as a new alternative for the configuration of devices that lack sufficient user interface capabilities by allowing nearby devices to serve as an adequate UI for network provisioning purposes, thus mitigating the need for WPS.[49]
Several weaknesses have been found inMS-CHAPv2, some of which severely reduce the complexity of brute-force attacks, making them feasible with modern hardware. In 2012 the complexity of breaking MS-CHAPv2 was reduced to that of breaking a singleDESkey (work byMoxie Marlinspikeand Marsh Ray). Moxie advised: "Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else."[50]
Tunneled EAP methods using TTLS or PEAP which encrypt the MSCHAPv2 exchange are widely deployed to protect against exploitation of this vulnerability. However, prevalent WPA2 client implementations during the early 2000s were prone to misconfiguration by end users, or in some cases (e.g.Android), lacked any user-accessible way to properly configure validation of AAA server certificate CNs. This extended the relevance of the original weakness in MSCHAPv2 withinMiTMattack scenarios.[51]Under stricter compliance tests for WPA2 announced alongside WPA3, certified client software will be required to conform to certain behaviors surrounding AAA certificate validation.[15]
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared Group Temporal Key (GTK). It can be used to conduct man-in-the-middle anddenial-of-serviceattacks. However, it assumes that the attacker is already authenticated against the Access Point and thus in possession of the GTK.[52][53]
In 2016 it was shown that the WPA and WPA2 standards contain an insecurerandom number generator(RNG). Researchers showed that, if vendors implement the proposed RNG, an attacker is able to predict the group key (GTK) that is supposed to be randomly generated by theaccess point(AP). Additionally, they showed that possession of the GTK enables the attacker to inject any traffic into the network, and allows the attacker to decrypt unicast internet traffic transmitted over the wireless network. They demonstrated their attack against anAsusRT-AC51U router that uses theMediaTekout-of-tree drivers, which generate the GTK themselves, and showed the GTK can be recovered within two minutes or less. Similarly, they demonstrated the keys generated by Broadcom access daemons running on VxWorks 5 and later can be recovered in four minutes or less, which affects, for example, certain versions of Linksys WRT54G and certain Apple AirPort Extreme models. Vendors can defend against this attack by using a secure RNG. By doing so,Hostapdrunning on Linux kernels is not vulnerable against this attack and thus routers running typicalOpenWrtorLEDEinstallations do not exhibit this issue.[54]
In October 2017, details of theKRACK(Key Reinstallation Attack) attack on WPA2 were published.[55][56]The KRACK attack is believed to affect all variants of WPA and WPA2; however, the security implications vary between implementations, depending upon how individual developers interpreted a poorly specified part of the standard. Software patches can resolve the vulnerability but are not available for all devices.[57]KRACK exploits a weakness in the WPA2 4-Way Handshake, a critical process for generating encryption keys. Attackers can force multiple handshakes, manipulating key resets. By intercepting the handshake, they could decrypt network traffic without cracking encryption directly. This poses a risk, especially with sensitive data transmission.[58]
Manufacturers have released patches in response, but not all devices have received updates. Users are advised to keep their devices updated to mitigate such security risks. Regular updates are crucial for maintaining network security against evolving threats.[58]
The Dragonblood attacks exposed significant vulnerabilities in the Dragonfly handshake protocol used in WPA3 and EAP-pwd. These included side-channel attacks potentially revealing sensitive user information and implementation weaknesses in EAP-pwd and SAE. Concerns were also raised about the inadequate security in transitional modes supporting both WPA2 and WPA3. In response, security updates and protocol changes are being integrated into WPA3 and EAP-pwd to address these vulnerabilities and enhance overall Wi-Fi security.[59]
On May 11, 2021,FragAttacks, a set of new security vulnerabilities, were revealed, affecting Wi-Fi devices and enabling attackers within range to steal information or target devices. These include design flaws in the Wi-Fi standard, affecting most devices, and programming errors in Wi-Fi products, making almost all Wi-Fi products vulnerable. The vulnerabilities impact all Wi-Fi security protocols, including WPA3 and WEP. Exploiting these flaws is complex but programming errors in Wi-Fi products are easier to exploit. Despite improvements in Wi-Fi security, these findings highlight the need for continuous security analysis and updates. In response, security patches were developed, and users are advised to use HTTPS and install available updates for protection.
https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA-TKIP
Incryptography, theMerkle–Damgård constructionorMerkle–Damgård hash functionis a method of buildingcollision-resistantcryptographic hash functionsfrom collision-resistantone-way compression functions.[1]: 145This construction was used in the design of many popular hash algorithms such asMD5,SHA-1, andSHA-2.
The Merkle–Damgård construction was described inRalph Merkle'sPh.D.thesisin 1979.[2]Ralph Merkle andIvan Damgårdindependently proved that the structure is sound: that is, if an appropriatepadding schemeis used and the compression function iscollision-resistant, then the hash function will also be collision-resistant.[3][4]
The Merkle–Damgård hash function first applies anMD-compliant paddingfunction to create an input whose size is a multiple of a fixed number (e.g. 512 or 1024) — this is because compression functions cannot handle inputs of arbitrary size. The hash function then breaks the result into blocks of fixed size, and processes them one at a time with the compression function, each time combining a block of the input with the output of the previous round.[1]: 146In order to make the construction secure, Merkle and Damgård proposed that messages be padded with a padding that encodes the length of the original message. This is calledlength paddingorMerkle–Damgård strengthening.
In the diagram, the one-way compression function is denoted byf, and transforms two fixed length inputs to an output of the same size as one of the inputs. The algorithm starts with an initial value, theinitialization vector(IV). The IV is a fixed value (algorithm- or implementation-specific). For each message block, the compression (or compacting) functionftakes the result so far, combines it with the message block, and produces an intermediate result. The last block is padded with zeros as needed and bits representing the length of the entire message are appended. (See below for a detailed length-padding example.)
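The iteration just described is short enough to write out directly. The sketch below builds a toy Merkle–Damgård hash around a placeholder compression function (deliberately simplistic and not intended to be secure) and uses the length-padding scheme discussed later; it is meant only to make the chaining structure concrete.

```python
# Toy Merkle–Damgård construction; the compression function here is a stand-in, not a secure design.
import hashlib

BLOCK = 64  # block size in bytes

def compress(chaining: bytes, block: bytes) -> bytes:
    """Placeholder one-way compression function f(h, m) -> h' (32 bytes)."""
    return hashlib.sha256(chaining + block).digest()

def md_pad(message: bytes) -> bytes:
    """Merkle–Damgård strengthening: append 0x80, zero bytes, then the 8-byte bit length."""
    bit_length = 8 * len(message)
    message += b"\x80"
    message += b"\x00" * ((-len(message) - 8) % BLOCK)
    return message + bit_length.to_bytes(8, "big")

def md_hash(message: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    state = iv                                 # initialization vector
    padded = md_pad(message)
    for i in range(0, len(padded), BLOCK):     # process one block at a time
        state = compress(state, padded[i:i + BLOCK])
    return state                               # a finalisation function could be applied here

print(md_hash(b"HashInput").hex())
```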
To harden the hash further, the last result is then sometimes fed through afinalisation function. The finalisation function can have several purposes such as compressing a bigger internal state (the last result) into a smaller output hash size or to guarantee a better mixing andavalanche effecton the bits in the hash sum. The finalisation function is often built by using the compression function.[citation needed](Note that in some documents a different terminology is used: the act of length padding is called "finalisation".[citation needed])
The popularity of this construction is due to the fact, proven byMerkleandDamgård, that if the one-way compression functionfiscollision resistant, then so is the hash function constructed using it. Unfortunately, this construction also has several undesirable properties: it is vulnerable tolength extension attacks; multicollisions (many messages with the same hash) can be found with little more effort than a single collision; and second-preimage and herding attacks against long messages are considerably cheaper than the output length would suggest.
Due to several structural weaknesses of Merkle–Damgård construction, especially the length extension problem and multicollision attacks,Stefan Lucksproposed the use of the wide-pipe hash[11]instead of Merkle–Damgård construction. The wide-pipe hash is very similar to the Merkle–Damgård construction but has a larger internal state size, meaning that the bit-length that is internally used is larger than the output bit-length. If a hash ofnbits is desired, then the compression functionftakes2nbits of chaining value andmbits of the message and compresses this to an output of2nbits.
Therefore, in a final step, a second compression function compresses the last internal hash value (2nbits) to the final hash value (nbits). This can be done as simply as discarding half of the last2n-bit output. SHA-512/224 and SHA-512/256 take this form since they are derived from a variant of SHA-512. SHA-384 and SHA-224 are similarly derived from SHA-512 and SHA-256, respectively, but the width of their pipe is much less than2n.
It has been demonstrated byMridul NandiandSouradyuti Paulthat a wide-pipe hash function can be made approximately twice as fast if the wide-pipe state can be divided in half in the following manner: one half is input to the succeeding compression function while the other half is combined with the output of that compression function.[12]
The main idea of the hash construction is to forward half of the previous chaining value to be XORed into the output of the compression function. In so doing the construction takes in longer message blocks every iteration than the original wide pipe. Using the same functionfas before, it takesn-bit chaining values andn+mbits of the message. However, the price to pay is the extra memory used in the construction for feed-forward.
The MD construction is inherently sequential. There is aparallel algorithm[13]which constructs a collision-resistant hash function from a collision-resistant compression function. The hash function PARSHA-256[14]was designed using the parallel algorithm and the compression function of SHA-256.
As mentioned in the introduction, the padding scheme used in the Merkle–Damgård construction must be chosen carefully to ensure the security of the scheme.Mihir Bellaregives sufficient conditions for a padding scheme to possess to ensure that the MD construction is secure: it suffices that the scheme be "MD-compliant" (the original length-padding scheme used by Merkle is an example of MD-compliant padding).[1]: 145The conditions are:
Here,|X|denotes the length ofX. With these conditions in place, a collision in the MD hash function existsexactly whenthere is a collision in the underlying compression function. Therefore, the Merkle–Damgård construction is provably secure when the underlying compression function is secure.[1]: 147
To be able to feed the message to the compression function, the last block must be padded with constant data (generally with zeroes) to a full block. For example, suppose the message to be hashed is "HashInput" (9 octet string, 0x48617368496e707574 inASCII) and the block size of the compression function is 8 bytes (64 bits). We get two blocks, with the second padded out with zero octets: 48 61 73 68 49 6e 70 75 and 74 00 00 00 00 00 00 00.
This implies that other messages having the same content but ending with additional zeros at the end will result in the same hash value. In the above example, another almost-identical message (0x48617368496e7075 7400) will generate the same hash value as the original message "HashInput" above. In other words, any message having extra zeros at the end makes it indistinguishable from the one without them. To prevent this situation, the first bit of the first padding octet is changed to "1" (0x80), yielding: 48 61 73 68 49 6e 70 75 and 74 80 00 00 00 00 00 00.
However, most common implementations use a fixed bit-size (generally 64 or 128 bits in modern algorithms) at a fixed position at the end of the last block for inserting the message length value (seeSHA-1 pseudocode). Further improvement can be made by inserting the length value in the last block if there is enough space. Doing so avoids having an extra block for the message length. If we assume the length value is encoded on 5 bytes (40 bits), the length field occupies the final bytes of the last block, as computed in the sketch below.
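The three padding variants above can be reproduced mechanically. The sketch below pads the 9-octet message "HashInput" into 8-byte blocks: first with plain zero padding, then with the 0x80 marker, then with a 5-byte length field. Whether the length field counts bits or octets is a per-design choice; the bit count (72 = 0x48) is assumed here.

```python
# Worked padding example for "HashInput" with an 8-byte block size.
msg = b"HashInput"   # 9 octets
BLOCK = 8

def blocks(data: bytes):
    return [data[i:i + BLOCK].hex() for i in range(0, len(data), BLOCK)]

# 1. Plain zero padding (ambiguous: trailing zeros cannot be told apart from padding).
zero_padded = msg + b"\x00" * ((-len(msg)) % BLOCK)
print(blocks(zero_padded))     # ['48617368496e7075', '7400000000000000']

# 2. A 0x80 marker before the zeros makes the padding unambiguous.
marked = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK)
print(blocks(marked))          # ['48617368496e7075', '7480000000000000']

# 3. Merkle–Damgård strengthening: also encode the message length (assumed here
#    to be in bits, 72 = 0x48) in the last 5 bytes of the final block.
length_field = (8 * len(msg)).to_bytes(5, "big")
strengthened = msg + b"\x80"
strengthened += b"\x00" * ((-len(strengthened) - 5) % BLOCK) + length_field
print(blocks(strengthened))    # ['48617368496e7075', '7480000000000048']
```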
Storing the message length out-of-band in metadata, or otherwise embedded at the start of the message, is an effective mitigation of thelength extension attack[citation needed], as long as invalidation of either the message length or the checksum is treated as a failure of the integrity check.
https://en.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_construction
Innumber theory, aCarmichael numberis acomposite numbern{\displaystyle n}which inmodular arithmeticsatisfies thecongruence relation:{\displaystyle b^{n}\equiv b{\pmod {n}}}
for all integersb{\displaystyle b}.[1]The relation may also be expressed[2]in the form:{\displaystyle b^{n-1}\equiv 1{\pmod {n}}}
for all integersb{\displaystyle b}that arerelatively primeton{\displaystyle n}. They areinfinitein number.[3]
They constitute the comparatively rare instances where the strict converse ofFermat's Little Theoremdoes not hold. This fact precludes the use of that theorem as an absolute test ofprimality.[4]
The Carmichael numbers form the subsetK1of theKnödel numbers.
The Carmichael numbers were named after the American mathematicianRobert CarmichaelbyNicolaas Beeger, in 1950.Øystein Orehad referred to them in 1948 as numbers with the "Fermat property", or "Fnumbers" for short.[5]
Fermat's little theoremstates that ifp{\displaystyle p}is aprime number, then for anyintegerb{\displaystyle b}, the numberbp−b{\displaystyle b^{p}-b}is an integer multiple ofp{\displaystyle p}. Carmichael numbers are composite numbers which have the same property. Carmichael numbers are also calledFermat pseudoprimesorabsolute Fermat pseudoprimes. A Carmichael number will pass aFermat primality testto every baseb{\displaystyle b}relatively prime to the number, even though it is not actually prime.
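This behaviour is easy to observe for the smallest Carmichael number, 561 = 3 · 11 · 17. The short check below shows that the Fermat test accepts 561 for every base coprime to it, even though 561 is composite.

```python
# 561 passes the Fermat test for every base coprime to it, despite being composite.
from math import gcd

n = 561  # = 3 * 11 * 17
all_coprime_bases_pass = all(pow(b, n - 1, n) == 1
                             for b in range(2, n) if gcd(b, n) == 1)
print(all_coprime_bases_pass)   # True: every coprime base is a "Fermat liar" for 561

# A base sharing a factor with 561 does expose it, e.g. b = 3:
print(pow(3, n - 1, n))         # not 1, so this base reveals 561 as composite
```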
This makes tests based on Fermat's Little Theorem less effective thanstrong probable primetests such as theBaillie–PSW primality testand theMiller–Rabin primality test.
However, no Carmichael number is either anEuler–Jacobi pseudoprimeor astrong pseudoprimeto every base relatively prime to it[6]so, in theory, either an Euler or a strong probable prime test could prove that a Carmichael number is, in fact, composite.
Arnault[7]gives a 397-digit Carmichael numberN{\displaystyle N}that is astrongpseudoprime to allprimebases less than 307:{\displaystyle N=p\cdot (313(p-1)+1)\cdot (353(p-1)+1),}wherep{\displaystyle p}is a 131-digit prime.p{\displaystyle p}is the smallest prime factor ofN{\displaystyle N}, so this Carmichael number is also a (not necessarily strong) pseudoprime to all bases less thanp{\displaystyle p}.
As numbers become larger, Carmichael numbers become increasingly rare. For example, there are 20,138,200 Carmichael numbers between 1 and 10^{21} (approximately one in 50 trillion (5·10^{13}) numbers).[8]
An alternative and equivalent definition of Carmichael numbers is given byKorselt's criterion: a positive composite integern{\displaystyle n}is a Carmichael number if and only ifn{\displaystyle n}issquare-free, and for allprime divisorsp{\displaystyle p}ofn{\displaystyle n}, it is true thatp−1∣n−1{\displaystyle p-1\mid n-1}.
It follows from this theorem that all Carmichael numbers areodd, since anyevencomposite number that is square-free (and hence has only one prime factor of two) will have at least one odd prime factor, and thusp−1∣n−1{\displaystyle p-1\mid n-1}results in an even dividing an odd, a contradiction. (The oddness of Carmichael numbers also follows from the fact that−1{\displaystyle -1}is aFermat witnessfor any even composite number.)
From the criterion it also follows that Carmichael numbers arecyclic.[9][10]Additionally, it follows that there are no Carmichael numbers with exactly two prime divisors.
The first seven Carmichael numbers, from 561 to 8911, were all found by the Czech mathematicianVáclav Šimerkain 1885[11](thus preceding not just Carmichael but also Korselt, although Šimerka did not find anything like Korselt's criterion).[12]His work, published in Czech scientific journalČasopis pro pěstování matematiky a fysiky, however, remained unnoticed.
Korselt was the first who observed the basic properties of Carmichael numbers, but he did not give any examples.
That 561 is a Carmichael number can be seen with Korselt's criterion. Indeed,561=3⋅11⋅17{\displaystyle 561=3\cdot 11\cdot 17}is square-free and2∣560{\displaystyle 2\mid 560},10∣560{\displaystyle 10\mid 560}and16∣560{\displaystyle 16\mid 560}.
The next six Carmichael numbers are (sequenceA002997in theOEIS): 1105, 1729, 2465, 2821, 6601, and 8911.
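Korselt's criterion is also straightforward to check by machine. The sketch below factors a number, verifies square-freeness and the divisibility condition, and, run over a range, recovers exactly the seven Carmichael numbers listed above.

```python
# Korselt's criterion: n is a Carmichael number iff n is composite, square-free,
# and p - 1 divides n - 1 for every prime p dividing n.
def prime_factors(n: int) -> dict:
    p, factors = 2, {}
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n: int) -> bool:
    factors = prime_factors(n)
    if len(factors) < 2:                      # excludes primes and prime powers
        return False
    if any(e > 1 for e in factors.values()):  # must be square-free
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

print(is_carmichael(561))                                # True
print([n for n in range(2, 10000) if is_carmichael(n)])  # [561, 1105, 1729, 2465, 2821, 6601, 8911]
```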
In 1910, Carmichael himself[13]also published the smallest such number, 561, and the numbers were later named after him.
Jack Chernick[14]proved a theorem in 1939 which can be used to construct asubsetof Carmichael numbers. The number(6k+1)(12k+1)(18k+1){\displaystyle (6k+1)(12k+1)(18k+1)}is a Carmichael number if its three factors are all prime. Whether this formula produces an infinite quantity of Carmichael numbers is an open question (though it is implied byDickson's conjecture).
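Chernick's construction is easy to search by computer. The sketch below scans small values of k, using a simple trial-division primality check (adequate at this size), and prints the Carmichael numbers the formula yields.

```python
# Chernick: (6k+1)(12k+1)(18k+1) is a Carmichael number when all three factors are prime.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for k in range(1, 60):
    a, b, c = 6 * k + 1, 12 * k + 1, 18 * k + 1
    if is_prime(a) and is_prime(b) and is_prime(c):
        print(k, a * b * c)   # k = 1 gives 7 * 13 * 19 = 1729, the Hardy–Ramanujan number
```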
Paul Erdősheuristically argued there should be infinitely many Carmichael numbers. In 1994W. R. (Red) Alford,Andrew GranvilleandCarl Pomeranceused a bound onOlson's constantto show that there really do exist infinitely many Carmichael numbers. Specifically, they showed that for sufficiently largen{\displaystyle n}, there are at leastn2/7{\displaystyle n^{2/7}}Carmichael numbers between 1 andn{\displaystyle n}.[3]
Thomas Wrightproved that ifa{\displaystyle a}andm{\displaystyle m}are relatively prime,
then there are infinitely many Carmichael numbers in thearithmetic progressiona+k⋅m{\displaystyle a+k\cdot m},
wherek=1,2,…{\displaystyle k=1,2,\ldots }.[15]
Löh and Niebuhr in 1992 found some very large Carmichael numbers, including one with 1,101,518 factors and over 16 million digits.
This has been improved to 10,333,229,505 prime factors and 295,486,761,787 digits,[16]so the largest known Carmichael number is much greater than thelargest known prime.
Carmichael numbers have at least three positive prime factors. The first Carmichael numbers withk=3,4,5,…{\displaystyle k=3,4,5,\ldots }prime factors are (sequenceA006931in theOEIS):
The first Carmichael numbers with 4 prime factors are (sequenceA074379in theOEIS):
The second Carmichael number (1105) can be expressed as the sum of two squares in more ways than any smaller number. The third Carmichael number (1729) is theHardy-Ramanujan Number: the smallest number that can be expressed as thesum of two cubes(of positive numbers) in two different ways.
LetC(X){\displaystyle C(X)}denote the number of Carmichael numbers less than or equal toX{\displaystyle X}. The distribution of Carmichael numbers by powers of 10 (sequenceA055553in theOEIS):[8]
In 1953,Knödelproved theupper bound:{\displaystyle C(X)<X\exp \left(-k_{1}{\sqrt {\log X\log \log X}}\right)}
for some constantk1{\displaystyle k_{1}}.
In 1956, Erdős improved the bound to{\displaystyle C(X)<X\exp \left(-k_{2}{\frac {\log X\log \log \log X}{\log \log X}}\right)}
for some constantk2{\displaystyle k_{2}}.[17]He further gave aheuristic argumentsuggesting that this upper bound should be close to the true growth rate ofC(X){\displaystyle C(X)}.
In the other direction,Alford,GranvilleandPomeranceproved in 1994[3]that forsufficiently largeX,{\displaystyle C(X)>X^{2/7}.}
In 2005, this bound was further improved byHarman[18]to{\displaystyle C(X)>X^{0.332},}
who subsequently improved the exponent to0.7039⋅0.4736=0.33336704>1/3{\displaystyle 0.7039\cdot 0.4736=0.33336704>1/3}.[19]
Regarding the asymptotic distribution of Carmichael numbers, there have been several conjectures. In 1956, Erdős[17]conjectured that there wereX1−o(1){\displaystyle X^{1-o(1)}}Carmichael numbers forXsufficiently large. In 1981, Pomerance[20]sharpened Erdős' heuristic arguments to conjecture that there are at least{\displaystyle X\cdot L(X)^{-1+o(1)}}
Carmichael numbers up toX{\displaystyle X}, whereL(x)=exp(logxlogloglogxloglogx){\displaystyle L(x)=\exp {\left({\frac {\log x\log \log \log x}{\log \log x}}\right)}}.
However, inside current computational ranges (such as the count of Carmichael numbers performed by Goutier(sequenceA055553in theOEIS) up to 10^{22}), these conjectures are not yet borne out by the data; empirically, the exponent isC(X)≈X0.35{\displaystyle C(X)\approx X^{0.35}}for the highest available count (C(X) = 49679870 forX= 10^{22}).
In 2021,Daniel Larsenproved an analogue ofBertrand's postulatefor Carmichael numbers first conjectured by Alford, Granville, and Pomerance in 1994.[4][21]Using techniques developed byYitang ZhangandJames Maynardto establish results concerningsmall gaps between primes, his work yielded the much stronger statement that, for anyδ>0{\displaystyle \delta >0}and sufficiently largex{\displaystyle x}in terms ofδ{\displaystyle \delta }, there will always be at least
Carmichael numbers betweenx{\displaystyle x}and
The notion of Carmichael number generalizes to a Carmichael ideal in anynumber fieldK{\displaystyle K}. For any nonzeroprime idealp{\displaystyle {\mathfrak {p}}}inOK{\displaystyle {\mathcal {O}}_{K}}, we haveαN(p)≡αmodp{\displaystyle \alpha ^{{\rm {N}}({\mathfrak {p}})}\equiv \alpha {\bmod {\mathfrak {p}}}}for allα{\displaystyle \alpha }inOK{\displaystyle {\mathcal {O}}_{K}}, whereN(p){\displaystyle {\rm {N}}({\mathfrak {p}})}is the norm of theidealp{\displaystyle {\mathfrak {p}}}. (This generalizes Fermat's little theorem, thatmp≡mmodp{\displaystyle m^{p}\equiv m{\bmod {p}}}for all integersm{\displaystyle m}whenp{\displaystyle p}is prime.) Call a nonzero ideala{\displaystyle {\mathfrak {a}}}inOK{\displaystyle {\mathcal {O}}_{K}}Carmichael if it is not a prime ideal andαN(a)≡αmoda{\displaystyle \alpha ^{{\rm {N}}({\mathfrak {a}})}\equiv \alpha {\bmod {\mathfrak {a}}}}for allα∈OK{\displaystyle \alpha \in {\mathcal {O}}_{K}}, whereN(a){\displaystyle {\rm {N}}({\mathfrak {a}})}is the norm of the ideala{\displaystyle {\mathfrak {a}}}. WhenK{\displaystyle K}isQ{\displaystyle \mathbf {Q} }, the ideala{\displaystyle {\mathfrak {a}}}isprincipal, and if we leta{\displaystyle a}be its positive generator then the ideala=(a){\displaystyle {\mathfrak {a}}=(a)}is Carmichael exactly whena{\displaystyle a}is a Carmichael number in the usual sense.
WhenK{\displaystyle K}is larger than therationalsit is easy to write down Carmichael ideals inOK{\displaystyle {\mathcal {O}}_{K}}: for any prime numberp{\displaystyle p}that splits completely inK{\displaystyle K}, the principal idealpOK{\displaystyle p{\mathcal {O}}_{K}}is a Carmichael ideal. Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals inOK{\displaystyle {\mathcal {O}}_{K}}. For example, ifp{\displaystyle p}is any prime number that is 1 mod 4, the ideal(p){\displaystyle (p)}in theGaussian integersZ[i]{\displaystyle \mathbb {Z} [i]}is a Carmichael ideal.
Both prime and Carmichael numbers satisfy the following equality:
A positive composite integern{\displaystyle n}is a Lucas–Carmichael number if and only ifn{\displaystyle n}issquare-free, and for allprime divisorsp{\displaystyle p}ofn{\displaystyle n}, it is true thatp+1∣n+1{\displaystyle p+1\mid n+1}. The first Lucas–Carmichael numbers are:
Quasi–Carmichael numbers are squarefree composite numbersn{\displaystyle n}with the property that for every prime factorp{\displaystyle p}ofn{\displaystyle n},p+b{\displaystyle p+b}dividesn+b{\displaystyle n+b}positively withb{\displaystyle b}being any integer besides 0. Ifb=−1{\displaystyle b=-1}, these are Carmichael numbers, and ifb=1{\displaystyle b=1}, these are Lucas–Carmichael numbers. The first Quasi–Carmichael numbers are:
Ann-Knödel numberfor a givenpositive integernis acomposite numbermwith the property that eachi<m{\displaystyle i<m}coprimetomsatisfiesim−n≡1(modm){\displaystyle i^{m-n}\equiv 1{\pmod {m}}}. Then=1{\displaystyle n=1}case are Carmichael numbers.
Carmichael numbers can be generalized using concepts ofabstract algebra.
The above definition states that a composite integernis Carmichael
precisely when thenth-power-raising functionpnfrom theringZnof integers modulonto itself is the identity function. The identity is the onlyZn-algebraendomorphismonZnso we can restate the definition as asking thatpnbe an algebra endomorphism ofZn.
As above,pnsatisfies the same property whenevernis prime.
Thenth-power-raising functionpnis also defined on anyZn-algebraA. A theorem states thatnis prime if and only if all such functionspnare algebra endomorphisms.
In-between these two conditions lies the definition ofCarmichael number of order mfor any positive integermas any composite numbernsuch thatpnis an endomorphism on everyZn-algebra that can be generated asZn-modulebymelements. Carmichael numbers of order 1 are just the ordinary Carmichael numbers.
According to Howe, 17 · 31 · 41 · 43 · 89 · 97 · 167 · 331 is an order 2 Carmichael number. This product is equal to 443,372,888,629,441.[22]
Korselt's criterion can be generalized to higher-order Carmichael numbers, as shown by Howe.
A heuristic argument, given in the same paper, appears to suggest that there are infinitely many Carmichael numbers of orderm, for anym. However, not a single Carmichael number of order 3 or above is known.
https://en.wikipedia.org/wiki/Carmichael_number
Inmathematics, agroupis asetwith abinary operationthat satisfies the following constraints: the operation isassociative, it has anidentity element, and every element of the set has aninverse element.
Manymathematical structuresare groups endowed with other properties. For example, theintegerswith theaddition operationform aninfinitegroup that isgenerated by a single elementcalled1{\displaystyle 1}(these properties fully characterize the integers).
The concept of a group was elaborated for handling, in a unified way, many mathematical structures such as numbers,geometric shapesandpolynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics.[1][2]
Ingeometry, groups arise naturally in the study ofsymmetriesandgeometric transformations: The symmetries of an object form a group, called thesymmetry groupof the object, and the transformations of a given type form a general group.Lie groupsappear in symmetry groups in geometry, and also in theStandard Modelofparticle physics. ThePoincaré groupis a Lie group consisting of the symmetries ofspacetimeinspecial relativity.Point groupsdescribesymmetry in molecular chemistry.
The concept of a group arose in the study ofpolynomial equations, starting withÉvariste Galoisin the 1830s, who introduced the termgroup(French:groupe) for the symmetry group of therootsof an equation, now called aGalois group. After contributions from other fields such asnumber theoryand geometry, the group notion was generalized and firmly established around 1870. Moderngroup theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such assubgroups,quotient groupsandsimple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view ofrepresentation theory(that is, through therepresentations of the group) and ofcomputational group theory. A theory has been developed forfinite groups, which culminated with theclassification of finite simple groups, completed in 2004. Since the mid-1980s,geometric group theory, which studiesfinitely generated groupsas geometric objects, has become an active area in group theory.
One of the more familiar groups is the set ofintegersZ={…,−4,−3,−2,−1,0,1,2,3,4,…}{\displaystyle \mathbb {Z} =\{\ldots ,-4,-3,-2,-1,0,1,2,3,4,\ldots \}}together withaddition.[3]For any two integersa{\displaystyle a}andb{\displaystyle b}, thesuma+b{\displaystyle a+b}is also an integer; thisclosureproperty says that+{\displaystyle +}is abinary operationonZ{\displaystyle \mathbb {Z} }. The following properties of integer addition serve as a model for the group axioms in the definition below.
The integers, together with the operation+{\displaystyle +}, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed.
The axioms for a group are short and natural ... Yet somehow hidden behind these axioms is themonster simple group, a huge and extraordinary mathematical object, which appears to rely on numerous bizarre coincidences to exist. The axioms for groups give no obvious hint that anything like this exists.
A group is a non-emptysetG{\displaystyle G}together with abinary operationonG{\displaystyle G}, here denoted "⋅{\displaystyle \cdot }", that combines any twoelementsa{\displaystyle a}andb{\displaystyle b}ofG{\displaystyle G}to form an element ofG{\displaystyle G}, denoteda⋅b{\displaystyle a\cdot b}, such that the following three requirements, known asgroup axioms, are satisfied:[5][6][7][a]associativity ({\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)}for alla{\displaystyle a},b{\displaystyle b},c{\displaystyle c}inG{\displaystyle G}); existence of anidentity elemente{\displaystyle e}with{\displaystyle e\cdot a=a}and{\displaystyle a\cdot e=a}for everya{\displaystyle a}inG{\displaystyle G}; and existence, for eacha{\displaystyle a}inG{\displaystyle G}, of aninverse elementb{\displaystyle b}with{\displaystyle a\cdot b=e}and{\displaystyle b\cdot a=e}.
Formally, a group is anordered pairof a set and a binary operation on this set that satisfies thegroup axioms. The set is called theunderlying setof the group, and the operation is called thegroup operationor thegroup law.
A group and its underlying set are thus two differentmathematical objects. To avoid cumbersome notation, it is common toabuse notationby using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation.
For example, consider the set ofreal numbersR{\displaystyle \mathbb {R} }, which has the operations of additiona+b{\displaystyle a+b}andmultiplicationab{\displaystyle ab}. Formally,R{\displaystyle \mathbb {R} }is a set,(R,+){\displaystyle (\mathbb {R} ,+)}is a group, and(R,+,⋅){\displaystyle (\mathbb {R} ,+,\cdot )}is afield. But it is common to writeR{\displaystyle \mathbb {R} }to denote any of these three objects.
Theadditive groupof the fieldR{\displaystyle \mathbb {R} }is the group whose underlying set isR{\displaystyle \mathbb {R} }and whose operation is addition. Themultiplicative groupof the fieldR{\displaystyle \mathbb {R} }is the groupR×{\displaystyle \mathbb {R} ^{\times }}whose underlying set is the set of nonzero real numbersR∖{0}{\displaystyle \mathbb {R} \smallsetminus \{0\}}and whose operation is multiplication.
More generally, one speaks of anadditive groupwhenever the group operation is notated as addition; in this case, the identity is typically denoted0{\displaystyle 0}, and the inverse of an elementx{\displaystyle x}is denoted−x{\displaystyle -x}. Similarly, one speaks of amultiplicative groupwhenever the group operation is notated as multiplication; in this case, the identity is typically denoted1{\displaystyle 1}, and the inverse of an elementx{\displaystyle x}is denotedx−1{\displaystyle x^{-1}}. In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition,ab{\displaystyle ab}instead ofa⋅b{\displaystyle a\cdot b}.
The definition of a group does not require thata⋅b=b⋅a{\displaystyle a\cdot b=b\cdot a}for all elementsa{\displaystyle a}andb{\displaystyle b}inG{\displaystyle G}. If this additional condition holds, then the operation is said to becommutative, and the group is called anabelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used.
Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements arefunctions, the operation is oftenfunction compositionf∘g{\displaystyle f\circ g}; then the identity may be denoted id. In the more specific cases ofgeometric transformationgroups,symmetrygroups,permutation groups, andautomorphism groups, the symbol∘{\displaystyle \circ }is often omitted, as for multiplicative groups. Many other variants of notation may be encountered.
Two figures in theplanearecongruentif one can be changed into the other using a combination ofrotations,reflections, andtranslations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are calledsymmetries. Asquarehas eight symmetries. These are:
id{\displaystyle \mathrm {id} }(the identity, leaving everything unchanged)
r1{\displaystyle r_{1}}(rotation by 90° clockwise)
r2{\displaystyle r_{2}}(rotation by 180°)
r3{\displaystyle r_{3}}(rotation by 270° clockwise)
fv{\displaystyle f_{\mathrm {v} }}(vertical reflection)
fh{\displaystyle f_{\mathrm {h} }}(horizontal reflection)
fd{\displaystyle f_{\mathrm {d} }}(diagonal reflection)
fc{\displaystyle f_{\mathrm {c} }}(counter-diagonal reflection)
These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example,r1{\displaystyle r_{1}}sends a point to its rotation 90° clockwise around the square's center, andfh{\displaystyle f_{\mathrm {h} }}sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called thedihedral groupof degree four, denotedD4{\displaystyle \mathrm {D} _{4}}. The underlying set of the group is the above set of symmetries, and the group operation is function composition.[8]Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing firsta{\displaystyle a}and thenb{\displaystyle b}is written symbolicallyfrom right to leftasb∘a{\displaystyle b\circ a}("apply the symmetryb{\displaystyle b}after performing the symmetrya{\displaystyle a}"). This is the usual notation for composition of functions.
ACayley tablelists the results of all such compositions possible. For example, rotating by 270° clockwise (r3{\displaystyle r_{3}}) and then reflecting horizontally (fh{\displaystyle f_{\mathrm {h} }}) is the same as performing a reflection along the diagonal (fd{\displaystyle f_{\mathrm {d} }}). Using the above symbols, highlighted in blue in the Cayley table:fh∘r3=fd.{\displaystyle f_{\mathrm {h} }\circ r_{3}=f_{\mathrm {d} }.}
Given this set of symmetries and the described operation, the group axioms can be understood as follows.
Binary operation: Composition is a binary operation. That is,a∘b{\displaystyle a\circ b}is a symmetry for any two symmetriesa{\displaystyle a}andb{\displaystyle b}. For example,r3∘fh=fc,{\displaystyle r_{3}\circ f_{\mathrm {h} }=f_{\mathrm {c} },}that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal (fc{\displaystyle f_{\mathrm {c} }}). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the Cayley table.
Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elementsa{\displaystyle a},b{\displaystyle b}andc{\displaystyle c}ofD4{\displaystyle \mathrm {D} _{4}}, there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first composea{\displaystyle a}andb{\displaystyle b}into a single symmetry, then to compose that symmetry withc{\displaystyle c}. The other way is to first composeb{\displaystyle b}andc{\displaystyle c}, then to compose the resulting symmetry witha{\displaystyle a}. These two ways must always give the same result, that is,(a∘b)∘c=a∘(b∘c),{\displaystyle (a\circ b)\circ c=a\circ (b\circ c),}For example,(fd∘fv)∘r2=fd∘(fv∘r2){\displaystyle (f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}=f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})}can be checked using the Cayley table:(fd∘fv)∘r2=r3∘r2=r1fd∘(fv∘r2)=fd∘fh=r1.{\displaystyle {\begin{aligned}(f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}&=r_{3}\circ r_{2}=r_{1}\\f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})&=f_{\mathrm {d} }\circ f_{\mathrm {h} }=r_{1}.\end{aligned}}}
Identity element: The identity element isid{\displaystyle \mathrm {id} }, as it does not change any symmetrya{\displaystyle a}when composed with it either on the left or on the right.
Inverse element: Each symmetry has an inverse:id{\displaystyle \mathrm {id} }, the reflectionsfh{\displaystyle f_{\mathrm {h} }},fv{\displaystyle f_{\mathrm {v} }},fd{\displaystyle f_{\mathrm {d} }},fc{\displaystyle f_{\mathrm {c} }}and the 180° rotationr2{\displaystyle r_{2}}are their own inverse, because performing them twice brings the square back to its original orientation. The rotationsr3{\displaystyle r_{3}}andr1{\displaystyle r_{1}}are each other's inverses, because rotating 90° and then by 270° (or vice versa) yields a rotation of 360° which leaves the square unchanged. This is easily verified on the table.
In contrast to the group of integers above, where the order of the operation is immaterial, it does matter inD4{\displaystyle \mathrm {D} _{4}}, as, for example,fh∘r1=fc{\displaystyle f_{\mathrm {h} }\circ r_{1}=f_{\mathrm {c} }}butr1∘fh=fd{\displaystyle r_{1}\circ f_{\mathrm {h} }=f_{\mathrm {d} }}. In other words,D4{\displaystyle \mathrm {D} _{4}}is not abelian.
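The identities checked against the Cayley table above can also be verified by modelling each symmetry as a permutation of the four corner positions. The corner numbering below is one illustrative choice (any consistent labelling works); composition is applied right to left, as in the text.

```python
# D4 symmetries as permutations of corner positions 0..3 (one illustrative numbering).
def compose(b, a):
    """b after a: apply a first, then b (right-to-left, as in the text)."""
    return tuple(b[a[p]] for p in range(4))

id_ = (0, 1, 2, 3)
r1  = (1, 2, 3, 0)   # rotation by 90 degrees
r2  = (2, 3, 0, 1)   # rotation by 180 degrees
r3  = (3, 0, 1, 2)   # rotation by 270 degrees
fh  = (1, 0, 3, 2)   # horizontal reflection
fv  = (3, 2, 1, 0)   # vertical reflection
fd  = (2, 1, 0, 3)   # diagonal reflection
fc  = (0, 3, 2, 1)   # counter-diagonal reflection

assert compose(fh, r3) == fd                 # f_h o r_3 = f_d  (Cayley table example)
assert compose(r3, fh) == fc                 # r_3 o f_h = f_c  (closure example)
assert compose(compose(fd, fv), r2) == compose(fd, compose(fv, r2)) == r1  # associativity instance
assert compose(r1, r3) == compose(r3, r1) == id_                           # r_1 and r_3 are inverses
assert compose(fh, r1) != compose(r1, fh)    # D4 is not abelian
print("all identities from the text hold in this permutation model")
```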
The modern concept of anabstract groupdeveloped out of several fields of mathematics.[9][10][11]The original motivation for group theory was the quest for solutions ofpolynomial equationsof degree higher than 4. The 19th-century French mathematicianÉvariste Galois, extending prior work ofPaolo RuffiniandJoseph-Louis Lagrange, gave a criterion for thesolvabilityof a particular polynomial equation in terms of thesymmetry groupof itsroots(solutions). The elements of such aGalois groupcorrespond to certainpermutationsof the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously.[12][13]More general permutation groups were investigated in particular byAugustin Louis Cauchy.Arthur Cayley'sOn the theory of groups, as depending on the symbolic equationθn=1{\displaystyle \theta ^{n}=1}(1854) gives the first abstract definition of afinite group.[14]
Geometry was a second field in which groups were used systematically, especially symmetry groups as part ofFelix Klein's 1872Erlangen program.[15]After novel geometries such ashyperbolicandprojective geometryhad emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas,Sophus Liefounded the study ofLie groupsin 1884.[16]
The third field contributing to group theory wasnumber theory. Certain abelian group structures had been used implicitly inCarl Friedrich Gauss's number-theoretical workDisquisitiones Arithmeticae(1798), and more explicitly byLeopold Kronecker.[17]In 1847,Ernst Kummermade early attempts to proveFermat's Last Theoremby developinggroups describing factorizationintoprime numbers.[18]
The convergence of these various sources into a uniform theory of groups started withCamille Jordan'sTraité des substitutions et des équations algébriques(1870).[19]Walther von Dyck(1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time.[20]Around the turn of the 20th century, groups gained wide recognition through the pioneering work ofFerdinand Georg FrobeniusandWilliam Burnside(who worked onrepresentation theoryof finite groups),Richard Brauer'smodular representation theoryandIssai Schur's papers.[21]The theory of Lie groups, and more generallylocally compact groups, was studied byHermann Weyl,Élie Cartanand many others.[22]Itsalgebraiccounterpart, the theory ofalgebraic groups, was first shaped byClaude Chevalley(from the late 1930s) and later by the work ofArmand BorelandJacques Tits.[23]
TheUniversity of Chicago's 1960–61 Group Theory Year brought together group theorists such asDaniel Gorenstein,John G. ThompsonandWalter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to theclassification of finite simple groups, with the final step taken byAschbacherand Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length ofproofand number of researchers. Research concerning this classification proof is ongoing.[24]Group theory remains a highly active mathematical branch,[b]impacting many other fields, as theexamples belowillustrate.
Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed underelementary group theory.[25]For example,repeatedapplications of the associativity axiom show that the unambiguity ofa⋅b⋅c=(a⋅b)⋅c=a⋅(b⋅c){\displaystyle a\cdot b\cdot c=(a\cdot b)\cdot c=a\cdot (b\cdot c)}generalizes to more than three factors. Because this implies thatparenthesescan be inserted anywhere within such a series of terms, parentheses are usually omitted.[26]
The group axioms imply that the identity element is unique; that is, there exists only one identity element: any two identity elementse{\displaystyle e}andf{\displaystyle f}of a group are equal, because the group axioms implye=e⋅f=f{\displaystyle e=e\cdot f=f}. It is thus customary to speak oftheidentity element of the group.[27]
The group axioms also imply that the inverse of each element is unique. Let a group elementa{\displaystyle a}have bothb{\displaystyle b}andc{\displaystyle c}as inverses. Thenb=b⋅e=b⋅(a⋅c)=(b⋅a)⋅c=e⋅c=c.{\displaystyle b=b\cdot e=b\cdot (a\cdot c)=(b\cdot a)\cdot c=e\cdot c=c.}
Therefore, it is customary to speak oftheinverse of an element.[27]
Given elementsa{\displaystyle a}andb{\displaystyle b}of a groupG{\displaystyle G}, there is a unique solutionx{\displaystyle x}inG{\displaystyle G}to the equationa⋅x=b{\displaystyle a\cdot x=b}, namelya−1⋅b{\displaystyle a^{-1}\cdot b}.[c][28]It follows that for eacha{\displaystyle a}inG{\displaystyle G}, the functionG→G{\displaystyle G\to G}that maps eachx{\displaystyle x}toa⋅x{\displaystyle a\cdot x}is abijection; it is calledleft multiplicationbya{\displaystyle a}orleft translationbya{\displaystyle a}.
Similarly, givena{\displaystyle a}andb{\displaystyle b}, the unique solution tox⋅a=b{\displaystyle x\cdot a=b}isb⋅a−1{\displaystyle b\cdot a^{-1}}. For eacha{\displaystyle a}, the functionG→G{\displaystyle G\to G}that maps eachx{\displaystyle x}tox⋅a{\displaystyle x\cdot a}is a bijection calledright multiplicationbya{\displaystyle a}orright translationbya{\displaystyle a}.
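A short sketch of the "unique solution" and "translation is a bijection" facts above, written additively in the group of integers modulo 12 (the modulus 12 and the sample elements are arbitrary choices for this illustration):

```python
# Unique solution of a + x = b (mod n) and left translation as a bijection,
# using the additive group of integers modulo n as an example.
n = 12
G = range(n)

a, b = 5, 2
x = (-a + b) % n                                   # the solution a^{-1} * b, written additively
assert (a + x) % n == b % n
assert sum(1 for y in G if (a + y) % n == b % n) == 1   # and it is the only one

# Left translation by a maps the group bijectively onto itself.
image = {(a + y) % n for y in G}
assert image == set(G)
```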
The group axioms for identity and inverses may be "weakened" to assert only the existence of aleft identityandleft inverses. From theseone-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are not weaker.[29]
In particular, assuming associativity and the existence of a left identitye{\displaystyle e}(that is,e⋅f=f{\displaystyle e\cdot f=f}) and a left inversef−1{\displaystyle f^{-1}}for each elementf{\displaystyle f}(that is,f−1⋅f=e{\displaystyle f^{-1}\cdot f=e}), it follows that every left inverse is also a right inverse of the same element as follows.[29]Indeed, one hasf⋅f−1=e⋅(f⋅f−1)=((f−1)−1⋅f−1)⋅(f⋅f−1)=(f−1)−1⋅((f−1⋅f)⋅f−1)=(f−1)−1⋅(e⋅f−1)=(f−1)−1⋅f−1=e.{\displaystyle f\cdot f^{-1}=e\cdot (f\cdot f^{-1})=((f^{-1})^{-1}\cdot f^{-1})\cdot (f\cdot f^{-1})=(f^{-1})^{-1}\cdot ((f^{-1}\cdot f)\cdot f^{-1})=(f^{-1})^{-1}\cdot (e\cdot f^{-1})=(f^{-1})^{-1}\cdot f^{-1}=e.}
Similarly, the left identity is also a right identity:[29]f⋅e=f⋅(f−1⋅f)=(f⋅f−1)⋅f=e⋅f=f.{\displaystyle f\cdot e=f\cdot (f^{-1}\cdot f)=(f\cdot f^{-1})\cdot f=e\cdot f=f.}
These results do not hold if any of these axioms (associativity, existence of left identity and existence of left inverse) is removed. For a structure with a looser definition (like asemigroup) one may have, for example, that a left identity is not necessarily a right identity.
The same result can be obtained by only assuming the existence of a right identity and a right inverse.
However, only assuming the existence of aleftidentity and arightinverse (or vice versa) is not sufficient to define a group. For example, consider the setG={e,f}{\displaystyle G=\{e,f\}}with the operator⋅{\displaystyle \cdot }satisfyinge⋅e=f⋅e=e{\displaystyle e\cdot e=f\cdot e=e}ande⋅f=f⋅f=f{\displaystyle e\cdot f=f\cdot f=f}. This structure does have a left identity (namely,e{\displaystyle e}), and each element has a right inverse (which ise{\displaystyle e}for both elements). Furthermore, this operation is associative (since the product of any number of elements is always equal to the rightmost element in that product, regardless of the order in which these operations are applied). However,(G,⋅){\displaystyle (G,\cdot )}is not a group, since it lacks a right identity.
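The two-element counterexample above is small enough to verify by exhaustive checking. The following sketch confirms associativity, the left identity, and the right inverses, and that no right identity exists (element names match the text; the dictionary encoding is an implementation choice):

```python
# The structure G = {e, f} with x * y = y: left identity and right inverses
# exist and the operation is associative, yet it is not a group.
G = ["e", "f"]
op = {("e", "e"): "e", ("f", "e"): "e", ("e", "f"): "f", ("f", "f"): "f"}

# Associativity (every product equals its rightmost factor).
assert all(op[op[a, b], c] == op[a, op[b, c]] for a in G for b in G for c in G)

# "e" is a left identity ...
assert all(op["e", x] == x for x in G)
# ... and every element has "e" as a right inverse (x * e = e).
assert all(op[x, "e"] == "e" for x in G)

# But no element acts as a right identity, so (G, *) is not a group.
assert not any(all(op[x, y] == x for x in G) for y in G)
```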
When studying sets, one uses concepts such assubset, function, andquotient by an equivalence relation. When studying groups, one uses insteadsubgroups,homomorphisms, andquotient groups. These are the analogues that take the group structure into account.[d]
Group homomorphisms[e]are functions that respect group structure; they may be used to relate two groups. Ahomomorphismfrom a group(G,⋅){\displaystyle (G,\cdot )}to a group(H,∗){\displaystyle (H,*)}is a functionφ:G→H{\displaystyle \varphi :G\to H}such thatφ(a⋅b)=φ(a)∗φ(b){\displaystyle \varphi (a\cdot b)=\varphi (a)*\varphi (b)}for all elementsa{\displaystyle a}andb{\displaystyle b}inG{\displaystyle G}.
It would be natural to require also thatφ{\displaystyle \varphi }respect identities,φ(1G)=1H{\displaystyle \varphi (1_{G})=1_{H}}, and inverses,φ(a−1)=φ(a)−1{\displaystyle \varphi (a^{-1})=\varphi (a)^{-1}}for alla{\displaystyle a}inG{\displaystyle G}. However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation.[30]
Theidentity homomorphismof a groupG{\displaystyle G}is the homomorphismιG:G→G{\displaystyle \iota _{G}:G\to G}that maps each element ofG{\displaystyle G}to itself. Aninverse homomorphismof a homomorphismφ:G→H{\displaystyle \varphi :G\to H}is a homomorphismψ:H→G{\displaystyle \psi :H\to G}such thatψ∘φ=ιG{\displaystyle \psi \circ \varphi =\iota _{G}}andφ∘ψ=ιH{\displaystyle \varphi \circ \psi =\iota _{H}}, that is, such thatψ(φ(g))=g{\displaystyle \psi {\bigl (}\varphi (g){\bigr )}=g}for allg{\displaystyle g}inG{\displaystyle G}and such thatφ(ψ(h))=h{\displaystyle \varphi {\bigl (}\psi (h){\bigr )}=h}for allh{\displaystyle h}inH{\displaystyle H}. Anisomorphismis a homomorphism that has an inverse homomorphism; equivalently, it is abijectivehomomorphism. GroupsG{\displaystyle G}andH{\displaystyle H}are calledisomorphicif there exists an isomorphismφ:G→H{\displaystyle \varphi :G\to H}. In this case,H{\displaystyle H}can be obtained fromG{\displaystyle G}simply by renaming its elements according to the functionφ{\displaystyle \varphi }; then any statement true forG{\displaystyle G}is true forH{\displaystyle H}, provided that any specific elements mentioned in the statement are also renamed.
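As a small illustration of these definitions, the following sketch checks that reduction modulo n is a homomorphism from the integers under addition to the integers modulo n under modular addition, and that it respects the identity and inverses automatically, as stated above (the modulus 7 and the test range are arbitrary choices):

```python
# Reduction modulo n as a group homomorphism from (Z, +) to (Z/nZ, +).
n = 7

def phi(a):
    return a % n

for a in range(-20, 21):
    for b in range(-20, 21):
        # Respecting the operation: phi(a + b) = phi(a) + phi(b) in Z/nZ.
        assert phi(a + b) == (phi(a) + phi(b)) % n
    # The identity and inverses are respected without extra assumptions.
    assert phi(0) == 0
    assert phi(-a) == (-phi(a)) % n
```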
The collection of all groups, together with the homomorphisms between them, form acategory, thecategory of groups.[31]
Aninjectivehomomorphismϕ:G′→G{\displaystyle \phi :G'\to G}factors canonically as an isomorphism followed by an inclusion,G′→∼H↪G{\displaystyle G'\;{\stackrel {\sim }{\to }}\;H\hookrightarrow G}for some subgroupH{\displaystyle H}ofG{\displaystyle G}.
Injective homomorphisms are themonomorphismsin the category of groups.
Informally, asubgroupis a groupH{\displaystyle H}contained within a bigger one,G{\displaystyle G}: it has a subset of the elements ofG{\displaystyle G}, with the same operation.[32]Concretely, this means that the identity element ofG{\displaystyle G}must be contained inH{\displaystyle H}, and wheneverh1{\displaystyle h_{1}}andh2{\displaystyle h_{2}}are both inH{\displaystyle H}, then so areh1⋅h2{\displaystyle h_{1}\cdot h_{2}}andh1−1{\displaystyle h_{1}^{-1}}, so the elements ofH{\displaystyle H}, equipped with the group operation onG{\displaystyle G}restricted toH{\displaystyle H}, indeed form a group. In this case, the inclusion mapH→G{\displaystyle H\to G}is a homomorphism.
In the example of symmetries of a square, the identity and the rotations constitute a subgroupR={id,r1,r2,r3}{\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}}, highlighted in red in the Cayley table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. Thesubgroup testprovides anecessary and sufficient conditionfor a nonempty subsetH{\displaystyle H}of a groupG{\displaystyle G}to be a subgroup: it is sufficient to check thatg−1⋅h∈H{\displaystyle g^{-1}\cdot h\in H}for all elementsg{\displaystyle g}andh{\displaystyle h}inH{\displaystyle H}. Knowing a group'ssubgroupsis important in understanding the group as a whole.[f]
Given any subsetS{\displaystyle S}of a groupG{\displaystyle G}, the subgroupgeneratedbyS{\displaystyle S}consists of all products of elements ofS{\displaystyle S}and their inverses. It is the smallest subgroup ofG{\displaystyle G}containingS{\displaystyle S}.[33]In the example of symmetries of a square, the subgroup generated byr2{\displaystyle r_{2}}andfv{\displaystyle f_{\mathrm {v} }}consists of these two elements, the identity elementid{\displaystyle \mathrm {id} }, and the elementfh=fv⋅r2{\displaystyle f_{\mathrm {h} }=f_{\mathrm {v} }\cdot r_{2}}. Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup.
In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroupH{\displaystyle H}determines left and right cosets, which can be thought of as translations ofH{\displaystyle H}by an arbitrary group elementg{\displaystyle g}. In symbolic terms, theleftandrightcosets ofH{\displaystyle H}, containing an elementg{\displaystyle g}, aregH={g⋅h∣h∈H}{\displaystyle gH=\{g\cdot h\mid h\in H\}}andHg={h⋅g∣h∈H}{\displaystyle Hg=\{h\cdot g\mid h\in H\}}, respectively.
The left cosets of any subgroupH{\displaystyle H}form apartitionofG{\displaystyle G}; that is, theunionof all left cosets is equal toG{\displaystyle G}and two left cosets are either equal or have anemptyintersection.[35]The first caseg1H=g2H{\displaystyle g_{1}H=g_{2}H}happensprecisely wheng1−1⋅g2∈H{\displaystyle g_{1}^{-1}\cdot g_{2}\in H}, i.e., when the two elements differ by an element ofH{\displaystyle H}. Similar considerations apply to the right cosets ofH{\displaystyle H}. The left cosets ofH{\displaystyle H}may or may not be the same as its right cosets. If they are (that is, if allg{\displaystyle g}inG{\displaystyle G}satisfygH=Hg{\displaystyle gH=Hg}), thenH{\displaystyle H}is said to be anormal subgroup.
InD4{\displaystyle \mathrm {D} _{4}}, the group of symmetries of a square, with its subgroupR{\displaystyle R}of rotations, the left cosetsgR{\displaystyle gR}are either equal toR{\displaystyle R}, ifg{\displaystyle g}is an element ofR{\displaystyle R}itself, or otherwise equal toU=fcR={fc,fd,fv,fh}{\displaystyle U=f_{\mathrm {c} }R=\{f_{\mathrm {c} },f_{\mathrm {d} },f_{\mathrm {v} },f_{\mathrm {h} }\}}(highlighted in green in the Cayley table ofD4{\displaystyle \mathrm {D} _{4}}). The subgroupR{\displaystyle R}is normal, becausefcR=U=Rfc{\displaystyle f_{\mathrm {c} }R=U=Rf_{\mathrm {c} }}and similarly for the other elements of the group. (In fact, in the case ofD4{\displaystyle \mathrm {D} _{4}}, the cosets generated by reflections are all equal:fhR=fvR=fdR=fcR{\displaystyle f_{\mathrm {h} }R=f_{\mathrm {v} }R=f_{\mathrm {d} }R=f_{\mathrm {c} }R}.)
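The coset computation above can be checked directly. The sketch below rebuilds the symmetries of the square with the same corner-permutation encoding used in the earlier sketch (labelling and names are again illustrative assumptions), and verifies that the left cosets of the rotation subgroup partition the group and coincide with the right cosets, so the subgroup is normal:

```python
# Cosets of the rotation subgroup inside the symmetry group of the square.
def compose(a, b):
    return tuple(a[b[i]] for i in range(4))

identity = (0, 1, 2, 3)
r1 = (1, 2, 3, 0)                                   # rotation by 90 degrees
r2 = compose(r1, r1)
r3 = compose(r2, r1)
R = {identity, r1, r2, r3}                          # the rotation subgroup

flip = (1, 0, 3, 2)                                 # one reflection
D4 = R | {compose(flip, r) for r in R}              # rotations and reflections
assert len(D4) == 8

def left_coset(g):
    return frozenset(compose(g, h) for h in R)

def right_coset(g):
    return frozenset(compose(h, g) for h in R)

# The left cosets partition the group: there are exactly two of them
# (R itself and the set of reflections), and together they cover everything.
cosets = {left_coset(g) for g in D4}
assert len(cosets) == 2
assert frozenset().union(*cosets) == frozenset(D4)

# R is normal: each left coset coincides with the corresponding right coset.
assert all(left_coset(g) == right_coset(g) for g in D4)
```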
Suppose thatN{\displaystyle N}is a normal subgroup of a groupG{\displaystyle G}, andG/N={gN∣g∈G}{\displaystyle G/N=\{gN\mid g\in G\}}denotes its set of cosets.
Then there is a unique group law onG/N{\displaystyle G/N}for which the mapG→G/N{\displaystyle G\to G/N}sending each elementg{\displaystyle g}togN{\displaystyle gN}is a homomorphism.
Explicitly, the product of two cosetsgN{\displaystyle gN}andhN{\displaystyle hN}is(gh)N{\displaystyle (gh)N}, the coseteN=N{\displaystyle eN=N}serves as the identity ofG/N{\displaystyle G/N}, and the inverse ofgN{\displaystyle gN}in the quotient group is(gN)−1=(g−1)N{\displaystyle (gN)^{-1}=\left(g^{-1}\right)N}.
The groupG/N{\displaystyle G/N}, read as "G{\displaystyle G}moduloN{\displaystyle N}",[36]is called aquotient grouporfactor group.
The quotient group can alternatively be characterized by auniversal property.
The elements of the quotient groupD4/R{\displaystyle \mathrm {D} _{4}/R}areR{\displaystyle R}andU=fvR{\displaystyle U=f_{\mathrm {v} }R}. The group operation on the quotient is shown in the table. For example,U⋅U=fvR⋅fvR=(fv⋅fv)R=R{\displaystyle U\cdot U=f_{\mathrm {v} }R\cdot f_{\mathrm {v} }R=(f_{\mathrm {v} }\cdot f_{\mathrm {v} })R=R}. Both the subgroupR={id,r1,r2,r3}{\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}}and the quotientD4/R{\displaystyle \mathrm {D} _{4}/R}are abelian, butD4{\displaystyle \mathrm {D} _{4}}is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by thesemidirect productconstruction;D4{\displaystyle \mathrm {D} _{4}}is an example.
Thefirst isomorphism theoremimplies that anysurjectivehomomorphismϕ:G→H{\displaystyle \phi :G\to H}factors canonically as a quotient homomorphism followed by an isomorphism:G→G/kerϕ→∼H{\displaystyle G\to G/\ker \phi \;{\stackrel {\sim }{\to }}\;H}.
Surjective homomorphisms are theepimorphismsin the category of groups.
Every group is isomorphic to a quotient of afree group, in many ways.
For example, the dihedral groupD4{\displaystyle \mathrm {D} _{4}}is generated by the right rotationr1{\displaystyle r_{1}}and the reflectionfv{\displaystyle f_{\mathrm {v} }}in a vertical line (every element ofD4{\displaystyle \mathrm {D} _{4}}is a finite product of copies of these and their inverses).
Hence there is a surjective homomorphismϕ{\displaystyle \phi }from the free group⟨r,f⟩{\displaystyle \langle r,f\rangle }on two generators toD4{\displaystyle \mathrm {D} _{4}}sendingr{\displaystyle r}tor1{\displaystyle r_{1}}andf{\displaystyle f}tofv{\displaystyle f_{\mathrm {v} }}.
Elements inkerϕ{\displaystyle \ker \phi }are calledrelations; examples includer4,f2,(r⋅f)2{\displaystyle r^{4},f^{2},(r\cdot f)^{2}}.
In fact, it turns out thatkerϕ{\displaystyle \ker \phi }is the smallest normal subgroup of⟨r,f⟩{\displaystyle \langle r,f\rangle }containing these three elements; in other words, all relations are consequences of these three.
The quotient of the free group by this normal subgroup is denoted⟨r,f∣r4=f2=(r⋅f)2=1⟩{\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle }.
This is called apresentationofD4{\displaystyle \mathrm {D} _{4}}by generators and relations, because the first isomorphism theorem forϕ{\displaystyle \phi }yields an isomorphism⟨r,f∣r4=f2=(r⋅f)2=1⟩→D4{\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle \to \mathrm {D} _{4}}.[37]
A presentation of a group can be used to construct theCayley graph, a graphical depiction of adiscrete group.[38]
Examples and applications of groups abound. A starting point is the groupZ{\displaystyle \mathbb {Z} }of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtainsmultiplicative groups. These groups are predecessors of important constructions inabstract algebra.
Groups are also applied in many other mathematical areas. Mathematical objects are often examined byassociatinggroups to them and studying the properties of the corresponding groups. For example,Henri Poincaréfounded what is now calledalgebraic topologyby introducing thefundamental group.[39]By means of this connection,topological propertiessuch asproximityandcontinuitytranslate into properties of groups.[g]
Elements of the fundamental group of atopological spaceareequivalence classesof loops, where loops are considered equivalent if one can besmoothly deformedinto another, and the group operation is "concatenation" (tracing one loop then the other). For example, as shown in the figure, if the topological space is the plane with one point removed, then loops which do not wrap around the missing point (blue)can be smoothly contracted to a single pointand are the identity element of the fundamental group. A loop which wraps around the missing pointk{\displaystyle k}times cannot be deformed into a loop which wrapsm{\displaystyle m}times (withm≠k{\displaystyle m\neq k}), because the loop cannot be smoothly deformed across the hole, so each class of loops is characterized by itswinding numberaround the missing point. The resulting group is isomorphic to the integers under addition.
In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background.[h]In a similar vein,geometric group theoryemploys geometric concepts, for example in the study ofhyperbolic groups.[40]Further branches crucially applying groups includealgebraic geometryand number theory.[41]
In addition to the above theoretical applications, many practical applications of groups exist.Cryptographyrelies on the combination of the abstract group theory approach together withalgorithmicalknowledge obtained incomputational group theory, in particular when implemented for finite groups.[42]Applications of group theory are not restricted to mathematics; sciences such asphysics,chemistryandcomputer sciencebenefit from the concept.
Many number systems, such as the integers and therationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known asringsand fields. Further abstract algebraic concepts such asmodules,vector spacesandalgebrasalso form groups.
The group of integersZ{\displaystyle \mathbb {Z} }under addition, denoted(Z,+){\displaystyle \left(\mathbb {Z} ,+\right)}, has been described above. The integers, with the operation of multiplication instead of addition,(Z,⋅){\displaystyle \left(\mathbb {Z} ,\cdot \right)}donotform a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example,a=2{\displaystyle a=2}is an integer, but the only solution to the equationa⋅b=1{\displaystyle a\cdot b=1}in this case isb=12{\displaystyle b={\tfrac {1}{2}}}, which is a rational number, but not an integer. Hence not every element ofZ{\displaystyle \mathbb {Z} }has a (multiplicative) inverse.[i]
The desire for the existence of multiplicative inverses suggests consideringfractionsab.{\displaystyle {\frac {a}{b}}.}
Fractions of integers (withb{\displaystyle b}nonzero) are known asrational numbers.[j]The set of all such irreducible fractions is commonly denotedQ{\displaystyle \mathbb {Q} }. There is still a minor obstacle for(Q,⋅){\displaystyle \left(\mathbb {Q} ,\cdot \right)}, the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is nox{\displaystyle x}such thatx⋅0=1{\displaystyle x\cdot 0=1}),(Q,⋅){\displaystyle \left(\mathbb {Q} ,\cdot \right)}is still not a group.
However, the set of allnonzerorational numbersQ∖{0}={q∈Q∣q≠0}{\displaystyle \mathbb {Q} \smallsetminus \left\{0\right\}=\left\{q\in \mathbb {Q} \mid q\neq 0\right\}}does form an abelian group under multiplication, also denotedQ×{\displaystyle \mathbb {Q} ^{\times }}.[k]Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse ofa/b{\displaystyle a/b}isb/a{\displaystyle b/a}, therefore the axiom of the inverse element is satisfied.
The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – ifdivisionby other than zero is possible, such as inQ{\displaystyle \mathbb {Q} }– fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities.[l]
Modular arithmetic for amodulusn{\displaystyle n}defines any two elementsa{\displaystyle a}andb{\displaystyle b}that differ by a multiple ofn{\displaystyle n}to be equivalent, denoted bya≡b(modn){\displaystyle a\equiv b{\pmod {n}}}. Every integer is equivalent to one of the integers from0{\displaystyle 0}ton−1{\displaystyle n-1}, and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalentrepresentative. Modular addition, defined in this way for the integers from0{\displaystyle 0}ton−1{\displaystyle n-1}, forms a group, denoted asZn{\displaystyle \mathrm {Z} _{n}}or(Z/nZ,+){\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)}, with0{\displaystyle 0}as the identity element andn−a{\displaystyle n-a}as the inverse element ofa{\displaystyle a}.
A familiar example is addition of hours on the face of aclock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on9{\displaystyle 9}and is advanced4{\displaystyle 4}hours, it ends up on1{\displaystyle 1}, as shown in the illustration. This is expressed by saying that9+4{\displaystyle 9+4}is congruent to1{\displaystyle 1}"modulo12{\displaystyle 12}" or, in symbols,9+4≡1(mod12).{\displaystyle 9+4\equiv 1{\pmod {12}}.}
For any prime numberp{\displaystyle p}, there is also themultiplicative group of integers modulop{\displaystyle p}.[43]Its elements can be represented by1{\displaystyle 1}top−1{\displaystyle p-1}. The group operation, multiplication modulop{\displaystyle p}, replaces the usual product by its representative, theremainderof division byp{\displaystyle p}. For example, forp=5{\displaystyle p=5}, the four group elements can be represented by1,2,3,4{\displaystyle 1,2,3,4}. In this group,4⋅4≡1mod5{\displaystyle 4\cdot 4\equiv 1{\bmod {5}}}, because the usual product16{\displaystyle 16}is equivalent to1{\displaystyle 1}: when divided by5{\displaystyle 5}it yields a remainder of1{\displaystyle 1}. The primality ofp{\displaystyle p}ensures that the usual product of two representatives is not divisible byp{\displaystyle p}, and therefore that the modular product is nonzero.[m]The identity element is represented by1{\displaystyle 1}, and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integera{\displaystyle a}not divisible byp{\displaystyle p}, there exists an integerb{\displaystyle b}such thata⋅b≡1(modp),{\displaystyle a\cdot b\equiv 1{\pmod {p}},}that is, such thatp{\displaystyle p}evenly dividesa⋅b−1{\displaystyle a\cdot b-1}. The inverseb{\displaystyle b}can be found by usingBézout's identityand the fact that thegreatest common divisorgcd(a,p){\displaystyle \gcd(a,p)}equals1{\displaystyle 1}.[44]In the casep=5{\displaystyle p=5}above, the inverse of the element represented by4{\displaystyle 4}is that represented by4{\displaystyle 4}, and the inverse of the element represented by3{\displaystyle 3}is represented by2{\displaystyle 2}, as3⋅2=6≡1mod5{\displaystyle 3\cdot 2=6\equiv 1{\bmod {5}}}. Hence all group axioms are fulfilled. This example is similar to(Q∖{0},⋅){\displaystyle \left(\mathbb {Q} \smallsetminus \left\{0\right\},\cdot \right)}above: it consists of exactly those elements in the ringZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }that have a multiplicative inverse.[45]These groups, denotedFp×{\displaystyle \mathbb {F} _{p}^{\times }}, are crucial topublic-key cryptography.[n]
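The inverse computation described above can be carried out with the extended Euclidean algorithm, which produces the coefficients in Bézout's identity. The sketch below does this for p = 5 and checks the claims made in the text (the helper name ext_gcd is an illustrative choice):

```python
# The multiplicative group of nonzero residues modulo a prime p, with
# inverses found via Bezout's identity (extended Euclidean algorithm).
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

p = 5
G = range(1, p)

for a in G:
    g, x, _ = ext_gcd(a, p)
    assert g == 1                      # a and p are coprime, since p is prime
    inv = x % p
    assert (a * inv) % p == 1          # inv is the multiplicative inverse of a
    for b in G:
        assert (a * b) % p != 0        # closure: the product is never 0 mod p

# For p = 5 the inverses are as stated above: 4*4 = 16 = 1 and 3*2 = 6 = 1 mod 5.
assert (4 * 4) % p == 1 and (3 * 2) % p == 1
```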
Acyclic groupis a group all of whose elements arepowersof a particular elementa{\displaystyle a}.[46]In multiplicative notation, the elements of the group are…,a−3,a−2,a−1,a0,a,a2,a3,…,{\displaystyle \dots ,a^{-3},a^{-2},a^{-1},a^{0},a,a^{2},a^{3},\dots ,}wherea2{\displaystyle a^{2}}meansa⋅a{\displaystyle a\cdot a},a−3{\displaystyle a^{-3}}stands fora−1⋅a−1⋅a−1=(a⋅a⋅a)−1{\displaystyle a^{-1}\cdot a^{-1}\cdot a^{-1}=(a\cdot a\cdot a)^{-1}}, etc.[o]Such an elementa{\displaystyle a}is called a generator or aprimitive elementof the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as…,(−a)+(−a),−a,0,a,a+a,….{\displaystyle \dots ,(-a)+(-a),-a,0,a,a+a,\dots .}
In the groups(Z/nZ,+){\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)}introduced above, the element1{\displaystyle 1}is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are1{\displaystyle 1}. Any cyclic group withn{\displaystyle n}elements is isomorphic to this group. A second example for cyclic groups is the group ofn{\displaystyle n}thcomplex roots of unity, given bycomplex numbersz{\displaystyle z}satisfyingzn=1{\displaystyle z^{n}=1}. These numbers can be visualized as theverticeson a regularn{\displaystyle n}-gon, as shown in blue in the image forn=6{\displaystyle n=6}. The group operation is multiplication of complex numbers. In the picture, multiplying withz{\displaystyle z}corresponds to acounter-clockwiserotation by 60°.[47]Fromfield theory, the groupFp×{\displaystyle \mathbb {F} _{p}^{\times }}is cyclic for primep{\displaystyle p}: for example, ifp=5{\displaystyle p=5},3{\displaystyle 3}is a generator since31=3{\displaystyle 3^{1}=3},32=9≡4{\displaystyle 3^{2}=9\equiv 4},33≡2{\displaystyle 3^{3}\equiv 2}, and34≡1{\displaystyle 3^{4}\equiv 1}.
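A quick check of the two generators mentioned above (the additive modulus 6 is an arbitrary illustrative choice):

```python
# 3 generates the multiplicative group modulo 5, and 1 generates (Z/nZ, +).
p = 5
powers_of_3 = {pow(3, k, p) for k in range(1, p)}       # 3, 9, 27, 81 mod 5
assert powers_of_3 == {1, 2, 3, 4}                      # every nonzero residue appears

n = 6
multiples_of_1 = {(k * 1) % n for k in range(n)}
assert multiples_of_1 == set(range(n))                  # 1 generates the whole group
```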
Some cyclic groups have an infinite number of elements. In these groups, for every non-zero elementa{\displaystyle a}, all the powers ofa{\displaystyle a}are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to(Z,+){\displaystyle (\mathbb {Z} ,+)}, the group of integers under addition introduced above.[48]As these two prototypes are both abelian, so are all cyclic groups.
The study of finitely generated abelian groups is quite mature, including thefundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such ascenterandcommutator, describe the extent to which a given group is not abelian.[49]
Symmetry groupsare groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below).[51]Conceptually, group theory can be thought of as the study of symmetry.[p]Symmetries in mathematicsgreatly simplify the study ofgeometricaloranalyticalobjects. A group is said toacton another mathematical objectX{\displaystyle X}if every group element can be associated to some operation onX{\displaystyle X}and the composition of these operations follows the group law. For example, an element of the(2,3,7) triangle groupacts on a triangulartilingof thehyperbolic planeby permuting the triangles.[50]By a group action, the group pattern is connected to the structure of the object being acted on.
In chemistry,point groupsdescribemolecular symmetries, whilespace groupsdescribe crystal symmetries incrystallography. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification ofquantum mechanicalanalysis of these properties.[52]For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved.[53]
Group theory helps predict the changes in physical properties that occur when a material undergoes aphase transition, for example, from a cubic to a tetrahedral crystalline form. An example isferroelectricmaterials, where the change from a paraelectric to a ferroelectric state occurs at theCurie temperatureand is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called softphononmode, a vibrational lattice mode that goes to zero frequency at the transition.[54]
Suchspontaneous symmetry breakinghas found further application in elementary particle physics, where its occurrence is related to the appearance ofGoldstone bosons.[55]
Finite symmetry groups such as theMathieu groupsare used incoding theory, which is in turn applied inerror correctionof transmitted data, and inCD players.[59]Another application isdifferential Galois theory, which characterizes functions havingantiderivativesof a prescribed form, giving group-theoretic criteria for when solutions of certaindifferential equationsare well-behaved.[q]Geometric properties that remain stable under group actions are investigated in(geometric)invariant theory.[60]
Matrix groupsconsist ofmatricestogether withmatrix multiplication. Thegeneral linear groupGL(n,R){\displaystyle \mathrm {GL} (n,\mathbb {R} )}consists of allinvertiblen{\displaystyle n}-by-n{\displaystyle n}matrices with real entries.[61]Its subgroups are referred to asmatrix groupsorlinear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is thespecial orthogonal groupSO(n){\displaystyle \mathrm {SO} (n)}. It describes all possible rotations inn{\displaystyle n}dimensions.Rotation matricesin this group are used incomputer graphics.[62]
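A minimal sketch of rotation matrices as a matrix group, using the two-dimensional case for brevity (the sample angles are arbitrary); it checks that composing two rotations adds their angles, that the inverse is the rotation by the opposite angle, and that each rotation matrix has determinant 1:

```python
import math

# 2-by-2 rotation matrices, as used for rotations in computer graphics;
# the same pattern extends to rotations about an axis in three dimensions.
def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

a, b = 0.7, 1.9
# Composing two rotations corresponds to adding their angles ...
assert close(matmul(rotation(a), rotation(b)), rotation(a + b))
# ... and the inverse of a rotation is the rotation by the opposite angle.
assert close(matmul(rotation(a), rotation(-a)), rotation(0.0))
# Each rotation matrix has determinant 1, as required for a special orthogonal matrix.
R = rotation(a)
assert abs(R[0][0] * R[1][1] - R[0][1] * R[1][0] - 1.0) < 1e-9
```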
Representation theoryis both an application of the group concept and important for a deeper understanding of groups.[63][64]It studies the group by its group actions on other spaces. A broad class ofgroup representationsare linear representations in which the group acts on a vector space, such as the three-dimensionalEuclidean spaceR3{\displaystyle \mathbb {R} ^{3}}. A representation of a groupG{\displaystyle G}on ann{\displaystyle n}-dimensionalreal vector space is simply a group homomorphismρ:G→GL(n,R){\displaystyle \rho :G\to \mathrm {GL} (n,\mathbb {R} )}from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices making it accessible to explicit computations.[r]
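As a small sketch of this idea, the cyclic group of order 4 can be represented on the plane by sending the class of k to the k-th power of a quarter-turn matrix; the check below confirms the defining homomorphism property (the particular group and matrix are illustrative choices):

```python
# A linear representation rho of Z/4Z on the plane by integer matrices.
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

I = ((1, 0), (0, 1))
Q = ((0, -1), (1, 0))           # rotation by 90 degrees

def rho(k):                      # the class of k is sent to Q^k
    M = I
    for _ in range(k % 4):
        M = matmul(M, Q)
    return M

for g in range(4):
    for h in range(4):
        # The group operation translates to matrix multiplication.
        assert rho((g + h) % 4) == matmul(rho(g), rho(h))
assert rho(0) == I               # the identity element acts as the identity matrix
```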
A group action gives further means to study the object being acted on.[s]On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups andtopological groups, especially (locally)compact groups.[63][65]
Galois groupswere developed to help solve polynomial equations by capturing their symmetry features.[66][67]For example, the solutions of thequadratic equationax2+bx+c=0{\displaystyle ax^{2}+bx+c=0}are given byx=−b±b2−4ac2a.{\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.}Each solution can be obtained by replacing the±{\displaystyle \pm }sign by+{\displaystyle +}or−{\displaystyle -}; analogous formulae are known forcubicandquartic equations, but donotexist in general fordegree 5and higher.[68]In thequadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomial equations and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular theirsolvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, androotssimilar to the formula above.[69]
ModernGalois theorygeneralizes the above type of Galois groups by shifting to field theory and consideringfield extensionsformed as thesplitting fieldof a polynomial. This theory establishes—via thefundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics.[70]
A group is calledfiniteif it has afinite number of elements. The number of elements is called theorderof the group.[71]An important class is thesymmetric groupsSN{\displaystyle \mathrm {S} _{N}}, the groups of permutations ofN{\displaystyle N}objects. For example, thesymmetric group on 3 lettersS3{\displaystyle \mathrm {S} _{3}}is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 6 (factorialof 3) elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric groupSN{\displaystyle \mathrm {S} _{N}}for a suitable integerN{\displaystyle N}, according toCayley's theorem. Parallel to the group of symmetries of the square above,S3{\displaystyle \mathrm {S} _{3}}can also be interpreted as the group of symmetries of anequilateral triangle.
The order of an elementa{\displaystyle a}in a groupG{\displaystyle G}is the least positive integern{\displaystyle n}such thatan=e{\displaystyle a^{n}=e}, wherean{\displaystyle a^{n}}representsa⋯a⏟nfactors,{\displaystyle \underbrace {a\cdots a} _{n{\text{ factors}}},}that is, application of the operation "⋅{\displaystyle \cdot }" ton{\displaystyle n}copies ofa{\displaystyle a}. (If "⋅{\displaystyle \cdot }" represents multiplication, thenan{\displaystyle a^{n}}corresponds to then{\displaystyle n}th power ofa{\displaystyle a}.) In infinite groups, such ann{\displaystyle n}may not exist, in which case the order ofa{\displaystyle a}is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element.
More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups:Lagrange's Theoremstates that for a finite groupG{\displaystyle G}the order of any finite subgroupH{\displaystyle H}dividesthe order ofG{\displaystyle G}. TheSylow theoremsgive a partial converse.
The dihedral groupD4{\displaystyle \mathrm {D} _{4}}of symmetries of a square is a finite group of order 8. In this group, the order ofr1{\displaystyle r_{1}}is 4, as is the order of the subgroupR{\displaystyle R}that this element generates. The order of the reflection elementsfv{\displaystyle f_{\mathrm {v} }}etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groupsFp×{\displaystyle \mathbb {F} _{p}^{\times }}of multiplication modulo a primep{\displaystyle p}have orderp−1{\displaystyle p-1}.
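The element orders discussed above are easy to compute by machine. The following sketch does so for the multiplicative group modulo 7 (an arbitrary illustrative prime, so the group order is 6) and checks Lagrange's theorem and the relation between an element's order and the cyclic subgroup it generates:

```python
# Orders of elements in the multiplicative group modulo a prime p.
p = 7
G = range(1, p)
group_order = p - 1

def element_order(a):
    n, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        n += 1
    return n

for a in G:
    assert group_order % element_order(a) == 0     # Lagrange: order(a) divides |G|

# The order of an element equals the order of the cyclic subgroup it generates.
a = 2
subgroup = {pow(a, k, p) for k in range(1, group_order + 1)}
assert len(subgroup) == element_order(a)
```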
Any finite abelian group is isomorphic to aproductof finite cyclic groups; this statement is part of thefundamental theorem of finitely generated abelian groups.
Any group of prime orderp{\displaystyle p}is isomorphic to the cyclic groupZp{\displaystyle \mathrm {Z} _{p}}(a consequence ofLagrange's theorem).
Any group of orderp2{\displaystyle p^{2}}is abelian, isomorphic toZp2{\displaystyle \mathrm {Z} _{p^{2}}}orZp×Zp{\displaystyle \mathrm {Z} _{p}\times \mathrm {Z} _{p}}.
But there exist nonabelian groups of orderp3{\displaystyle p^{3}}; the dihedral groupD4{\displaystyle \mathrm {D} _{4}}of order23{\displaystyle 2^{3}}above is an example.[72]
When a groupG{\displaystyle G}has a normal subgroupN{\displaystyle N}other than{1}{\displaystyle \{1\}}andG{\displaystyle G}itself, questions aboutG{\displaystyle G}can sometimes be reduced to questions aboutN{\displaystyle N}andG/N{\displaystyle G/N}. A nontrivial group is calledsimpleif it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by theJordan–Hölder theorem.
Computer algebra systemshave been used tolist all groups of order up to 2000.[t]Butclassifyingall finite groups is a problem considered too hard to be solved.
The classification of all finitesimplegroups was a major achievement in contemporary group theory. There areseveral infinite familiesof such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largestsporadic groupis called themonster group. Themonstrous moonshineconjectures, proved byRichard Borcherds, relate the monster group to certainmodular functions.[73]
The gap between the classification of simple groups and the classification of all groups lies in theextension problem.[74]
An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. So, a group is a setG{\displaystyle G}equipped with a binary operationG×G→G{\displaystyle G\times G\rightarrow G}(the group operation), aunary operationG→G{\displaystyle G\rightarrow G}(which provides the inverse) and anullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoidsexistential quantifiersand is used in computing with groups and forcomputer-aided proofs.
This way of defining groups lends itself to generalizations such as the notion ofgroup objectin a category. Briefly, this is an object withmorphismsthat mimic the group axioms.[75]
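A minimal sketch of the operational definition above: a group is packaged as a carrier set together with a binary operation, a unary inverse operation, and a constant for the identity, and the axioms become equations that can be checked exhaustively for a finite example (the class name, field names, and the modulus 5 are assumptions of this illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Group:
    elements: frozenset
    op: Callable        # binary operation
    inv: Callable       # unary inverse operation
    identity: object    # nullary operation: the identity constant

    def is_valid(self) -> bool:
        E, op, inv, e = self.elements, self.op, self.inv, self.identity
        return (
            all(op(a, b) in E for a in E for b in E)                    # closure
            and all(op(op(a, b), c) == op(a, op(b, c))
                    for a in E for b in E for c in E)                   # associativity
            and e in E and all(op(e, a) == a == op(a, e) for a in E)    # identity
            and all(op(a, inv(a)) == e == op(inv(a), a) for a in E)     # inverses
        )

# Example: the integers modulo 5 under addition.
n = 5
Z5 = Group(frozenset(range(n)), lambda a, b: (a + b) % n, lambda a: (-a) % n, 0)
assert Z5.is_valid()
```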
Sometopological spacesmay be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally,g⋅h{\displaystyle g\cdot h}andg−1{\displaystyle g^{-1}}must not vary wildly ifg{\displaystyle g}andh{\displaystyle h}vary only a little. Such groups are calledtopological groups, and they are the group objects in thecategory of topological spaces.[76]The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any othertopological field, such as the field of complex numbers or the field ofp-adic numbers. These examples arelocally compact, so they haveHaar measuresand can be studied viaharmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over alocal fieldoradele ring; these are basic to number theory.[77]Galois groups of infinite algebraic field extensions are equipped with theKrull topology, which plays a role ininfinite Galois theory.[78]A generalization used in algebraic geometry is theétale fundamental group.[79]
ALie groupis a group that also has the structure of adifferentiable manifold; informally, this means that itlooks locally likea Euclidean space of some fixed dimension.[80]Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to besmooth.
A standard example is the general linear group introduced above: it is anopen subsetof the space of alln{\displaystyle n}-by-n{\displaystyle n}matrices, because it is given by the inequalitydet(A)≠0,{\displaystyle \det(A)\neq 0,}whereA{\displaystyle A}denotes ann{\displaystyle n}-by-n{\displaystyle n}matrix.[81]
Lie groups are of fundamental importance in modern physics:Noether's theoremlinks continuous symmetries toconserved quantities.[82]Rotation, as well as translations inspaceandtime, are basic symmetries of the laws ofmechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description.[u]Another example is the group ofLorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry ofMinkowski space. The latter serves—in the absence of significantgravitation—as a model ofspacetimeinspecial relativity.[83]The full symmetry group of Minkowski space, i.e., including translations, is known as thePoincaré group. By the above, it plays a pivotal role in special relativity and, by implication, forquantum field theories.[84]Symmetries that vary with locationare central to the modern description of physical interactions with the help ofgauge theory. An important example of a gauge theory is theStandard Model, which describes three of the four knownfundamental forcesand classifies all knownelementary particles.[85]
More general structures may be defined by relaxing some of the axioms defining a group.[31][86][87]The table gives a list of several structures generalizing groups.
For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called amonoid. Thenatural numbersN{\displaystyle \mathbb {N} }(including zero) under addition form a monoid, as do the nonzero integers under multiplication(Z∖{0},⋅){\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )}. Adjoining inverses of all elements of the monoid(Z∖{0},⋅){\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )}produces a group(Q∖{0},⋅){\displaystyle (\mathbb {Q} \smallsetminus \{0\},\cdot )}, and likewise adjoining inverses to any (abelian) monoidM{\displaystyle M}produces a group known as theGrothendieck groupofM{\displaystyle M}.
A group can be thought of as asmall categorywith one objectx{\displaystyle x}in which every morphism is an isomorphism: given such a category, the setHom(x,x){\displaystyle \operatorname {Hom} (x,x)}is a group; conversely, given a groupG{\displaystyle G}, one can build a small category with one objectx{\displaystyle x}in whichHom(x,x)≃G{\displaystyle \operatorname {Hom} (x,x)\simeq G}.
More generally, agroupoidis any small category in which every morphism is an isomorphism.
In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined:fg{\displaystyle fg}is defined only when the source off{\displaystyle f}matches the target ofg{\displaystyle g}.
Groupoids arise in topology (for instance, thefundamental groupoid) and in the theory ofstacks.
Finally, it is possible to generalize any of these concepts by replacing the binary operation with ann-aryoperation (i.e., an operation takingnarguments, for some nonnegative integern). With the proper generalization of the group axioms, this gives a notion ofn-ary group.[88]
|
https://en.wikipedia.org/wiki/Group_(mathematics)
|
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is theAnglicisedversion of the ScandinavianBlåtand/Blåtann(or inOld Norseblátǫnn). It was theepithetof King Harald Bluetooth, who united the disparate
Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logois abind runemerging theYounger Futharkrunes(ᚼ,Hagall) and(ᛒ,Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" atCOMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson modelT39that actually made it to store shelves in June 2001. However Ericsson released the R520m in Quarter 1 of 2001,[23]making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
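The channelization described above implies a simple mapping from channel index to centre frequency: 79 classic channels spaced 1 MHz apart starting at 2402 MHz, and 40 Bluetooth Low Energy channels spaced 2 MHz apart. The following sketch computes these frequencies; it is an illustration derived from the spacing stated above, not the normative channel plan from the specification:

```python
# Centre frequencies implied by the stated channel spacing (in MHz).
classic_channels = [2402 + k for k in range(79)]        # 1 MHz spacing
ble_channels = [2402 + 2 * k for k in range(40)]        # 2 MHz spacing

assert classic_channels[0] == 2402 and classic_channels[-1] == 2480
assert ble_channels[0] == 2402 and ble_channels[-1] == 2480
# All channels fall inside the 2.4 GHz ISM band.
assert all(2400 <= f <= 2483.5 for f in classic_channels + ble_channels)
print(f"{len(classic_channels)} classic channels, {len(ble_channels)} BLE channels")
```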
Originally,Gaussian frequency-shift keying(GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK(differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneousbit rateof 1Mbit/sis possible. The termEnhanced Data Rate(EDR) is used to describe π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28]whichuses the same spectrum but somewhat differently.
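A minimal sketch of the classic BR/EDR slot bookkeeping described above, under the single-slot-packet simplification: two 312.5 μs clock ticks form one 625 μs slot, the master transmits in even-numbered slots, and the slave in odd-numbered ones (the function name and the simplification are assumptions of this illustration):

```python
# Slot bookkeeping for classic Bluetooth BR/EDR with single-slot packets.
TICK_US = 312.5
SLOT_US = 2 * TICK_US            # two clock ticks make one 625 microsecond slot

def transmitter_at(clock_tick: int) -> str:
    slot = clock_tick // 2       # two ticks per slot
    return "master" if slot % 2 == 0 else "slave"

assert SLOT_US == 625
assert transmitter_at(0) == "master"     # ticks 0-1 -> slot 0 (even)
assert transmitter_at(2) == "slave"      # ticks 2-3 -> slot 1 (odd)
assert transmitter_at(5) == "master"     # ticks 4-5 -> slot 2 (even)
```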
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and theair and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment, as well as some high-definition headsets, modems, hearing aids[53] and even watches.[54] Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55] Bluetooth devices can advertise all of the services they provide.[56] This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58] Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59] Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58] Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58] The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third-party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58] Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.[60]
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.[64][65]
NetBSD has included Bluetooth since its v4.0 release.[66][67] Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71] A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998.[74] In 2014 it had a membership of over 30,000 companies worldwide.[75] It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible with all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s.[79] EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK.[81] EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection, and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81] was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84] or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), the Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted on 30 June 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier designs.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108] it introduces features for the Internet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support, with its Xperia XZ Premium in February 2017 during Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services, such as location-relevant navigation,[115] of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
Seeking to extend the compatibility of Bluetooth devices, the devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
The hardware that makes up a Bluetooth device consists, logically, of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device, although some functions may be delegated to hardware. The Link Controller is responsible for baseband processing and the management of ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC) and data encryption. The CPU of the device is responsible for handling the host device's Bluetooth-related instructions, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, whose function is to communicate with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135] Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other link managers and communicates with them via the Link Manager Protocol (LMP). To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU.
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower-layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands-Free Profile (HFP), Advanced Audio Distribution Profile (A2DP), etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128).
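As a sketch of how short-form UUIDs relate to full 128-bit UUIDs: 16-bit assigned numbers are embedded into the Bluetooth Base UUID (00000000-0000-1000-8000-00805F9B34FB). The helper below is hypothetical, and the 0x110B value is assumed here to be the A2DP audio-sink service class:

```python
import uuid

# Bluetooth Base UUID used to expand 16-bit assigned numbers.
BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_short_uuid(short_uuid):
    """Hypothetical helper: place a 16-bit assigned number into the base UUID."""
    return uuid.UUID(int=BASE_UUID.int | (short_uuid << 96))

# 0x110B: assumed to be the A2DP audio-sink service class.
print(expand_short_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```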
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial-port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
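For instance, on Linux with BlueZ, Python's standard socket module exposes RFCOMM directly. The following minimal client sketch assumes an already-paired device; the address and channel are placeholders:

```python
import socket

# Minimal RFCOMM client sketch (Linux/BlueZ only; address and channel are placeholders).
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
sock.connect(("00:11:22:33:44:55", 1))   # (remote Bluetooth address, RFCOMM channel)
sock.send(b"AT\r")                        # e.g. an AT command carried over RFCOMM
reply = sock.recv(1024)                   # read whatever the remote device answers
sock.close()
```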
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel; the same transport is intended for the video distribution profile in Bluetooth transmission.
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
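The 1/3 rate FEC is, in essence, a simple repetition code in which each bit is sent three times and decoded by majority vote. A minimal sketch (function names are hypothetical):

```python
def fec_1_3_encode(bits):
    # Repeat each bit three times (simple repetition code).
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_1_3_decode(coded):
    # Majority vote over each group of three received bits.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

sent = fec_1_3_encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[3] = 1                        # simulate a single bit error on the air
assert fec_1_3_decode(sent) == [1, 0, 1]
```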
Any Bluetooth device indiscoverable modetransmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names, and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices, so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136] The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerability exploits was published in 2007 by Andreas Becker.[137]
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, the NIST document includes security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form ofDoSis also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141] In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142] In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment.[143] In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.[144] The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145] This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform), using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers, allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at the Singapore University of Technology and Design, the Helmholtz Center for Information Security, and the University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth), in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160] Google released an Android security patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These six new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range,[164] which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones.[165] UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
https://en.wikipedia.org/wiki/Bluetooth
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key.[1][2] Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.[3] There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes,[4] so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems.
Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
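The scaling difference is easy to quantify: with symmetric keys, every pair of users needs its own secret, whereas with public-key cryptography each user needs only one key pair. A minimal sketch (the function names are illustrative):

```python
def symmetric_keys_needed(users):
    # One shared secret per pair of users.
    return users * (users - 1) // 2

def public_key_pairs_needed(users):
    # One key pair per user; the public halves can be published openly.
    return users

print(symmetric_keys_needed(100))    # 4950 secret keys to distribute and protect
print(public_key_pairs_needed(100))  # 100 key pairs
```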
The two best-known types of public key cryptography are digital signature and public-key encryption:
For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
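A minimal sketch of this signing workflow, assuming the third-party Python cryptography package and using Ed25519 as one example signature scheme (the message content is illustrative):

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()   # kept secret by the publisher
verify_key = signing_key.public_key()                # shipped with the software

update = b"contents of a software update"            # illustrative payload
signature = signing_key.sign(update)

# Raises cryptography.exceptions.InvalidSignature if the update was tampered with.
verify_key.verify(signature, update)
```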
For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext.
Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages; an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is.[9][10][11][12] Public-key encryption on its own also does not tell the recipient anything about who sent a message[8]:283[13][14]; it just conceals the content of the message.
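A minimal sketch of encrypting a short message to a public key, again assuming the Python cryptography package and using RSA with OAEP padding as one possible scheme (longer messages are usually handled with hybrid encryption, discussed below):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt; only the private key holder can decrypt.
ciphertext = key.public_key().encrypt(b"message for the journalist", oaep)
plaintext = key.decrypt(ciphertext, oaep)
assert plaintext == b"message for the journalist"
```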
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trustable by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach.
The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication.
Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols.
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key with a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
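A minimal hybrid-encryption sketch, assuming the Python cryptography package: a fresh AES-GCM session key encrypts the bulk data, and RSA-OAEP wraps that session key for the recipient (key sizes and message content are illustrative):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the bulk data with a fresh symmetric session key,
# then wrap that session key with the recipient's public key.
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"large message body", None)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the data.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"large message body"
```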
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.[15][16]
All public key schemes are in theory susceptible to a "brute-force key search attack".[17] However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach.[citation needed] None of these are sufficiently improved to be actually practical, however.
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack.[18] As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.[citation needed]
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.[19]
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.[citation needed]
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.[20][21][22]
One approach to prevent such attacks involves the use of a public key infrastructure (PKI), a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, the subject field, the software they use, etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party.[23]The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys.
In his 1874 book The Principles of Science, William Stanley Jevons wrote:[24]
Can the reader say what two numbers multiplied together will produce the number 8616460799?[25] I think it unlikely that anyone but myself will ever know.[24]
Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."[26]
In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it.[27][28]
In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange.
The scheme was also passed to the US's National Security Agency.[29] Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential.
—Ralph Benjamin[29]
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.[30]
In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange.[31] This was the first published practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography, although cryptography itself dates back more than 2,000 years.[32]
In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978 in Martin Gardner's Scientific American column, and the algorithm came to be known as RSA, from their initials.[33] RSA uses exponentiation modulo a product of two very large primes to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American.[34]
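A toy version of RSA with deliberately tiny primes (far too small to be secure, but enough to show the modular-exponentiation mechanics) can be written directly in Python:

```python
p, q = 61, 53               # toy primes; real keys use primes hundreds of digits long
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # 2753, the private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)          # 2790
assert pow(ciphertext, d, n) == message  # decryption recovers the original
```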
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC.
Examples of well-regarded asymmetric key techniques for varied purposes include:
Examples of asymmetric key algorithms not yet widely adopted include:
Examples of notable – yet insecure – asymmetric key algorithms include:
Examples of protocols using asymmetric key algorithms include:
https://en.wikipedia.org/wiki/Public-key_cryptography
In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing.[1] Modern encryption schemes use the concepts of public-key[2] and symmetric-key.[1] Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is "non-standard," which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes.[3] One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded by anyone who knew the fixed shift number.[4]
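A minimal sketch of the Caesar cipher as described above, with a hypothetical helper that shifts letters and leaves other characters untouched:

```python
def caesar(text, shift):
    """Hypothetical helper: shift alphabetic characters, leave the rest alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)        # 'DWWDFN DW GDZQ'
assert caesar(ciphertext, -3) == "ATTACK AT DAWN"
```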
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis, which was an attempt to crack ciphers systematically, including the Caesar cipher.[3] This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by Al-Qalqashandi (1355–1418)[2] and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis.
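A minimal sketch of this kind of frequency analysis against a Caesar-style cipher, assuming English plaintext in which 'E' is the most common letter (the function name is hypothetical):

```python
from collections import Counter

def guess_caesar_shift(ciphertext):
    """Hypothetical helper: assume the most frequent letter stands for 'E'."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    top_letter, _ = Counter(letters).most_common(1)[0]
    return (ord(top_letter) - ord('E')) % 26

# "DEFEND THE EAST WALL OF THE CASTLE" shifted by 3:
ciphertext = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"
print(guess_caesar_shift(ciphertext))   # 3
```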
Around 1790,Thomas Jeffersontheorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or theJefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.[5]
A similar device to the Jefferson Disk, theM-94, was developed independently in 1917 by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.[6]
In World War II, the Axis powers used a more advanced version of the M-94 called theEnigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours.[7]The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Today, encryption is used in the transfer of communication over theInternetfor security and commerce.[1]As computing power continues to increase, computer encryption is constantly evolving to preventeavesdroppingattacks.[8]One of the first "modern" cipher suites,DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 byEFF'sbrute-forceDES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such asAES(256-bit mode),Twofish,ChaCha20-Poly1305andSerpent(configurable up to 512-bit). Cipher suites that use a 128-bit or larger key, like AES, cannot practically be brute-forced, because a 128-bit key space alone contains about 3.4 × 10^38 possible keys. The most likely way to crack ciphers with large key sizes is to find vulnerabilities in the cipher itself, such as inherent biases andbackdoors, or to exploit physical side effects throughside-channel attacks. For example,RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
In the context of cryptography, encryption serves as a mechanism to ensureconfidentiality.[1]Since data may be visible on the Internet, sensitive information such aspasswordsand personal communication may be exposed to potentialinterceptors.[1]The process of encrypting and decrypting messages involveskeys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).[9][10]
Many complex cryptographic algorithms often use simplemodular arithmeticin their implementations.[11]
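As a small illustration (a Python sketch with arbitrary numbers, not taken from any particular cipher), every intermediate result is reduced modulo n, so even enormous exponentiations stay manageable:

```python
# Basic modular arithmetic: results are always reduced modulo n.
n = 1000003               # an arbitrary modulus chosen for illustration
a = 2

print((a + 999_999) % n)  # modular addition
print((a * 123_456) % n)  # modular multiplication
print(pow(a, 10**9, n))   # modular exponentiation of a huge power, computed
                          # by square-and-multiply without ever forming the
                          # full 10**9-bit intermediate value
```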
Insymmetric-keyschemes,[12]the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages.
Inpublic-key cryptographyschemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read.[13]Public-key encryption was first described in a secret document in 1973;[14]beforehand, all encryption schemes were symmetric-key (also called private-key).[15]: 478Although published later, the work of Diffie and Hellman appeared in a journal with a large readership, and the value of the methodology was explicitly described.[16]The method became known as theDiffie-Hellman key exchange.
RSA (Rivest–Shamir–Adleman)is another notable public-keycryptosystem. Created in 1978, it is still used today for applications involvingdigital signatures.[17]Usingnumber theory, the RSA algorithm selects twoprime numbers, which help generate both the encryption and decryption keys.[18]
A publicly available public-key encryption application calledPretty Good Privacy(PGP) was written in 1991 byPhil Zimmermann, and distributed free of charge with source code. PGP was purchased bySymantecin 2010 and is regularly updated.[19]
Encryption has long been used bymilitariesandgovernmentsto facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, theComputer Security Institutereported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage.[20]Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g.USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail.[21][22][23]Digital rights managementsystems, which prevent unauthorized use or reproduction of copyrighted material and protect software againstreverse engineering(see alsocopy protection), are another somewhat different example of using encryption on data at rest.[24]
Encryption is also used to protect data in transit, for example data being transferred vianetworks(e.g. the Internet,e-commerce),mobile telephones,wireless microphones,wireless intercomsystems,Bluetoothdevices and bankautomatic teller machines. There have been numerous reports of data in transit being intercepted in recent years.[25]Data should also be encrypted when transmitted across networks in order to protect againsteavesdroppingof network traffic by unauthorized users.[26]
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is calledcrypto-shredding. An example implementation of this method can be found oniOSdevices, where the cryptographic key is kept in a dedicated 'effaceablestorage'.[27]Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method.[28]For example, an early encryption standard,DES(Data Encryption Standard), used a 56-bit key, meaning it had 2^56 possible keys. With today's computing power, a 56-bit key is no longer secure, being vulnerable tobrute force attacks.[29]
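To get a sense of scale, the sketch below compares the 56-bit and 128-bit key spaces under a purely hypothetical rate of one billion key trials per second (the rate is an assumption chosen for illustration, not a measured figure):

```python
# Brute-force effort for 56-bit vs. 128-bit keys at a hypothetical 10**9 trials/s.
TRIALS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (56, 128):
    keys = 2 ** bits
    years = keys / TRIALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: {keys:.3e} keys, ~{years:.3e} years to exhaust")
```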
Quantum computinguses properties ofquantum mechanicsin order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers.[30]This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create asemiprime numberfor its public key. Recovering the plaintext without the private key requires this semiprime number to be factored, which can take a very long time with modern computers; it would take a conventional supercomputer anywhere from weeks to months to factor such a number.[citation needed]However, quantum computing can usequantum algorithmsto factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks.[31]Other encryption techniques likeelliptic curve cryptographyand symmetric key encryption are also vulnerable to quantum computing.[citation needed]
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental computational devices rather than general-purpose computers.[32]Furthermore, advances in quantum computing can also be used in favor of encryption. TheNational Security Agency(NSA) is currently preparing post-quantum encryption standards for the future.[33]Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.[32]
Encryption is an important tool but is not sufficient alone to ensure thesecurityorprivacyof sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by acloudservice for example.Homomorphic encryptionandsecure multi-party computationare emerging techniques to compute encrypted data; these techniques are general andTuring completebut incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks,[34]stolen ciphertext attacks,[35]attacks on encryption keys,[36]insider attacks, data corruption or integrity attacks,[37]data destruction attacks, andransomwareattacks. Data fragmentation[38]andactive defense[39]data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.[40]
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate[41]started around the 1990s, when the US government tried to ban cryptography on the grounds that it would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem, because it makes it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when large technology companies such as Apple and Google set encryption by default in their devices. This was the start of a series of controversies that pits governments, companies and internet users against one another.
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of amessage authentication code(MAC) or adigital signature, usually produced by ahashing algorithmor aPGP signature.Authenticated encryptionalgorithms are designed to provide both encryption and integrity protection together. Standards forcryptographic softwareandhardware to perform encryptionare widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for exampletraffic analysis,TEMPEST, orTrojan horse.[42]
Integrity protection mechanisms such asMACsanddigital signaturesmust be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a messageend-to-endalong its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correctkeysand has not been tampered with. If an endpoint device has been configured to trust aroot certificatethat an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing aman-in-the-middle attackanywhere along the message's path. The common practice ofTLS interceptionby network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.[43]
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message'slengthis a form ofmetadatathat can still leak sensitive information about the message. For example, the well-knownCRIMEandBREACHattacks againstHTTPSwereside-channel attacksthat relied on information leakage via the length of encrypted content.[44]Traffic analysisis a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Paddinga message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasingbandwidth overhead. Messages may be paddedrandomlyordeterministically, with each approach having different tradeoffs. Encrypting and padding messages to formpadded uniform random blobs or PURBsis a practice guaranteeing that the cipher text leaks nometadataabout its cleartext's content, and leaks asymptotically minimalO(loglogM){\displaystyle O(\log \log M)}informationvia its length.[45]
|
https://en.wikipedia.org/wiki/Encryption
|
Key managementrefers to management ofcryptographic keysin acryptosystem. This includes dealing with the generation, exchange, storage, use,crypto-shredding(destruction) and replacement of keys. It includescryptographic protocoldesign,key servers, user procedures, and other relevant protocols.[1][2]
Key management concerns keys at the user level, either between users or systems. This is in contrast tokey scheduling, which typically refers to the internal handling of keys within the operation of a cipher.
Successful key management is critical to the security of a cryptosystem. It is the more challenging side ofcryptographyin a sense that it involves aspects of social engineering such as system policy, user training, organizational and departmental interactions, and coordination between all of these elements, in contrast to pure mathematical practices that can be automated.
Cryptographic systems may use different types of keys, with some systems using more than one. These may include symmetric keys or asymmetric keys. In a symmetric key algorithm the keys involved are identical for both encrypting and decrypting a message. Keys must be chosen carefully, and distributed and stored securely. Asymmetric keys, also known aspublic keys, in contrast are two distinct keys that are mathematically linked. They are typically used together to communicate. Public key infrastructure (PKI), the implementation of public key cryptography, requires an organization to establish an infrastructure to create and manage public and private key pairs along with digital certificates.[3]
The starting point in any certificate and private key management strategy is to create a comprehensive inventory of all certificates, their locations and responsible parties. This is not a trivial matter because certificates from a variety of sources are deployed in a variety of locations by different individuals and teams - it's simply not possible to rely on a list from a singlecertificate authority. Certificates that are not renewed and replaced before they expire can cause serious downtime and outages. Some other considerations:
Once keys are inventoried, key management typically consists of three steps: exchange, storage and use.
Prior to any secured communication, users must set up the details of the cryptography. In some instances this may require exchanging identical keys (in the case of a symmetric key system). In others it may require possessing the other party's public key. While public keys can be openly exchanged (their corresponding private key is kept secret), symmetric keys must be exchanged over a secure communication channel. Formerly, exchange of such a key was extremely troublesome, and was greatly eased by access to secure channels such as adiplomatic bag.Clear textexchange of symmetric keys would enable any interceptor to immediately learn the key, and any encrypted data.
The advance of public key cryptography in the 1970s has made the exchange of keys less troublesome. Since theDiffie-Hellmankey exchange protocol was published in 1975, it has become possible to exchange a key over an insecure communications channel, which has substantially reduced the risk of key disclosure during distribution. It is possible, using something akin to abook code, to include key indicators as clear text attached to an encrypted message. The encryption technique used byRichard Sorge's code clerk was of this type, referring to a page in a statistical manual, though it was in fact a code. TheGerman ArmyEnigmasymmetric encryption key was a mixed type early in its use; the key was a combination of secretly distributed key schedules and a user chosen session key component for each message.
In more modern systems, such asOpenPGPcompatible systems, a session key for a symmetric key algorithm is distributed encrypted by anasymmetric key algorithm. This approach avoids even the necessity for using a key exchange protocol like Diffie-Hellman key exchange.
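A minimal sketch of the same hybrid idea (not the OpenPGP message format itself), using the third-party Python cryptography package: a fresh symmetric session key encrypts the bulk data, and the recipient's RSA public key wraps that session key. All names and parameter choices here are illustrative.

```python
# Hybrid encryption sketch: symmetric session key for the data,
# RSA-OAEP to wrap the session key for the recipient.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair (in practice the public key is already published).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encrypt the message under a fresh symmetric session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual message")

# Wrap the session key with the recipient's public key (RSA-OAEP).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# The recipient unwraps the session key and decrypts the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```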
Another method of key exchange involves encapsulating one key within another. Typically a master key is generated and exchanged using some secure method. This method is usually cumbersome or expensive (breaking a master key into multiple parts and sending each with a trusted courier for example) and not suitable for use on a larger scale. Once the master key has been securely exchanged, it can then be used to securely exchange subsequent keys with ease. This technique is usually termedkey wrap. A common technique usesblock ciphersand cryptographichash functions.[6]
A related method is to exchange a master key (sometimes termed a root key) and derive subsidiary keys as needed from that key and some other data (often referred to as diversification data). The most common use for this method is probably insmartcard-based cryptosystems, such as those found in banking cards. The bank or credit network embeds their secret key into the card's secure key storage during card production at a secured production facility. Then at thepoint of salethe card and card reader are both able to derive a common set of session keys based on the shared secret key and card-specific data (such as the card serial number). This method can also be used when keys must be related to each other (i.e., departmental keys are tied to divisional keys, and individual keys tied to departmental keys). However, tying keys to each other in this way increases the damage which may result from a security breach as attackers will learn something about more than one key. This reduces entropy, with regard to an attacker, for each key involved.
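A rough sketch of such diversification follows (the master key and card serial number are made up, and HMAC-SHA-256 stands in for whatever derivation function a real card scheme actually specifies):

```python
# Derive a card-specific key from a master key plus diversification data.
import hashlib
import hmac

master_key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # illustrative only
card_serial = b"CARD-0001-2345"                                  # diversification data

card_key = hmac.new(master_key, card_serial, hashlib.sha256).digest()
print(card_key.hex())

# The issuer and the terminal both repeat this derivation, so they agree on
# the card-specific key without it ever being transmitted.
```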
Arecent methoduses anoblivious pseudorandom functionto issue keys without the key management system ever being in a position to see the keys.[7]
However distributed, keys must be stored securely to maintain communications security. Security is a big concern[8][9]and hence there are various techniques in use to do so. Likely the most common is that an encryption application manages keys for the user and depends on an access password to control use of the key. Likewise, in the case of smartphone keyless access platforms, they keep all identifying door information off mobile phones and servers and encrypt all data, where just like low-tech keys, users give codes only to those they trust.[8]
In terms of regulation, there are few that address key storage in depth. "Some contain minimal guidance like 'don’t store keys with encrypted data' or suggest that 'keys should be kept securely.'" The notable exceptions to that are PCI DSS 3.2.1, NIST 800-53 and NIST 800–57.[9]
For optimal security, keys may be stored in aHardware Security Module(HSM) or protected using technologies such asTrusted Execution Environment(TEE, e.g.Intel SGX) orMulti-Party Computation(MPC). Additional alternatives include utilizingTrusted Platform Modules(TPM),[10]virtual HSMs, aka "Poor Man's Hardware Security Modules" (pmHSM),[11]or non-volatileField-Programmable-Gate-Arrays(FPGA) with supportingSystem-on-Chipconfigurations.[12]In order to verify the integrity of a stored key without compromising its actual value, aKCV(key check value) algorithm can be used.
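One common KCV construction encrypts an all-zero block under the key and keeps only the first few bytes of the result, so two parties can confirm they hold the same key without revealing it. A sketch using the third-party Python cryptography package (the key value is illustrative):

```python
# Key check value (KCV): encrypt a zero block and keep the first three bytes.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(range(16))   # illustrative 128-bit AES key
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
kcv = (encryptor.update(b"\x00" * 16) + encryptor.finalize())[:3]
print(kcv.hex())
```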
The major issue is length of time a key is to be used, and therefore frequency of replacement. Because it increases any attacker's required effort, keys should be frequently changed. This also limits loss of information, as the number of stored encrypted messages which will become readable when a key is found will decrease as the frequency of key change increases. Historically, symmetric keys have been used for long periods in situations in which key exchange was very difficult or only possible intermittently. Ideally, the symmetric key should change with each message or interaction, so that only that message will become readable if the key is learned (e.g., stolen, cryptanalyzed, or social engineered).
Several challenges IT organizations face when trying to control and manage their encryption keys are:
Key management compliance refers to the oversight, assurance, and capability of being able to demonstrate that keys are securely managed. This includes the following individual compliance domains:
Compliancecan be achieved with respect to national and internationaldata protectionstandards and regulations, such asPayment Card Industry Data Security Standard,Health Insurance Portability and Accountability Act,Sarbanes–Oxley Act, orGeneral Data Protection Regulation.[15]
Akey management system(KMS), also known as acryptographic key management system(CKMS) orenterprise key management system(EKMS), is an integrated approach for generating, distributing and managingcryptographic keysfor devices and applications. It may cover all aspects of security - from the secure generation of keys, through the secure exchange of keys, to secure key handling and storage on the client. Thus, a KMS includes the backend functionality forkey generation, distribution, and replacement as well as the client functionality for injecting keys, storing and managing keys on devices.
Many specific applications have developed their own key management systems with home grown protocols. However, as systems become more interconnected keys need to be shared between those different systems. To facilitate this, key management standards have evolved to define the protocols used to manage and exchange cryptographic keys and related information.
KMIP is an extensible key management protocol that has been developed by many organizations working within theOASIS standards body. The first version was released in 2010, and it has been further developed by an active technical committee.
The protocol allows for the creation of keys and their distribution among disparate software systems that need to utilize them. It covers the full key life cycle of both symmetric and asymmetric keys in a variety of formats, the wrapping of keys, provisioning schemes, and cryptographic operations as well as meta data associated with the keys.
The protocol is backed by an extensive series of test cases, and interoperability testing is performed between compliant systems each year.
A list of some 80 products that conform to the KMIP standard can be found onthe OASIS website.
The security policy of a key management system provides the rules that are to be used to protect keys and metadata that the key management system supports. As defined by the National Institute of Standards and Technology (NIST), the policy shall establish and specify rules for this information that will protect its:[14]
This protection covers the complete key life-cycle from the time the key becomes operational to its elimination.[1]
Bring your own encryption(BYOE)—also calledbring your own key(BYOK)—refers to a cloud-computing security model to allow public-cloud customers to use their own encryption software and manage their own encryption keys.
This security model is usually considered a marketing stunt, as critical keys are being handed over to third parties (cloud providers) and key owners are still left with the operational burden of generating, rotating and sharing their keys.
Apublic-key infrastructureis a type of key management system that uses hierarchicaldigital certificatesto provide authentication, and public keys to provide encryption. PKIs are used in World Wide Web traffic, commonly in the form ofSSLandTLS.
Group key management means managing the keys in a group communication. Most of the group communications usemulticastcommunication so that if the message is sent once by the sender, it will be received by all the users. The main problem in multicast group communication is its security. In order to improve the security, various keys are given to the users. Using the keys, the users can encrypt their messages and send them secretly. IETF.org released RFC 4046, entitled Multicast Security (MSEC) Group Key Management Architecture, which discusses the challenges of group key management.[55]
|
https://en.wikipedia.org/wiki/Key_management
|
Innumber theory,Euler's theorem(also known as theFermat–Euler theoremorEuler's totient theorem) states that, ifnandaarecoprimepositive integers, thenaφ(n){\displaystyle a^{\varphi (n)}}is congruent to1{\displaystyle 1}modulon, whereφ{\displaystyle \varphi }denotesEuler's totient function; that isaφ(n)≡1(modn).{\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}}.}
In 1736,Leonhard Eulerpublished a proof ofFermat's little theorem[1](stated byFermatwithout proof), which is the restriction of Euler's theorem to the case wherenis a prime number. Subsequently, Euler presented other proofs of the theorem, culminating with his paper of 1763, in which he proved a generalization to the case wherenis not prime.[2]
The converse of Euler's theorem is also true: if the above congruence is true, thena{\displaystyle a}andn{\displaystyle n}must be coprime.
The theorem is further generalized by some ofCarmichael's theorems.
The theorem may be used to easily reduce large powers modulon{\displaystyle n}. For example, consider finding the ones place decimal digit of7222{\displaystyle 7^{222}}, i.e.7222(mod10){\displaystyle 7^{222}{\pmod {10}}}. The integers 7 and 10 are coprime, andφ(10)=4{\displaystyle \varphi (10)=4}. So Euler's theorem yields74≡1(mod10){\displaystyle 7^{4}\equiv 1{\pmod {10}}}, and we get7222≡74×55+2≡(74)55×72≡155×72≡49≡9(mod10){\displaystyle 7^{222}\equiv 7^{4\times 55+2}\equiv (7^{4})^{55}\times 7^{2}\equiv 1^{55}\times 7^{2}\equiv 49\equiv 9{\pmod {10}}}.
In general, when reducing a power ofa{\displaystyle a}modulon{\displaystyle n}(wherea{\displaystyle a}andn{\displaystyle n}are coprime), one needs to work moduloφ(n){\displaystyle \varphi (n)}in the exponent ofa{\displaystyle a}: ifx≡y(modφ(n)){\displaystyle x\equiv y{\pmod {\varphi (n)}}}, thenax≡ay(modn){\displaystyle a^{x}\equiv a^{y}{\pmod {n}}}.
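The worked example can be checked directly in Python, reducing the exponent modulo φ(10) = 4 first:

```python
# Verify the example: the ones digit of 7**222 via Euler's theorem.
from math import gcd

a, n, exponent = 7, 10, 222
phi_n = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # phi(10) = 4

reduced = exponent % phi_n                                   # 222 mod 4 = 2
assert pow(a, exponent, n) == pow(a, reduced, n) == 9
print(pow(a, exponent, n))                                   # 9
```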
Euler's theorem underlies theRSA cryptosystem, which is widely used inInternetcommunications. In this cryptosystem, Euler's theorem is used withnbeing a product of two largeprime numbers, and the security of the system is based on the difficulty offactoringsuch an integer.
1. Euler's theorem can be proven using concepts from thetheory of groups:[3]The residue classes modulonthat are coprime tonform a group under multiplication (see the articleMultiplicative group of integers modulonfor details). Theorderof that group isφ(n).Lagrange's theoremstates that the order of any subgroup of afinite groupdivides the order of the entire group, in this caseφ(n). Ifais any numbercoprimetonthenais in one of these residue classes, and its powersa,a2, ... ,akmodulonform a subgroup of the group of residue classes, withak≡ 1 (modn). Lagrange's theorem sayskmust divideφ(n), i.e. there is an integerMsuch thatkM=φ(n). This then implies,aφ(n)=akM=(ak)M≡1M=1(modn).{\displaystyle a^{\varphi (n)}=a^{kM}=(a^{k})^{M}\equiv 1^{M}=1{\pmod {n}}.}
2. There is also a direct proof:[4][5]LetR= {x1,x2, ... ,xφ(n)}be areduced residue system(modn) and letabe any integer coprime ton. The proof hinges on the fundamental fact that multiplication byapermutes thexi: in other words ifaxj≡axk(modn)thenj=k. (This law of cancellation is proved in the articleMultiplicative group of integers modulon.[6]) That is, the setsRandaR= {ax1,ax2, ... ,axφ(n)}, considered as sets of congruence classes (modn), are identical (as sets—they may be listed in different orders), so the product of all the numbers inRis congruent (modn) to the product of all the numbers inaR:x1x2⋯xφ(n)≡(ax1)(ax2)⋯(axφ(n))=aφ(n)(x1x2⋯xφ(n))(modn).{\displaystyle x_{1}x_{2}\cdots x_{\varphi (n)}\equiv (ax_{1})(ax_{2})\cdots (ax_{\varphi (n)})=a^{\varphi (n)}(x_{1}x_{2}\cdots x_{\varphi (n)}){\pmod {n}}.}Cancelling the common factorx1x2⋯xφ(n), which is coprime ton, yieldsaφ(n)≡ 1 (modn).
|
https://en.wikipedia.org/wiki/Euler%27s_theorem
|
TheRSA(Rivest–Shamir–Adleman)cryptosystemis apublic-key cryptosystem, one of the oldest widely used for secure data transmission. Theinitialism"RSA" comes from the surnames ofRon Rivest,Adi ShamirandLeonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 atGovernment Communications Headquarters(GCHQ), the Britishsignals intelligenceagency, by the English mathematicianClifford Cocks. That system wasdeclassifiedin 1997.[2]
In a public-keycryptosystem, theencryption keyis public and distinct from thedecryption key, which is kept secret (private).
An RSA user creates and publishes a public key based on two largeprime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decrypted by someone who knows the private key.[1]
The security of RSA relies on the practical difficulty offactoringthe product of two largeprime numbers, the "factoring problem". Breaking RSA encryption is known as theRSA problem. Whether it is as difficult as the factoring problem is an open question.[3]There are no published methods to defeat the system if a large enough key is used.
RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys forsymmetric-keycryptography, which are then used for bulk encryption–decryption.
The idea of an asymmetric public-private key cryptosystem is attributed toWhitfield DiffieandMartin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.[4]Moreover, likeDiffie-Hellman, RSA is based onmodular exponentiation.
Ron Rivest,Adi Shamir, andLeonard Adlemanat theMassachusetts Institute of Technologymade several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements.[5]In April 1977, they spentPassoverat the house of a student and drank a good deal of wine before returning to their homes at around midnight.[6]Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in same order as their paper.[7]
Clifford Cocks, an Englishmathematicianworking for theBritishintelligence agencyGovernment Communications Headquarters(GCHQ), described a similar system in an internal document in 1973.[8]However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to its top-secret classification.
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous tosimplified DES.[9][10][11][12][13]
Apatentdescribing the RSA algorithm was granted toMITon 20 September 1983:U.S. patent 4,405,829"Cryptographic communications system and method". FromDWPI's abstract of the patent:
The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).
A detailed description of the algorithm was published in August 1977, inScientific American'sMathematical Gamescolumn.[7]This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside theUnited States. Had Cocks' work been publicly known, a patent in the United States would not have been legal either.
When the patent was issued,terms of patentwere 17 years. The patent was about to expire on 21 September 2000, butRSA Securityreleased the algorithm to the public domain on 6 September 2000.[14]
The RSA algorithm involves four steps:keygeneration, key distribution, encryption, and decryption.
A basic principle behind RSA is the observation that it is practical to find three very large positive integerse,d, andn, such that for all integersm(0 ≤m<n), both(me)d{\displaystyle (m^{e})^{d}}andm{\displaystyle m}have the sameremainderwhen divided byn{\displaystyle n}(they arecongruent modulon{\displaystyle n}):(me)d≡m(modn).{\displaystyle (m^{e})^{d}\equiv m{\pmod {n}}.}However, when given onlyeandn, it is extremely difficult to findd.
The integersnandecomprise the public key,drepresents the private key, andmrepresents the message. Themodular exponentiationtoeanddcorresponds to encryption and decryption, respectively.
In addition, because the two exponentscan be swapped, the private and public key can also be swapped, allowing for messagesigning and verificationusing the same algorithm.
The keys for the RSA algorithm are generated in the following way:
1. Choose two large prime numberspandq, which are kept secret.
2. Computen=pq;nis used as the modulus for both the public and the private key.
3. Computeλ(n) = lcm(p− 1,q− 1), whereλisCarmichael's totient function.
4. Choose an integeresuch that1 <e<λ(n)andgcd(e,λ(n)) = 1;eis released as part of the public key.
5. Determinedasd≡e−1(modλ(n)); that is,dis the modular multiplicative inverse ofemoduloλ(n), and it is kept secret.
Thepublic keyconsists of the modulusnand the public (or encryption) exponente. Theprivate keyconsists of the private (or decryption) exponentd, which must be kept secret.p,q, andλ(n)must also be kept secret because they can be used to calculated. In fact, they can all be discarded afterdhas been computed.[16]
In the original RSA paper,[1]theEuler totient functionφ(n) = (p− 1)(q− 1)is used instead ofλ(n)for calculating the private exponentd. Sinceφ(n)is always divisible byλ(n), the algorithm works as well. The possibility of usingEuler totient functionresults also fromLagrange's theoremapplied to themultiplicative group of integers modulopq. Thus anydsatisfyingd⋅e≡ 1 (modφ(n))also satisfiesd⋅e≡ 1 (modλ(n)). However, computingdmoduloφ(n)will sometimes yield a result that is larger than necessary (i.e.d>λ(n)). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponentdat all, rather than using the optimized decryption methodbased on the Chinese remainder theoremdescribed below), but some standards such asFIPS 186-4(Section B.3.1) may require thatd<λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced moduloλ(n)to obtain a smaller equivalent exponent.
Since any common factors of(p− 1)and(q− 1)are present in the factorisation ofn− 1=pq− 1=(p− 1)(q− 1) + (p− 1) + (q− 1),[17][self-published source?]it is recommended that(p− 1)and(q− 1)have only very small common factors, if any, besides the necessary 2.[1][18][19][failed verification][20][failed verification]
Note: The authors of the original RSA paper carry out the key generation by choosingdand then computingeas themodular multiplicative inverseofdmoduloφ(n), whereas most current implementations of RSA, such as those followingPKCS#1, do the reverse (chooseeand computed). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.[1][21]
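As a rough sketch (toy parameters only; real keys use primes hundreds of digits long), the steps above can be traced in Python with the same small primes that appear in the worked example below, choosing e first as modern implementations do:

```python
# Toy RSA key generation with deliberately tiny primes (for illustration only).
from math import gcd

p, q = 61, 53
n = p * q                                           # 3233, the public modulus
lambda_n = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(60, 52) = 780

e = 17                                              # public exponent, coprime to lambda(n)
assert gcd(e, lambda_n) == 1
d = pow(e, -1, lambda_n)                            # private exponent (Python 3.8+): 413

print(n, e, d)                                      # 3233 17 413
```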
Suppose thatBobwants to send information toAlice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message.
To enable Bob to send his encrypted messages, Alice transmits her public key(n,e)to Bob via a reliable, but not necessarily secret, route. Alice's private key(d)is never distributed.
After Bob obtains Alice's public key, he can send a messageMto Alice.
To do it, he first turnsM(strictly speaking, the un-padded plaintext) into an integerm(strictly speaking, thepaddedplaintext), such that0 ≤m<nby using an agreed-upon reversible protocol known as apadding scheme. He then computes the ciphertextc, using Alice's public keye, corresponding to
c≡me(modn).{\displaystyle c\equiv m^{e}{\pmod {n}}.}
This can be done reasonably quickly, even for very large numbers, usingmodular exponentiation. Bob then transmitscto Alice. Note that at least nine values ofmwill yield a ciphertextcequal tom,[a]but this is very unlikely to occur in practice.
Alice can recovermfromcby using her private key exponentdby computing
cd≡(me)d≡m(modn).{\displaystyle c^{d}\equiv (m^{e})^{d}\equiv m{\pmod {n}}.}
Givenm, she can recover the original messageMby reversing the padding scheme.
Here is an example of RSA encryption and decryption:[b]The example usesp= 61andq= 53, giving the modulusn=pq= 3233andλ(n) = lcm(60, 52) = 780; the public exponent ise= 17and the private exponent isd= 413, the modular multiplicative inverse of 17 modulo 780.
Thepublic keyis(n= 3233,e= 17). For a paddedplaintextmessagem, the encryption function isc(m)=memodn=m17mod3233.{\displaystyle {\begin{aligned}c(m)&=m^{e}{\bmod {n}}\\&=m^{17}{\bmod {3}}233.\end{aligned}}}
Theprivate keyis(n= 3233,d= 413). For an encryptedciphertextc, the decryption function ism(c)=cdmodn=c413mod3233.{\displaystyle {\begin{aligned}m(c)&=c^{d}{\bmod {n}}\\&=c^{413}{\bmod {3}}233.\end{aligned}}}
For instance, in order to encryptm= 65, one calculatesc=6517mod3233=2790.{\displaystyle c=65^{17}{\bmod {3}}233=2790.}
To decryptc= 2790, one calculatesm=2790413mod3233=65.{\displaystyle m=2790^{413}{\bmod {3}}233=65.}
Both of these calculations can be computed efficiently using thesquare-and-multiply algorithmformodular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factorn= 3233(obtained from the freely available public key) back to the primespandq.e, also from the public key, is then inverted to getd, thus acquiring the private key.
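Both directions can be checked in a couple of lines (Python's built-in three-argument pow performs exactly this square-and-multiply modular exponentiation):

```python
# Verify the worked example: encrypt m = 65 and decrypt the result.
n, e, d = 3233, 17, 413
m = 65

c = pow(m, e, n)             # 65**17 mod 3233 = 2790
assert c == 2790
assert pow(c, d, n) == m     # 2790**413 mod 3233 recovers 65
print(c, pow(c, d, n))
```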
Practical implementations use theChinese remainder theoremto speed up the calculation using modulus of factors (modpqusing modpand modq).
The valuesdp,dqandqinv, which are part of the private key are computed as follows:dp=dmod(p−1)=413mod(61−1)=53,dq=dmod(q−1)=413mod(53−1)=49,qinv=q−1modp=53−1mod61=38⇒(qinv×q)modp=38×53mod61=1.{\displaystyle {\begin{aligned}d_{p}&=d{\bmod {(}}p-1)=413{\bmod {(}}61-1)=53,\\d_{q}&=d{\bmod {(}}q-1)=413{\bmod {(}}53-1)=49,\\q_{\text{inv}}&=q^{-1}{\bmod {p}}=53^{-1}{\bmod {6}}1=38\\&\Rightarrow (q_{\text{inv}}\times q){\bmod {p}}=38\times 53{\bmod {6}}1=1.\end{aligned}}}
Here is howdp,dqandqinvare used for efficient decryption (encryption is efficient by choice of a suitabledandepair):m1=cdpmodp=279053mod61=4,m2=cdqmodq=279049mod53=12,h=(qinv×(m1−m2))modp=(38×−8)mod61=1,m=m2+h×q=12+1×53=65.{\displaystyle {\begin{aligned}m_{1}&=c^{d_{p}}{\bmod {p}}=2790^{53}{\bmod {6}}1=4,\\m_{2}&=c^{d_{q}}{\bmod {q}}=2790^{49}{\bmod {5}}3=12,\\h&=(q_{\text{inv}}\times (m_{1}-m_{2})){\bmod {p}}=(38\times -8){\bmod {6}}1=1,\\m&=m_{2}+h\times q=12+1\times 53=65.\end{aligned}}}
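The same CRT computation can be replayed directly (a sketch using the numbers above):

```python
# CRT-based RSA decryption of c = 2790 with p = 61, q = 53, d = 413.
p, q, c, d = 61, 53, 2790, 413

d_p = d % (p - 1)              # 53
d_q = d % (q - 1)              # 49
q_inv = pow(q, -1, p)          # 38

m1 = pow(c, d_p, p)            # 4
m2 = pow(c, d_q, q)            # 12
h = (q_inv * (m1 - m2)) % p    # 1
m = m2 + h * q                 # 65
print(m)
```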
SupposeAliceusesBob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used tosigna message.
Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces ahash valueof the message, raises it to the power ofd(modulon) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power ofe(modulon) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent.
This works because ofexponentiationrules:h=hash(m),{\displaystyle h=\operatorname {hash} (m),}(he)d=hed=hde=(hd)e≡h(modn).{\displaystyle (h^{e})^{d}=h^{ed}=h^{de}=(h^{d})^{e}\equiv h{\pmod {n}}.}
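A toy sketch of the sign-then-verify flow with the same small key; the hash function below is a trivial stand-in invented for the example, not a real cryptographic hash:

```python
# Toy RSA signing and verification (illustration only).
n, e, d = 3233, 17, 413

def toy_hash(message: bytes) -> int:
    # Placeholder hash for the example; real systems use SHA-2 or SHA-3.
    return sum(message) % n

message = b"attack at dawn"
signature = pow(toy_hash(message), d, n)      # signer uses the private exponent

# The verifier recomputes the hash and compares it with the recovered value.
assert pow(signature, e, n) == toy_hash(message)
print("signature verified")
```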
Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to:
The proof of the correctness of RSA is based onFermat's little theorem, stating thatap− 1≡ 1 (modp)for any integeraand primep, not dividinga.[note 1]
We want to show that(me)d≡m(modpq){\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}}for every integermwhenpandqare distinct prime numbers andeanddare positive integers satisfyinged≡ 1 (modλ(pq)).
Sinceλ(pq) =lcm(p− 1,q− 1)is, by construction, divisible by bothp− 1andq− 1, we can writeed−1=h(p−1)=k(q−1){\displaystyle ed-1=h(p-1)=k(q-1)}for some nonnegative integershandk.[note 2]
To check whether two numbers, such asmedandm, are congruentmodpq, it suffices (and in fact is equivalent) to check that they are congruentmodpandmodqseparately.[note 3]
To showmed≡m(modp), we consider two cases:
The verification thatmed≡m(modq)proceeds in a completely analogous way:
This completes the proof that, for any integerm, and integerse,dsuch thated≡ 1 (modλ(pq)),(me)d≡m(modpq).{\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}.}
Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead onEuler's theorem.
We want to show thatmed≡m(modn), wheren=pqis a product of two different prime numbers, andeanddare positive integers satisfyinged≡ 1 (modφ(n)). Sinceeanddare positive, we can writeed= 1 +hφ(n)for some non-negative integerh.Assumingthatmis relatively prime ton, we havemed=m1+hφ(n)=m(mφ(n))h≡m(1)h≡m(modn),{\displaystyle m^{ed}=m^{1+h\varphi (n)}=m(m^{\varphi (n)})^{h}\equiv m(1)^{h}\equiv m{\pmod {n}},}
where the second-last congruence follows fromEuler's theorem.
More generally, for anyeanddsatisfyinged≡ 1 (modλ(n)), the same conclusion follows fromCarmichael's generalization of Euler's theorem, which states thatmλ(n)≡ 1 (modn)for allmrelatively prime ton.
Whenmis not relatively prime ton, the argument just given is invalid. This is highly improbable (only a proportion of1/p+ 1/q− 1/(pq)numbers have this property), but even in this case, the desired congruence is still true. Eitherm≡ 0 (modp)orm≡ 0 (modq), and these cases can be treated using the previous proof.
There are a number of attacks against plain RSA as described below.
To avoid these problems, practical RSA implementations typically embed some form of structured, randomizedpaddinginto the valuembefore encrypting it. This padding ensures thatmdoes not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such asPKCS#1have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintextmwith some number of additional bits, the size of the un-padded messageMmust be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, atCrypto1998, Bleichenbacher showed that this version is vulnerable to a practicaladaptive chosen-ciphertext attack. Furthermore, atEurocrypt2000, Coron et al.[25]showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard includeOptimal Asymmetric Encryption Padding(OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS).
Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted (U.S. patent 6,266,771andU.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents.[original research?]Note that using different RSA key pairs for encryption and signing is potentially more secure.[26]
For efficiency, many popular crypto libraries (such asOpenSSL,Javaand.NET) use the following optimization, based on theChinese remainder theorem, for decryption and signing.[27][citation needed]The following values are precomputed and stored as part of the private key:dP=dmod (p− 1),dQ=dmod (q− 1), andqinv=q−1modp.
These values allow the recipient to compute the exponentiationm=cd(modpq)more efficiently as follows:m1=cdP(modp){\displaystyle m_{1}=c^{d_{P}}{\pmod {p}}},m2=cdQ(modq){\displaystyle m_{2}=c^{d_{Q}}{\pmod {q}}},h=qinv(m1−m2)(modp){\displaystyle h=q_{\text{inv}}(m_{1}-m_{2}){\pmod {p}}},[c]m=m2+hq{\displaystyle m=m_{2}+hq}.
This is more efficient than computingexponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
The security of the RSA cryptosystem is based on two mathematical problems: the problem offactoring large numbersand theRSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems arehard, i.e., no efficient algorithm exists for solving them. Providing security againstpartialdecryption may require the addition of a securepadding scheme.[28]
TheRSA problemis defined as the task of takingeth roots modulo a compositen: recovering a valuemsuch thatc≡me(modn), where(n,e)is an RSA public key, andcis an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulusn. With the ability to recover prime factors, an attacker can compute the secret exponentdfrom a public key(n,e), then decryptcusing the standard procedure. To accomplish this, an attacker factorsnintopandq, and computeslcm(p− 1,q− 1)that allows the determination ofdfrome. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; seeinteger factorizationfor a discussion of this problem.
The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[29]By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-coreAthlon64with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required and about 2.5 gigabytes of RAM for the sieving process.
Rivest, Shamir, and Adleman noted[1]that Miller has shown that – assuming the truth of theextended Riemann hypothesis– findingdfromnandeis as hard as factoringnintopandq(up to a polynomial time difference).[30]However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.
As of 2020[update], the largest publicly known factoredRSA numberhad 829 bits (250 decimal digits,RSA-250).[31]Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003,RSA Securityestimated that 1024-bit keys were likely to become crackable by 2010.[32]As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[33]It is generally presumed that RSA is secure ifnis sufficiently large, outside of quantum computing.
Ifnis 300bitsor shorter, it can be factored in a few hours on apersonal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999, whenRSA-155was factored by using several hundred computers, and these are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011.[34]A theoretical hardware device namedTWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.[32]
In 1994,Peter Shorshowed that aquantum computer– if one could ever be practically created for the purpose – would be able to factor inpolynomial time, breaking RSA; seeShor's algorithm.
Finding the large primespandqis usually done by testing random numbers of the correct size with probabilisticprimality teststhat quickly eliminate virtually all of the nonprimes.
The numberspandqshould not be "too close", lest theFermat factorizationfornbe successful. Ifp−qis less than2n1/4(n=p⋅q, which even for "small" 1024-bit values ofnis3×1077), solving forpandqis trivial. Furthermore, if eitherp− 1orq− 1has only small prime factors,ncan be factored quickly byPollard'sp− 1 algorithm, and hence such values ofporqshould be discarded.
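Fermat's factorization itself is only a few lines: starting near √n, it searches for an a with a² − n a perfect square. A sketch applied to the toy modulus 3233 (whose factors 61 and 53 are, by real-world standards, far too close together):

```python
# Fermat factorization: fast when the two prime factors are close together.
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:            # a**2 - n is a perfect square
            return a - b, a + b    # n = (a - b)(a + b)
        a += 1

print(fermat_factor(3233))         # (53, 61), found on the first iteration
```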
It is important that the private exponentdbe large enough. Michael J. Wiener showed that ifpis betweenqand2q(which is quite typical) andd<n1/4/3, thendcan be computed efficiently fromnande.[35]
There is no known attack against small public exponents such ase= 3, provided that the proper padding is used.Coppersmith's attackhas many applications in attacking RSA specifically if the public exponenteis small and if the encrypted message is short and not padded.65537is a commonly used value fore; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponentsesmaller than 65537, but does not state a reason for this restriction.
In October 2017, a team of researchers fromMasaryk Universityannounced theROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library fromInfineonknown as RSALib. A large number ofsmart cardsandtrusted platform modules(TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.[36]
A cryptographically strongrandom number generator, which has been properly seeded with adequate entropy, must be used to generate the primespandq. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 byArjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.[37][38][self-published source?]
They exploited a weakness unique to cryptosystems based on integer factorization. Ifn=pqis one public key, andn′ =p′q′is another, then if by chancep=p′(butqis not equal toq'), then a simple computation ofgcd(n,n′) =pfactors bothnandn', totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to chooseqgivenp, instead of choosingpandqindependently.
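The attack itself is a single greatest-common-divisor computation; a sketch with toy moduli that deliberately reuse the prime 61:

```python
# Two RSA moduli that accidentally share a prime factor are both broken by one gcd.
from math import gcd

n1 = 61 * 53     # one public modulus
n2 = 61 * 71     # another modulus generated with the same prime 61

p = gcd(n1, n2)  # 61 -- both moduli are now fully factored
print(p, n1 // p, n2 // p)
```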
Nadia Heningerwas part of a group that did a similar experiment. They used an idea ofDaniel J. Bernsteinto compute the GCD of each RSA keynagainst the product of all the other keysn' they had found (a 729-million-digit number), instead of computing eachgcd(n,n′)separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size.
Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise oratmospheric noisefrom a radio receiver tuned between stations should solve the problem.[39]
Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.
Kocherdescribed a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption keydquickly. This attack can also be applied against the RSA signature scheme. In 2003,BonehandBrumleydemonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from aSecure Sockets Layer(SSL)-enabled webserver).[40]This attack takes advantage of information leaked by theChinese remainder theoremoptimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known ascryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computingcd(modn), Alice first chooses a secret random valuerand computes(rec)d(modn). The result of this computation, after applyingEuler's theorem, isrcd(modn), and so the effect ofrcan be removed by multiplying by its inverse. A new value ofris chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
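A sketch of blinding with the toy key from the earlier example (the blinding factor r must be coprime to n and is drawn from a cryptographic random source; real implementations operate on full-size keys):

```python
# RSA blinding: the private-key operation sees r^e * c, not c itself.
from math import gcd
import secrets

n, e, d = 3233, 17, 413
c = 2790                              # ciphertext of m = 65

while True:
    r = secrets.randbelow(n - 2) + 2  # random blinding factor
    if gcd(r, n) == 1:
        break

blinded = (pow(r, e, n) * c) % n      # what the decryption routine actually processes
m_blinded = pow(blinded, d, n)        # equals r * m (mod n)
m = (m_blinded * pow(r, -1, n)) % n   # remove the blinding factor
print(m)                              # 65
```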
In 1998,Daniel Bleichenbacherdescribed the first practicaladaptive chosen-ciphertext attackagainst RSA-encrypted messages using the PKCS #1 v1padding scheme(a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of theSecure Sockets Layerprotocol and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such asOptimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.
A variant of this attack, dubbed "BERserk", came back in 2014.[41][42]It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.
A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use abranch predictorto determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implementsimultaneous multithreading(SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.
Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis",[43]the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.
A power-fault attack on RSA implementations was described in 2010.[44]The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.
There are many details to keep in mind in order to implement RSA securely (strongPRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.[45]
Some cryptography libraries that provide support for RSA include:
|
https://en.wikipedia.org/wiki/RSA_algorithm
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
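As a sketch of this arithmetic, the following computes in GF(7); the helper names are illustrative, and the inverse is obtained from the extended Euclidean algorithm mentioned above.

```python
# Sketch of arithmetic in the prime field GF(p): elements are the integers 0..p-1,
# operations reduce modulo p, and inverses come from the extended Euclidean algorithm.
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a: int, p: int) -> int:
    """Multiplicative inverse of a in GF(p); assumes p is prime and a % p != 0."""
    g, x, _ = extended_gcd(a % p, p)
    if g != 1:
        raise ZeroDivisionError("element is not invertible")
    return x % p

p = 7
assert (5 + 4) % p == 2 and (5 * 4) % p == 6   # addition and multiplication in GF(7)
assert (3 * inv_mod(3, p)) % p == 1            # 3^(-1) = 5 in GF(7)
```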
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }, to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater than or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (this is what the Euclidean division does).
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which make the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
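Since the operation tables referred to above are not reproduced in this text, the following sketch regenerates the addition and multiplication tables from the construction GF(2)[X]/(X² + X + 1); the pair encoding and the printing format are illustrative choices.

```python
# Sketch: generate the GF(4) addition and multiplication tables from
# GF(2)[X]/(X^2 + X + 1).  An element b0 + b1*alpha is stored as the pair (b0, b1).
elements = [(0, 0), (1, 0), (0, 1), (1, 1)]          # 0, 1, alpha, 1 + alpha
names = {(0, 0): "0", (1, 0): "1", (0, 1): "a", (1, 1): "1+a"}

def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])                # coefficient-wise XOR (characteristic 2)

def mul(x, y):
    # (x0 + x1*a)(y0 + y1*a) = x0*y0 + (x0*y1 + x1*y0)*a + x1*y1*a^2, with a^2 = 1 + a
    c0 = (x[0] & y[0]) ^ (x[1] & y[1])
    c1 = (x[0] & y[1]) ^ (x[1] & y[0]) ^ (x[1] & y[1])
    return (c0, c1)

for op, table in (("+", add), ("*", mul)):
    print(f"table for {op}:")
    for x in elements:
        print("  ", [names[table(x, y)] for y in elements])
```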
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
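The four formulas above translate directly into code. In the following sketch, the choices p = 7 and r = 3 (a quadratic non-residue modulo 7, since the squares modulo 7 are 1, 2, 4) are illustrative.

```python
# Sketch of GF(p^2) arithmetic following the formulas above; an element a + b*alpha
# is stored as the pair (a, b), with alpha^2 = r.
p, r = 7, 3

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(x):
    a, b = x
    norm_inv = pow((a * a - r * b * b) % p, -1, p)   # (a^2 - r*b^2)^(-1) in GF(p)
    return ((a * norm_inv) % p, (-b * norm_inv) % p)

x = (2, 5)                        # the element 2 + 5*alpha
assert mul(x, inv(x)) == (1, 0)   # x * x^(-1) = 1
```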
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
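A common programming representation packs the coefficients a, b, c, d into the bits of a 4-bit integer. The following sketch multiplies such elements by carry-less multiplication followed by reduction using α⁴ = α + 1, and checks the count of primitive elements stated above; the function names are illustrative.

```python
# Sketch: GF(16) with elements a + b*alpha + c*alpha^2 + d*alpha^3 packed as 4-bit
# integers (bit i holds the coefficient of alpha^i), reduced by X^4 + X + 1 = 0b10011.
MOD = 0b10011

def gf16_mul(x: int, y: int) -> int:
    prod = 0
    for i in range(4):                     # carry-less (XOR) multiplication
        if (y >> i) & 1:
            prod ^= x << i
    for i in range(7, 3, -1):              # reduce degrees 7..4 using alpha^4 = alpha + 1
        if (prod >> i) & 1:
            prod ^= MOD << (i - 4)
    return prod

def order(x: int) -> int:
    k, acc = 1, x
    while acc != 1:
        acc = gf16_mul(acc, x)
        k += 1
    return k

alpha = 0b0010
assert order(alpha) == 15                  # alpha is a primitive element
primitive = [x for x in range(1, 16) if order(x) == 15]
assert len(primitive) == 8                 # the eight primitive elements of GF(16)
```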
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}is cyclic, so there exists an elementa{\displaystyle a}such that theq−1{\displaystyle q-1}non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}area,a2,…,aq−2,aq−1=1{\displaystyle a,a^{2},\ldots ,a^{q-2},a^{q-1}=1}.
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
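As a small illustration, the following sketch builds the discrete-logarithm and Zech-logarithm tables for GF(13), taking 2 (which is a primitive element modulo 13) as the base, and uses them to add two elements given only their logarithms; the names are illustrative.

```python
# Sketch: discrete logarithms and Zech logarithms in the small prime field GF(13);
# the same idea works in any GF(q) once a primitive element a is known.
q, a = 13, 2
log, exp = {}, {}
x = 1
for n in range(q - 1):
    exp[n] = x
    log[x] = n
    x = (x * a) % q

# zech[n] = log(a^n + 1); it is None when a^n + 1 == 0 (log of zero is "-infinity").
zech = {n: log.get((exp[n] + 1) % q) for n in range(q - 1)}

def add_by_logs(m: int, n: int):
    """Discrete log of a^m + a^n, or None if the sum is zero."""
    z = zech[(m - n) % (q - 1)]
    return None if z is None else (n + z) % (q - 1)

# Example: a^3 + a^5 = 8 + 6 = 14 = 1 = a^0 in GF(13)
assert add_by_logs(3, 5) == log[(exp[3] + exp[5]) % q]
```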
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being26, and the divisors of6being1, 2, 3, 6, the subfields ofGF(64)areGF(2),GF(22) = GF(4),GF(23) = GF(8), andGF(64)itself. As2and3arecoprime, the intersection ofGF(4)andGF(8)inGF(64)is the prime fieldGF(2).
The union ofGF(4)andGF(8)has thus10elements. The remaining54elements ofGF(64)generateGF(64)in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree6overGF(2). This implies that, overGF(2), there are exactly9 =54/6irreduciblemonic polynomialsof degree6. This may be verified by factoringX64−XoverGF(2).
The elements ofGF(64)are primitiven{\displaystyle n}th roots of unity for somen{\displaystyle n}dividing63{\displaystyle 63}. As the 3rd and the 7th roots of unity belong toGF(4)andGF(8), respectively, the54generators are primitiventh roots of unity for somenin{9, 21, 63}.Euler's totient functionshows that there are6primitive9th roots of unity,12{\displaystyle 12}primitive21{\displaystyle 21}st roots of unity, and36{\displaystyle 36}primitive63rd roots of unity. Summing these numbers, one finds again54{\displaystyle 54}elements.
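The counts quoted above can be checked with a few lines of arithmetic; the totient function below is a naive illustrative implementation.

```python
# Quick arithmetic check of the counts quoted above for GF(64).
from math import gcd

def phi(n: int) -> int:                      # naive Euler totient
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert 4 + 8 - 2 == 10                       # |GF(4) union GF(8)|, the intersection being GF(2)
assert 64 - 10 == 54                         # elements that generate GF(64)
assert 54 // 6 == 9                          # monic irreducible polynomials of degree 6 over GF(2)
assert [phi(9), phi(21), phi(63)] == [6, 12, 36] and 6 + 12 + 36 == 54
```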
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that:
This shows that the best choice to constructGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X6+X+ 1). In fact, the root of X6+X+ 1 that this construction adjoins is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
InGF(q){\displaystyle \mathrm {GF} (q)}, the identity(x+y)p=xp+ypimplies that the mapφ:x↦xp{\displaystyle \varphi :x\mapsto x^{p}}is aGF(p){\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismofGF(q){\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfieldGF(p){\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting byφkthecompositionofφwith itselfktimes, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section thatφnis the identity. For0 <k<n, the automorphismφkis not the identity, as, otherwise, the polynomialXpk−X{\displaystyle X^{p^{k}}-X}would have more thanpkroots.
There are no otherGF(p)-automorphisms ofGF(q). In other words,GF(pn)has exactlynGF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means thatGF(pn)is aGalois extensionofGF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, ifq=pnthenXq−Xis the product of all monic irreducible polynomials overGF(p), whose degree dividesn. In fact, ifPis an irreducible factor overGF(p)ofXq−X, its degree dividesn, as itssplitting fieldis contained inGF(pn). Conversely, ifPis an irreducible monic polynomial overGF(p)of degreeddividingn, it defines a field extension of degreed, which is contained inGF(pn), and all roots ofPbelong toGF(pn), and are roots ofXq−X; thusPdividesXq−X. AsXq−Xdoes not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
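The formula for N(q, n) is easy to evaluate directly. The following sketch uses a naive Möbius function and checks a few small values, including the nine monic irreducible polynomials of degree 6 over GF(2) found in the discussion of GF(64); the function names are illustrative.

```python
# Sketch of the counting formula N(q, n) = (1/n) * sum over d | n of mu(d) * q^(n/d).
def mobius(n: int) -> int:
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # n has a squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def num_irreducible(q: int, n: int) -> int:
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

assert num_irreducible(2, 1) == 2      # X and X + 1
assert num_irreducible(2, 2) == 1      # X^2 + X + 1
assert num_irreducible(2, 6) == 9      # matches the count found above for GF(64)
```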
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element ofGF(28). One exception is thePDF417bar code, which usesGF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic2, generally variations of thecarry-less product.
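As an illustration of byte-oriented arithmetic in characteristic 2, the following sketch multiplies bytes as elements of GF(2⁸) reduced by X⁸ + X⁴ + X³ + X + 1, the polynomial used by AES; other applications use other irreducible polynomials of degree 8, and the routine shown is one common way to implement the product, not the only one.

```python
# Sketch: multiplication of bytes viewed as elements of GF(2^8), reduced by
# X^8 + X^4 + X^3 + X + 1 (0x11B), the polynomial used by AES.
def gf256_mul(x: int, y: int, mod: int = 0x11B) -> int:
    result = 0
    while y:
        if y & 1:
            result ^= x          # add (XOR) the current multiple of x
        y >>= 1
        x <<= 1
        if x & 0x100:            # degree reached 8: reduce by the modulus
            x ^= mod
    return result

assert gf256_mul(0x57, 0x83) == 0xC1   # worked example from the AES specification
```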
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime numberp, letF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}be an algebraic closure ofFp.{\displaystyle \mathbb {F} _{p}.}Not only is it uniqueup toan isomorphism, as are all algebraic closures, but, contrary to the general case, every one of its subfields is mapped to itself by all of its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristicp.
This property results mainly from the fact that the elements ofFpn{\displaystyle \mathbb {F} _{p^{n}}}are exactly the roots ofxpn−x,{\displaystyle x^{p^{n}}-x,}and this defines an inclusionFpn⊂Fpnm{\displaystyle \mathbb {\mathbb {F} } _{p^{n}}\subset \mathbb {F} _{p^{nm}}}form>1.{\displaystyle m>1.}These inclusions allow writing informallyF¯p=⋃n≥1Fpn.{\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.}The formal validation of this notation results from the fact that the above field inclusions form adirected setof fields; itsdirect limitisF¯p,{\displaystyle {\overline {\mathbb {F} }}_{p},}which may thus be considered as a "directed union" of these fields.
Ifgmn{\displaystyle g_{mn}}is aprimitive elementofFqmn,{\displaystyle \mathbb {F} _{q^{mn}},}thengmn(qmn−1)/(qn−1){\displaystyle g_{mn}^{(q^{mn}-1)/(q^{n}-1)}}is a primitive element ofFqn.{\displaystyle \mathbb {F} _{q^{n}}.}
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive elementgn{\displaystyle g_{n}}ofFqn{\displaystyle \mathbb {F} _{q^{n}}}in order that, whenevermdividesn, one hasgm=gn(qn−1)/(qm−1),{\displaystyle g_{m}=g_{n}^{(q^{n}-1)/(q^{m}-1)},}wheregm{\displaystyle g_{m}}is the primitive element already chosen forFqm.{\displaystyle \mathbb {F} _{q^{m}}.}
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
|
https://en.wikipedia.org/wiki/Galois_field
|
Inmathematics, the concept of aninverse elementgeneralises the concepts ofopposite(−x) andreciprocal(1/x) of numbers.
Given anoperationdenoted here∗, and anidentity elementdenotede, ifx∗y=e, one says thatxis aleft inverseofy, and thatyis aright inverseofx. (An identity element is an element such thatx*e=xande*y=yfor allxandyfor which the left-hand sides are defined.[1])
When the operation∗isassociative, if an elementxhas both a left inverse and a right inverse, then these two inverses are equal and unique; they are called theinverse elementor simply theinverse. Often an adjective is added for specifying the operation, such as inadditive inverse,multiplicative inverse, andfunctional inverse. In this case (associative operation), aninvertible elementis an element that has an inverse. In aring, aninvertible element, also called aunit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition).
Inverses are commonly used ingroups—where every element is invertible, andrings—where invertible elements are also calledunits. They are also commonly used for operations that are not defined for all possible operands, such asinverse matricesandinverse functions. This has been generalized tocategory theory, where, by definition, anisomorphismis an invertiblemorphism.
The word 'inverse' is derived fromLatin:inversusthat means 'turned upside down', 'overturned'. This may take its origin from the case offractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse ofxy{\displaystyle {\tfrac {x}{y}}}isyx{\displaystyle {\tfrac {y}{x}}}).
The concepts ofinverse elementandinvertible elementare commonly defined forbinary operationsthat are everywhere defined (that is, the operation is defined for any two elements of itsdomain). However, these concepts are also commonly used withpartial operations, that is operations that are not defined everywhere. Common examples arematrix multiplication,function compositionand composition ofmorphismsin acategory. It follows that the common definitions ofassociativityandidentity elementmust be extended to partial operations; this is the object of the first subsections.
In this section,Xis aset(possibly aproper class) on which a partial operation (possibly total) is defined, which is denoted with∗.{\displaystyle *.}
A partial operation isassociativeif
for everyx,y,zinXfor which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined.
Examples of non-total associative operations aremultiplication of matricesof arbitrary size, andfunction composition.
Let∗{\displaystyle *}be a possiblypartialassociative operation on a setX.
Anidentity element, or simply anidentityis an elementesuch that
for everyxandyfor which the left-hand sides of the equalities are defined.
Ifeandfare two identity elements such thate∗f{\displaystyle e*f}is defined, thene=f.{\displaystyle e=f.}(This results immediately from the definition, bye=e∗f=f.{\displaystyle e=e*f=f.})
It follows that a total operation has at most one identity element, and ifeandfare different identities, thene∗f{\displaystyle e*f}is not defined.
For example, in the case ofmatrix multiplication, there is onen×nidentity matrixfor every positive integern, and two identity matrices of different size cannot be multiplied together.
Similarly,identity functionsare identity elements forfunction composition, and the composition of the identity functions of two different sets are not defined.
Ifx∗y=e,{\displaystyle x*y=e,}whereeis an identity element, one says thatxis aleft inverseofy, andyis aright inverseofx.
Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation onnonnegative integers, which has0asadditive identity, and0is the only element that has anadditive inverse. This lack of inverses is the main motivation for extending thenatural numbersinto the integers.
An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider thefunctionsfrom the integers to the integers. Thedoubling functionx↦2x{\displaystyle x\mapsto 2x}has infinitely many left inverses underfunction composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Similarly, every function that mapsnto either2n{\displaystyle 2n}or2n+1{\displaystyle 2n+1}is a right inverse of the functionn↦⌊n2⌋,{\textstyle n\mapsto \left\lfloor {\frac {n}{2}}\right\rfloor ,}thefloor functionthat mapsnton2{\textstyle {\frac {n}{2}}}orn−12,{\textstyle {\frac {n-1}{2}},}depending whethernis even or odd.
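The doubling and halving examples can be made concrete with a short sketch; the particular value returned on odd inputs below is an arbitrary illustrative choice, as any value would do.

```python
# Sketch: the doubling map has many left inverses, and the floor-halving map has
# many right inverses, under function composition on the integers.
def double(n: int) -> int:
    return 2 * n

def halve_floor(n: int) -> int:
    return n // 2                           # floor division, also for negative n

def a_left_inverse_of_double(n: int) -> int:
    return n // 2 if n % 2 == 0 else 999    # the value on odd inputs is arbitrary

def a_right_inverse_of_halve(n: int) -> int:
    return 2 * n + 1                        # mapping n to 2*n would work just as well

for k in range(-5, 6):
    assert a_left_inverse_of_double(double(k)) == k
    assert halve_floor(a_right_inverse_of_halve(k)) == k
```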
More generally, a function has a left inverse forfunction compositionif and only if it isinjective, and it has a right inverse if and only if it issurjective.
Incategory theory, right inverses are also calledsections, and left inverses are calledretractions.
An element isinvertibleunder an operation if it has a left inverse and a right inverse.
In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, iflandrare respectively a left inverse and a right inverse ofx, thenl=l∗e=l∗(x∗r)=(l∗x)∗r=e∗r=r.{\displaystyle l=l*e=l*(x*r)=(l*x)*r=e*r=r.}
The inverseof an invertible element is its unique left or right inverse.
If the operation is denoted as an addition, the inverse, oradditive inverse, of an elementxis denoted−x.{\displaystyle -x.}Otherwise, the inverse ofxis generally denotedx−1,{\displaystyle x^{-1},}or, in the case of acommutativemultiplication1x.{\textstyle {\frac {1}{x}}.}When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as inx∗−1.{\displaystyle x^{*-1}.}The notationf∘−1{\displaystyle f^{\circ -1}}is not commonly used forfunction composition, since1f{\textstyle {\frac {1}{f}}}can be used for themultiplicative inverse.
Ifxandyare invertible, andx∗y{\displaystyle x*y}is defined, thenx∗y{\displaystyle x*y}is invertible, and its inverse isy−1x−1.{\displaystyle y^{-1}x^{-1}.}
An invertiblehomomorphismis called anisomorphism. Incategory theory, an invertiblemorphismis also called anisomorphism.
Agroupis asetwith anassociative operationthat has an identity element, and for which every element has an inverse.
Thus, the inverse is afunctionfrom the group to itself that may also be considered as an operation ofarityone. It is also aninvolution, since the inverse of the inverse of an element is the element itself.
A group mayacton a set astransformationsof this set. In this case, the inverseg−1{\displaystyle g^{-1}}of a group elementg{\displaystyle g}defines a transformation that is the inverse of the transformation defined byg,{\displaystyle g,}that is, the transformation that "undoes" the transformation defined byg.{\displaystyle g.}
For example, theRubik's cube grouprepresents the finite sequences of elementary moves. The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order.
Amonoidis a set with anassociative operationthat has anidentity element.
Theinvertible elementsin a monoid form agroupunder monoid operation.
Aringis a monoid for ring multiplication. In this case, the invertible elements are also calledunitsand form thegroup of unitsof the ring.
If a monoid is notcommutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as, otherwise, the element would be invertible).
For example, the set of thefunctionsfrom a set to itself is a monoid underfunction composition. In this monoid, the invertible elements are thebijective functions; the elements that have left inverses are theinjective functions, and those that have right inverses are thesurjective functions.
Given a monoid, one may want to extend it by adding inverses to some elements. This is generally impossible for non-commutative monoids, but, in a commutative monoid, it is possible to add inverses to the elements that have thecancellation property(an elementxhas the cancellation property ifxy=xz{\displaystyle xy=xz}impliesy=z,{\displaystyle y=z,}andyx=zx{\displaystyle yx=zx}impliesy=z{\displaystyle y=z}). This extension of a monoid is provided by theGrothendieck groupconstruction. This is the method that is commonly used for constructingintegersfromnatural numbers,rational numbersfromintegersand, more generally, thefield of fractionsof anintegral domain, andlocalizationsofcommutative rings.
Aringis analgebraic structurewith two operations,additionandmultiplication, which are denoted as the usual operations on numbers.
Under addition, a ring is anabelian group, which means that addition iscommutativeandassociative; it has an identity, called theadditive identity, and denoted0; and every elementxhas an inverse, called itsadditive inverseand denoted−x. Because of commutativity, the concepts of left and right inverses are meaningless since they do not differ from inverses.
Under multiplication, a ring is amonoid; this means that multiplication is associative and has an identity called themultiplicative identityand denoted1. Aninvertible elementfor multiplication is called aunit. The inverse ormultiplicative inverse(for avoiding confusion with additive inverses) of a unitxis denotedx−1,{\displaystyle x^{-1},}or, when the multiplication is commutative,1x.{\textstyle {\frac {1}{x}}.}
The additive identity0is never a unit, except when the ring is thezero ring, which has0as its unique element.
If0is the only non-unit, the ring is afieldif the multiplication is commutative, or adivision ringotherwise.
In anoncommutative ring(that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of thelinear functionsfrom aninfinite-dimensional vector spaceto itself.
Acommutative ring(that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are notzero divisors(that is, their product with a nonzero element cannot be0). This is the process oflocalization, which produces, in particular, the field ofrational numbersfrom the ring of integers, and, more generally, thefield of fractionsof anintegral domain. Localization is also used with zero divisors, but, in this case the original ring is not asubringof the localisation; instead, it is mapped non-injectively to the localization.
Matrix multiplicationis commonly defined formatricesover afield, and straightforwardly extended to matrices overrings,rngsandsemirings. However,in this section, only matrices over acommutative ringare considered, because of the use of the concept ofrankanddeterminant.
IfAis am×nmatrix (that is, a matrix withmrows andncolumns), andBis ap×qmatrix, the productABis defined ifn=p, and only in this case. Anidentity matrix, that is, an identity element for matrix multiplication is asquare matrix(same number for rows and columns) whose entries of themain diagonalare all equal to1, and all other entries are0.
Aninvertible matrixis an invertible element under matrix multiplication. A matrix over a commutative ringRis invertible if and only if its determinant is aunitinR(that is, is invertible inR). In this case, itsinverse matrixcan be computed withCramer's rule.
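For 2 × 2 matrices, Cramer's rule gives the inverse explicitly as the adjugate times the inverse of the determinant. The following sketch inverts a matrix over the ring Z/26Z (which is not a field); the matrix and modulus are illustrative.

```python
# Sketch: inverting a 2x2 matrix over the commutative ring Z/nZ.  The matrix is
# invertible exactly when its determinant is a unit modulo n, and the adjugate
# (Cramer's rule) then gives the inverse.
def inverse_2x2_mod(a, b, c, d, n):
    det = (a * d - b * c) % n
    det_inv = pow(det, -1, n)               # raises ValueError if det is not a unit
    return [[( d * det_inv) % n, (-b * det_inv) % n],
            [(-c * det_inv) % n, ( a * det_inv) % n]]

# Over Z/26Z: [[3, 3], [2, 5]] has determinant 9, which is a unit modulo 26.
inv = inverse_2x2_mod(3, 3, 2, 5, 26)
assert inv == [[15, 17], [20, 9]]
```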
IfRis a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, one often sees invertible matrices defined as matrices with a nonzero determinant, but this definition is incorrect over general rings.
In the case ofinteger matrices(that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called aunimodular matrixfor distinguishing it from matrices that are invertible over thereal numbers. A square integer matrix is unimodular if and only if its determinant is1or−1, since these two numbers are the only units in the ring of integers.
A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique, except for square matrices, where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix.
Compositionis apartial operationthat generalizes tohomomorphismsofalgebraic structuresandmorphismsofcategoriesinto operations that are also calledcomposition, and share many properties with function composition.
In all these cases, composition isassociative.
Iff:X→Y{\displaystyle f\colon X\to Y}andg:Y′→Z,{\displaystyle g\colon Y'\to Z,}the compositiong∘f{\displaystyle g\circ f}is defined if and only ifY′=Y{\displaystyle Y'=Y}or, in the function and homomorphism cases,Y⊂Y′.{\displaystyle Y\subset Y'.}In the function and homomorphism cases, this means that thecodomainoff{\displaystyle f}equals or is included in thedomainofg. In the morphism case, this means that thecodomainoff{\displaystyle f}equals thedomainofg.
There is anidentityidX:X→X{\displaystyle \operatorname {id} _{X}\colon X\to X}for every objectX(set, algebraic structure orobject), which is called also anidentity functionin the function case.
A function is invertible if and only if it is abijection. An invertible homomorphism or morphism is called anisomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called aninverse function. In the other cases, one talks ofinverse isomorphisms.
A function has a left inverse or a right inverse if and only if it isinjectiveorsurjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true forvector spacesbut not formodulesover a ring: a homomorphism of modules that has a left inverse or a right inverse is called respectively asplit epimorphismor asplit monomorphism. This terminology is also used for morphisms in any category.
LetS{\displaystyle S}be a unitalmagma, that is, asetwith abinary operation∗{\displaystyle *}and anidentity elemente∈S{\displaystyle e\in S}. If, fora,b∈S{\displaystyle a,b\in S}, we havea∗b=e{\displaystyle a*b=e}, thena{\displaystyle a}is called aleft inverseofb{\displaystyle b}andb{\displaystyle b}is called aright inverseofa{\displaystyle a}. If an elementx{\displaystyle x}is both a left inverse and a right inverse ofy{\displaystyle y}, thenx{\displaystyle x}is called atwo-sided inverse, or simply aninverse, ofy{\displaystyle y}. An element with a two-sided inverse inS{\displaystyle S}is calledinvertibleinS{\displaystyle S}. An element with an inverse element only on one side isleft invertibleorright invertible.
Elements of a unital magma(S,∗){\displaystyle (S,*)}may have multiple left, right or two-sided inverses. For example, in the magma given by the Cayley table
the elements 2 and 3 each have two two-sided inverses.
A unital magma in which all elements are invertible need not be aloop. For example, in the magma(S,∗){\displaystyle (S,*)}given by theCayley table
every element has a unique two-sided inverse (namely itself), but(S,∗){\displaystyle (S,*)}is not a loop because the Cayley table is not aLatin square.
Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table
the only element with a two-sided inverse is the identity element 1.
If the operation∗{\displaystyle *}isassociativethen if an element has both a left inverse and a right inverse, they are equal. In other words, in amonoid(an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is agroup, called thegroup of unitsofS{\displaystyle S}, and denoted byU(S){\displaystyle U(S)}orH1.
The definition in the previous section generalizes the notion of inverse in group relative to the notion of identity. It's also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in asemigroup.
In a semigroupSan elementxis called(von Neumann) regularif there exists some elementzinSsuch thatxzx=x;zis sometimes called apseudoinverse. An elementyis called (simply) aninverseofxifxyx=xandy=yxy. Every regular element has at least one inverse: ifx=xzxthen it is easy to verify thaty=zxzis an inverse ofxas defined in this section. Another easily proved fact: ifyis an inverse ofxthene=xyandf=yxareidempotents, that isee=eandff=f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, andex=xf=x,ye=fy=y, andeacts as a left identity onx, whilefacts as a right identity, and the left/right roles are reversed fory. This simple observation can be generalized usingGreen's relations: every idempotentein an arbitrary semigroup is a left identity forReand a right identity forLe.[2]An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity.
In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green classH1have an inverse from the unital magma perspective, whereas for any idempotente, the elements ofHehave an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called aninverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have anabsorbing element0 because 000 = 0, whereas a group may not.
Outside semigroup theory, a unique inverse as defined in this section is sometimes called aquasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (seeGeneralized inverse).
A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° =afor allainS; this endowsSwith a type ⟨2,1⟩ algebra. A semigroup endowed with such an operation is called aU-semigroup. Although it may seem thata° will be the inverse ofa, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes ofU-semigroups have been studied:[3]
Clearly a group is both anI-semigroup and a *-semigroup. A class of semigroups important in semigroup theory arecompletely regular semigroups; these areI-semigroups in which one additionally hasaa° =a°a; in other words, every element has a commuting pseudoinversea°. There are few concrete examples of such semigroups, however; most arecompletely simple semigroups. In contrast, a subclass of *-semigroups, the*-regular semigroups(in the sense of Drazin), yield one of the best known examples of a (unique) pseudoinverse, theMoore–Penrose inverse. In this case, however, the involutiona* is not the pseudoinverse. Rather, the pseudoinverse ofxis the unique elementysuch thatxyx=x,yxy=y, (xy)* =xy, (yx)* =yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called thegeneralized inverseorMoore–Penrose inverse.
All examples in this section involve associative operators.
The lower and upper adjoints in a (monotone)Galois connection,LandGare quasi-inverses of each other; that is,LGL=LandGLG=Gand one uniquely determines the other. They are not left or right inverses of each other however.
Asquare matrixM{\displaystyle M}with entries in afieldK{\displaystyle K}is invertible (in the set of all square matrices of the same size, undermatrix multiplication) if and only if itsdeterminantis different from zero. If the determinant ofM{\displaystyle M}is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. Seeinvertible matrixfor more.
More generally, a square matrix over acommutative ringR{\displaystyle R}is invertibleif and only ifits determinant is invertible inR{\displaystyle R}.
Non-square matrices offull rankhave several one-sided inverses:[4]
The left inverse can be used to determine theleast squaressolution ofAx=b{\displaystyle Ax=b}, which is the formula used forregressionand is given byx=(ATA)−1ATb.{\displaystyle x=\left(A^{\text{T}}A\right)^{-1}A^{\text{T}}b.}
Norank deficientmatrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists.
As an example of matrix inverses, consider a full-rank matrix withm<n(more columns than rows). Such a matrix has a right inverse,Aright−1=AT(AAT)−1.{\displaystyle A_{\text{right}}^{-1}=A^{\text{T}}\left(AA^{\text{T}}\right)^{-1}.}
A left inverse does not exist in this case, becauseATA{\displaystyle A^{\text{T}}A}is ann×nmatrix of rank at mostm<n, hence asingular matrixthat cannot be inverted.
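As a hedged numerical illustration of this situation, the following NumPy sketch uses a 2×3 matrix chosen purely for the example (the matrix values and all names here are illustrative assumptions, not part of the original example):

```python
import numpy as np

# Illustrative 2x3 matrix of full row rank (m = 2 < n = 3).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Right inverse: A_right = A^T (A A^T)^{-1}, so that A @ A_right = I_m.
A_right = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(A @ A_right, np.eye(2)))    # True

# A left inverse would need (A^T A)^{-1}, but A^T A is 3x3 of rank 2,
# hence singular and not invertible.
print(np.linalg.matrix_rank(A.T @ A))         # 2
```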
|
https://en.wikipedia.org/wiki/Invertible_element#In_the_context_of_modular_arithmetic
|
Authentication(fromGreek:αὐθεντικόςauthentikos, "real, genuine", from αὐθέντηςauthentes, "author") is the act of proving anassertion, such as theidentityof a computer system user. In contrast withidentification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity.[1][2]
Authentication is relevant to multiple fields. Inart,antiques, andanthropology, a common problem is verifying that a given artifact was produced by a certain person, or in a certain place (i.e. to assert that it is notcounterfeit), or in a given period of history (e.g. by determining the age viacarbon dating). Incomputer science, verifying a user's identity is often required to allow access to confidential data or systems.[3]It might involve validating personalidentity documents.
Authentication can be considered to be of three types:
Thefirsttype of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sportsmemorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated.
Thesecondtype of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. Anarchaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical andspectroscopicanalysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable toforgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son ofHan van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well.
Criminal and civil penalties forfraud,forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, andchequesincorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
Thethirdtype of authentication relies on documentation or other external affirmations. In criminal courts, therules of evidenceoften require establishing thechain of custodyof evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery andperjuryand are also vulnerable to being separated from the artifact and lost.
Consumer goodssuch as pharmaceuticals,[4]perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of atrademarkon the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.[5]
Counterfeit products are often offered to consumers as being authentic.Counterfeit consumer goods, such as electronics, music, apparel, andcounterfeit medications, have been sold as being legitimate. Efforts to control thesupply chainand educate consumers help ensure that authentic products are sold and used. Evensecurity printingon packages, labels, and nameplates, however, is subject to counterfeiting.[6]
In their anti-counterfeiting technology guide,[7]theEUIPOObservatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.[8]
Products or their packaging can include a variableQR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies.[9]To increase the security level, the QR Code can be combined with adigital watermarkorcopy detection patternthat are robust to copy attempts and can be authenticated with a smartphone.
Asecure key storage devicecan be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.[2]
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products.[10][11]Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and usesecurity printingto help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs,RFIDtags, orelectronic article surveillance[12]tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
In literacy, authentication is a readers’ process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is – Does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process.[13]It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent that the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.[3]Literary forgerycan involve imitating the style of a famous author. If an originalmanuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box toe-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
The opposite problem is the detection ofplagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
The process of authentication is distinct from that ofauthorization. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.[14]Similarly, the establishment of the authorization can occur long before theauthorizationdecision occurs.
A user can be given access to secure systems based on user credentials that imply authenticity.[15]A network administrator can give a user apassword, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.
Most secure internet communication relies on centralized authority-based trust relationships, such as those used inHTTPS, where publiccertificate authorities(CAs) vouch for the authenticity of websites. This same centralized trust model underpins protocols like OIDC (OpenID Connect) where identity providers (e.g.,Google) authenticate users on behalf of relying applications. In contrast, decentralized peer-based trust, also known as aweb of trust, is commonly used for personal services such as secure email or file sharing. In systems likePGP, trust is established when individuals personally verify and sign each other’s cryptographic keys, without relying on a central authority.
These systems usecryptographicprotocolsthat, in theory, are not vulnerable tospoofingas long as the originator’s private key remains uncompromised. Importantly, even if the key owner is unaware of a compromise, the cryptographic failure still invalidates trust. However, while these methods are currently considered secure, they are not provably unbreakable—future mathematical or computational advances (such asquantum computingor new algorithmic attacks) could expose vulnerabilities. If that happens, it could retroactively undermine trust in past communications or agreements. For example, adigitally signedcontractmight be challenged if the signature algorithm is later found to be insecure.[citation needed]
The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[16][17]The three factors (classes) and some of the elements of each factor are:
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.[21]
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.[21]
For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require amantrapscreening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements),[22]but this is still a two-factor authentication.
The United States government'sNational Information Assurance Glossarydefines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.[23]
The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent, at least one factor must be "non-reusable and non-replicable" (except in the case of an inherence factor), and the factors must also be incapable of being stolen off the Internet. In both the European and the US understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeds them by imposing more rigorous requirements.[21][24]
TheFIDO Alliancehas been striving to establish technical specifications for strong authentication.[25]
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.[26][27]
Recent research has shown the possibility of using smartphone sensors and accessories to extract some behavioral attributes such as touch dynamics,keystroke dynamicsandgait recognition.[28]These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.[29][27]
The term digital authentication, also known aselectronic authenticationor e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The AmericanNational Institute of Standards and Technology(NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
The authentication of information can pose special problems with electronic communication, such as vulnerability toman-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
|
https://en.wikipedia.org/wiki/Authentication
|
S/KEYis aone-time passwordsystem developed forauthenticationtoUnix-likeoperating systems, especially fromdumb terminalsor untrusted public computers on which one does not want to type a long-term password. A user's real password is combined in an offline device with a short set of characters and a decrementing counter to form a single-use password. Because each password is used only once, captured passwords are useless topassword sniffers.
Because the short set of characters does not change until the counter reaches zero, it is possible to prepare a list of single-use passwords, in order, that can be carried by the user. Alternatively, the user can present the password, characters, and desired counter value to a local calculator to generate the appropriate one-time password that can then be transmitted over the network in the clear. The latter form is more common and practically amounts tochallenge–response authentication.
S/KEY is supported inLinux(viapluggable authentication modules),OpenBSD,NetBSD, andFreeBSD, and a generic open-source implementation can be used to enable its use on other systems.OpenSSHhas also implemented S/KEY since version 1.2.2, released on December 1, 1999.[1]One common implementation is calledOPIE. S/KEY is a trademark ofTelcordia Technologies, formerly known as Bell Communications Research (Bellcore).
S/KEY is also sometimes referred to asLamport's scheme, after its author,Leslie Lamport. It was developed by Neil Haller,Phil Karnand John Walden at Bellcore in the late 1980s. With the expiration of the basic patents onpublic-key cryptographyand the widespread use oflaptop computersrunningSSHand other cryptographic protocols that can secure an entire session, not just the password, S/KEY is falling into disuse.[citation needed]Schemes that implementtwo-factor authentication, by comparison, are growing in use.[2]
Theserveris the computer that will perform the authentication.
After password generation, the user has a sheet of paper withnpasswords on it. Ifnis very large, either storing allnpasswords or calculating a given password fromH(W) becomes inefficient. There are methods to efficiently calculate the passwords in the required order, using only⌈logn2⌉{\displaystyle \left\lceil {\frac {\log n}{2}}\right\rceil }hash calculations per step and storing⌈logn⌉{\displaystyle \lceil \log n\rceil }passwords.[3]
More ideally, though perhaps less commonly in practice, the user may carry a small, portable, secure, non-networked computing device capable of regenerating any needed password given the secret passphrase, thesalt, and the number of iterations of the hash required, the latter two of which are conveniently provided by the server requesting authentication for login.
In any case, the first password will be the same password that the server has stored. This first password will not be used for authentication (the user should cross it off the sheet of paper); the second one will be used instead:
For subsequent authentications, the user will providepasswordi. (The last password on the printed list,passwordn, is the first password generated by the server,H(W), whereWis the initial secret).
The server will computeH(passwordi) and will compare the result topasswordi−1, which is stored as reference on the server.
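A minimal sketch of this chain, assuming SHA-256 as the one-way function H (the historical S/KEY used MD4/MD5-based hashes) and hypothetical helper names:

```python
import hashlib

def H(data: bytes) -> bytes:
    # One-way function; SHA-256 is used purely for illustration here.
    return hashlib.sha256(data).digest()

def generate_chain(secret: bytes, n: int):
    """Return [H(W), H(H(W)), ..., H^n(W)].
    The user keeps the list; the server stores only the last entry."""
    chain, value = [], secret
    for _ in range(n):
        value = H(value)
        chain.append(value)
    return chain

def server_verify(stored: bytes, candidate: bytes) -> bool:
    """Accept candidate if hashing it reproduces the stored reference;
    on success the server replaces the stored value with candidate."""
    return H(candidate) == stored

chain = generate_chain(b"passphrase + seed", 1000)
stored = chain[-1]                          # H^1000(W), never typed by the user
print(server_verify(stored, chain[-2]))     # True: first usable one-time password
```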
The security of S/KEY relies on the difficulty of reversingcryptographic hash functions. Assume an attacker manages to get hold of a password that was used for a successful authentication. Supposing this ispasswordi, this password is already useless for subsequent authentications, because each password can only be used once. It would be interesting for the attacker to find outpasswordi−1, because this password is the one that will be used for the next authentication.
However, this would require inverting the hash function that producedpasswordi−1usingpasswordi(H(passwordi−1) =passwordi), which is extremely difficult to do with currentcryptographic hash functions.
Nevertheless, S/KEY is vulnerable to aman in the middle attackif used by itself. It is also vulnerable to certainrace conditions, such as where an attacker's software sniffs the network to learn the firstN− 1 characters in the password (whereNequals the password length), establishes its own TCP session to the server, and in rapid succession tries all valid characters in theN-th position until one succeeds. These types of vulnerabilities can be avoided by usingssh,SSL, SPKM, or other encrypted transport layer.
Since each iteration of S/KEY doesn't include the salt or count, it is feasible to find collisions directly without breaking the initial password. This has a complexity of 2^64, which can be pre-calculated with the same amount of space. The space complexity can be optimized by storing chains of values, although collisions might reduce the coverage of this method, especially for long chains.[4]
Someone with access to an S/KEY database can break all of them in parallel with a complexity of 2^64. While they wouldn't get the original password, they would be able to find valid credentials for each user. In this regard, it is similar to storing unsalted 64-bit hashes of strong, unique passwords.
The S/KEY protocol can loop. If such a loop were created in the S/KEY chain, an attacker could use the user's key without finding the original value, and possibly without tipping off the valid user. The pathological case of this would be an OTP that hashes to itself.
Internally, S/KEY uses64-bitnumbers. For humanusabilitypurposes, each number is mapped to six short words, of one to four characters each, from a publicly accessible 2048-word dictionary. For example, one 64-bit number maps to "ROY HURT SKI FAIL GRIM KNEE".[5]
|
https://en.wikipedia.org/wiki/S/Key
|
Acryptographic hash function(CHF) is ahash algorithm(amapof an arbitrary binary string to a binary string with a fixed size ofn{\displaystyle n}bits) that has special properties desirable for acryptographicapplication:[1]
Cryptographic hash functions have manyinformation-securityapplications, notably indigital signatures,message authentication codes(MACs), and other forms ofauthentication. They can also be used as ordinaryhash functions, to index data inhash tables, forfingerprinting, to detect duplicate data or uniquely identify files, and aschecksumsto detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital)fingerprints,checksums, (message)digests,[2]or justhash values, even though all these terms stand for more general functions with rather different properties and purposes.[3]
Non-cryptographic hash functionsare used inhash tablesand to detect accidental errors; their constructions frequently provide no resistance to a deliberate attack. For example, adenial-of-service attackon hash tables is possible if the collisions are easy to find, as in the case of linearcyclic redundancy check(CRC) functions.[4]
Most cryptographic hash functions are designed to take astringof any length as input and produce a fixed-length hash value.
A cryptographic hash function must be able to withstand all knowntypes of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
Collision resistance implies second pre-image resistance but does not imply pre-image resistance.[6]The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash-function that is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.
Informally, these properties mean that amalicious adversarycannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.
A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable tolength-extensionattacks: givenhash(m)andlen(m)but notm, by choosing a suitablem′an attacker can calculatehash(m∥m′), where∥denotesconcatenation.[7]This property can be used to break naive authentication schemes based on hash functions. TheHMACconstruction works around these problems.
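A short sketch of the difference, using Python's standard hashlib and hmac modules (the key, message, and the naive construction shown are illustrative assumptions):

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"amount=100&to=alice"

# Naive MAC: hash(key || message). With Merkle-Damgard hashes such as
# SHA-256 this is vulnerable to length extension: an attacker who knows
# hash(key || m) and len(m) can compute hash(key || m || padding || m')
# without knowing the key.
naive_tag = hashlib.sha256(key + message).hexdigest()

# HMAC wraps the hash in inner and outer keyed invocations, which
# blocks the length-extension trick.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Constant-time comparison avoids leaking information through timing.
received_tag = tag
print(hmac.compare_digest(tag, received_tag))   # True
```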
In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like arandom function(often called arandom oraclein proofs of security) while still being deterministic and efficiently computable. This rules out functions like theSWIFFTfunction, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties.[8]
Checksum algorithms, such asCRC32and othercyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in theWEPencryption standard, but an attack was readily discovered, which exploited the linearity of the checksum.
In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a dozen bits to the latter.
For messages selected from a limited set of messages, for examplepasswordsor other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, specialkey derivation functionsthat require greater computing resources have been developed that make suchbrute-force attacksmore difficult.
In sometheoretical analyses"difficult" has a specific mathematical meaning, such as "not solvable inasymptoticpolynomial time". Such interpretations ofdifficultyare important in the study ofprovably secure cryptographic hash functionsbut do not usually have a strong connection to practical security. For example, anexponential-timealgorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps forn-digit keys) may be too slow for any practical use.
An illustration of the potential use of a cryptographic hash is as follows:Aliceposes a tough math problem toBoband claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simplecommitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
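A minimal sketch of such a commitment, assuming SHA-256 and a random nonce added so that the committed value cannot simply be guessed and hashed (both choices, and all names, are illustrative rather than part of the example above):

```python
import hashlib
import os

def commit(solution: bytes):
    """Return (commitment, nonce). The random nonce prevents the verifier
    from simply guessing and hashing candidate solutions."""
    nonce = os.urandom(16)
    commitment = hashlib.sha256(nonce + solution).hexdigest()
    return commitment, nonce

def verify(commitment: str, nonce: bytes, solution: bytes) -> bool:
    return hashlib.sha256(nonce + solution).hexdigest() == commitment

# Alice commits to her solution and sends only the hash to Bob.
c, nonce = commit(b"x = 42")
# Later she reveals (nonce, solution); Bob checks it against the hash.
print(verify(c, nonce, b"x = 42"))   # True
print(verify(c, nonce, b"x = 41"))   # False
```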
An important application of secure hashes is the verification ofmessage integrity. Comparing message digests (hash digests over the message) calculated before, and after, transmission can determine whether any changes have been made to the message orfile.
MD5,SHA-1, orSHA-2hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files,[9]including files retrieved usingfile sharingsuch asmirroring. This practice establishes achain of trustas long as the hashes are posted on a trusted site – usually the originating site – authenticated byHTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographicerror-detecting codessuch ascyclic redundancy checksonly protect againstnon-maliciousalterations of the file, since an intentionalspoofcan readily be crafted to have thecolliding codevalue.
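For instance, a downloaded file can be checked against a published digest roughly as follows (the file name and the published value are placeholders, not real data):

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Hash a file in 1 MiB chunks so large downloads need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values: compare against the digest published by the originating
# site (ideally retrieved over HTTPS).
published = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(file_digest("downloaded.iso") == published)
```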
Almost alldigital signatureschemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
Password verification commonly relies on cryptographic hashes. Storing all user passwords ascleartextcan result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
However, use of standard cryptographic hash functions, such as the SHA series, is no longer considered safe for password storage.[10]: 5.1.1.2These algorithms are designed to be computed quickly, so if the hashed values are compromised, it is possible to try guessed passwords at high rates. Commongraphics processing unitscan try billions of possible passwords each second. Password hash functions that performkey stretching– such asPBKDF2,scryptorArgon2– commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to performbrute-force attackson stored password hash digests. For details, see§ Attacks on hashed passwords.
A password hash also requires the use of a large random, non-secretsaltvalue that can be stored with the password hash. The salt is hashed with the password, altering the password hash mapping for each password, thereby making it infeasible for an adversary to store tables ofprecomputedhash values to which the password hash digest can be compared or to test a large number of purloined hash values in parallel.
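A sketch of salted password hashing with a key derivation function, here PBKDF2 via Python's hashlib (the iteration count and other parameters are illustrative assumptions, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Return (salt, iterations, digest); the plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, iters, digest))  # True
print(check_password("wrong guess", salt, iters, digest))                   # False
```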
A proof-of-work system (or protocol, or function) is an economic measure to deterdenial-of-service attacksand other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used inBitcoin miningandHashcash– uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.
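A toy proof-of-work sketch in the spirit of Hashcash, assuming SHA-256 and a small difficulty so that it runs quickly (Hashcash itself uses SHA-1 with 20 zero bits; all names and values here are illustrative):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    # Number of zero bits at the start of the digest.
    as_int = int.from_bytes(digest, "big")
    return len(digest) * 8 - as_int.bit_length()

def mine(header: bytes, difficulty: int) -> int:
    """Find a nonce so that SHA-256(header || nonce) starts with
    `difficulty` zero bits; expected work grows as 2**difficulty."""
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

header = b"resource=alice@example.com;"
nonce = mine(header, difficulty=16)
digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
print(nonce, digest)   # the recipient verifies with this single hash
```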
A message digest can also serve as a means of reliably identifying a file; severalsource code managementsystems, includingGit,MercurialandMonotone, use thesha1sumof various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files onpeer-to-peerfilesharingnetworks. For example, in aned2k link, anMD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents.Magnet linksare another example. Such file hashes are often the top hash of ahash listor ahash tree, which allows for additional benefits.
One of the main applications of ahash functionis to allow the fast look-up of data in ahash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too.
However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where it is necessary for users to protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants, such as open source applications with multiple sources of download, where malicious files could be substituted in with the same appearance to the user, or an authentic file is modified to contain malicious data.[11]
Content-addressable storage(CAS), also referred to as content-addressed storage or fixed-content storage, is a way to store information so it can be retrieved based on its content, not its name or location. It has been used for high-speed storage andretrievalof fixed content, such as documents stored for compliance with government regulations.[citation needed]Content-addressable storage is similar tocontent-addressable memory.
CAS systems work by passing the content of the file through a cryptographic hash function to generate a unique key, the "content address". Thefile system'sdirectorystores these addresses and a pointer to the physical storage of the content. Because an attempt to store the same file will generate the same key, CAS systems ensure that the files within them are unique, and because changing the file will result in a new key, CAS systems provide assurance that the file is unchanged.
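A toy content-addressable store might look like the following sketch, where the SHA-256 digest of the content serves as its address (all class and method names here are hypothetical):

```python
import hashlib

class ContentAddressableStore:
    """Toy CAS: the SHA-256 of the content is its key, so identical content
    is stored once and any change to the content yields a new address."""
    def __init__(self):
        self._blobs = {}

    def put(self, content: bytes) -> str:
        address = hashlib.sha256(content).hexdigest()
        self._blobs[address] = content
        return address

    def get(self, address: str) -> bytes:
        return self._blobs[address]

store = ContentAddressableStore()
addr = store.put(b"compliance document v1")
print(addr == store.put(b"compliance document v1"))   # True: deduplicated
print(store.get(addr))
```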
There are several methods to use ablock cipherto build a cryptographic hash function, specifically aone-way compression function.
The methods resemble theblock cipher modes of operationusually used for encryption. Many well-known hash functions, includingMD4,MD5,SHA-1andSHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible.SHA-3finalists included functions with block-cipher-like components (e.g.,Skein,BLAKE) though the function finally selected,Keccak, was built on acryptographic spongeinstead.
A standard block cipher such asAEScan be used in place of these custom block ciphers; that might be useful when anembedded systemneeds to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance torelated-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using aone-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function.
The last block processed should also be unambiguouslylength padded; this is crucial to the security of this construction. This construction is called theMerkle–Damgård construction. Most common classical hash functions, includingSHA-1andMD5, take this form.
A straightforward application of the Merkle–Damgård construction, where the size of hash output is equal to the internal state size (between each compression step), results in anarrow-pipehash design. This design causes many inherent flaws, includinglength-extension, multicollisions,[12]long message attacks,[13]generate-and-paste attacks,[citation needed]and also cannot be parallelized. As a result, modern hash functions are built onwide-pipeconstructions that have a larger internal state size – which range from tweaks of the Merkle–Damgård construction[12]to new constructions such as thesponge constructionandHAIFA construction.[14]None of the entrants in theNIST hash function competitionuse a classical Merkle–Damgård construction.[15]
Meanwhile, truncating the output of a longer hash, such as used in SHA-512/256, also defeats many of these attacks.[16]
Hash functions can be used to build othercryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.
Message authentication codes(MACs) (also called keyed hash functions) are often built from hash functions.HMACis such a MAC.
Just asblock cipherscan be used to build hash functions, hash functions can be used to build block ciphers.Luby-Rackoffconstructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (includingSHA-1andSHA-2) are built by using a special-purpose block cipher in aDavies–Meyeror other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; for example,SHACAL,BEARandLION.
Pseudorandom number generators(PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
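A minimal sketch of that counter construction, assuming SHA-256; this is a toy illustration of the idea, not a vetted CSPRNG design:

```python
import hashlib

def hash_prng(seed: bytes, num_bytes: int) -> bytes:
    """Generate pseudorandom bytes by hashing seed || counter, block by block."""
    out = bytearray()
    counter = 0
    while len(out) < num_bytes:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:num_bytes])

print(hash_prng(b"secret seed", 48).hex())
```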
Some hash functions, such asSkein,Keccak, andRadioGatún, output an arbitrarily long stream and can be used as astream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building acryptographically secure pseudorandom number generatorand then using its stream of random bytes askeystream.SEALis a stream cipher that usesSHA-1to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of theHC-128 and HC-256stream ciphers makes heavy use of theSHA-256hash function.
Concatenatingoutputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result.[citation needed]For example, older versions ofTransport Layer Security (TLS) and Secure Sockets Layer (SSL)used concatenatedMD5andSHA-1sums.[17][18]This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.[citation needed]
ForMerkle–Damgård constructionhash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant.[citation needed]Antoine Jouxobserved that 2-collisions lead ton-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty.[19]Among thosenmessages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires onlypolynomial time.[20][21]
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing acomparison of cryptographic hash functions.
MD5 was designed byRonald Rivestin 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
SHA-1 was developed as part of the U.S. Government'sCapstoneproject. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using theshattered attackand the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes).
Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).
Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher.
SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such asAMD64.
The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.
SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction, which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits.
Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply thesecurity strengthof the function rather than the output size in bits.
BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves,Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including theArgon2password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is aMerkle tree, and it supports higher degrees of parallelism than BLAKE2.
There is a long list of cryptographic hash functions but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions[22]as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about theNIST hash function competitions.
Even if a hash function has never been broken, asuccessful attackagainst a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5.[23]These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).[24]
On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm.[19]Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on asupercomputerwith 256Itanium 2processors – equivalent to 13 days of full-time use of the supercomputer.[citation needed]
In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known,[25][26]and in February 2017 Google announced a collision in SHA-1.[27]Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such asSHA-2, or using techniques such asrandomized hashing[28]that do not require collision resistance.
A successful, practical attack broke MD5 (used within certificates forTransport Layer Security) in 2008.[29]
Many cryptographic hashes are based on theMerkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable tolength extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.[citation needed]
Rather than store plain user passwords, controlled-access systems frequently store the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all-too-frequent occurrence[30]), the thief will only have the hash values, not the passwords.
Passwords may still be retrieved by an attacker from the hashes, because most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that even all possible combinations may be tested if calculation of the hash does not take too much time.[31]
The use ofcryptographic saltprevents some attacks, such as building files of precomputed hash values, e.g.rainbow tables. But searches on the order of 100 billion tests per second are possible with high-endgraphics processors, making direct attacks possible even with salt.[32][33]The United StatesNational Institute of Standards and Technologyrecommends storing passwords using special hashes calledkey derivation functions(KDFs) that have been created to slow brute force searches.[10]: 5.1.1.2Slow hashes includepbkdf2,bcrypt,scrypt,argon2,Balloonand some recent modes ofUnix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.[10]: 5.1.1.2
|
https://en.wikipedia.org/wiki/Cryptographic_hash_function
|
Aone-time password(OTP), also known as aone-time PIN,one-time passcode,one-time authorization code(OTAC) ordynamic password, is a password that is valid for only one login session or transaction, on a computer system or other digital device. OTPs avoid several shortcomings that are associated with traditional (static) password-based authentication; a number of implementations also incorporatetwo-factor authenticationby ensuring that the one-time password requires access tosomething a person has(such as a small keyring fob device with the OTP calculator built into it, or a smartcard or specific cellphone) as well assomething a person knows(such as a PIN).
OTP generation algorithms typically make use ofpseudorandomnessorrandomnessto generate a shared key or seed, andcryptographic hash functions, which can be used to derive a value but are hard to reverse, making it difficult for an attacker to obtain the data that was used for the hash. This is necessary because otherwise it would be easy to predict future OTPs by observing previous ones.
OTPs have been discussed as a possible replacement for, as well as an enhancer to, traditional passwords. On the downside, OTPs can be intercepted or rerouted, and hard tokens can get lost, damaged, or stolen. Many systems that use OTPs do not securely implement them, and attackers can still learn the password throughphishing attacksto impersonate the authorized user.[1]
The most important advantage addressed by OTPs is that, in contrast to static passwords, they are not vulnerable toreplay attacks. This means that a potential intruder who manages to record an OTP that was already used to log into a service or to conduct a transaction will not be able to use it, since it will no longer be valid.[1]A second major advantage is that a user who uses the same (or similar) password for multiple systems is not made vulnerable on all of them if the password for one of these is gained by an attacker. A number of OTP systems also aim to ensure that a session cannot easily be intercepted or impersonated without knowledge of unpredictable data created during theprevioussession, thus reducing theattack surfacefurther.
There are also different ways to make the user aware of the next OTP to use. Some systems use special electronicsecurity tokensthat the user carries and that generate OTPs and show them using a small display. Other systems consist of software that runs on the user'smobile phone. Yet other systems generate OTPs on the server-side and send them to the user using anout-of-bandchannel such asSMSmessaging. Finally, in some systems, OTPs are printed on paper that the user is required to carry.
In some mathematical algorithm schemes, it is possible for the user to provide the server with a static key for use as an encryption key, by only sending a one-time password.[2]
Concrete OTP algorithms vary greatly in their details. Various approaches for the generation of OTPs include:
A time-synchronized OTP is usually related to a piece of hardware called asecurity token(e.g., each user is given a personal token that generates a one-time password). It might look like a small calculator or a keychain charm, with an LCD that shows a number that changes occasionally. Inside the token is an accurate clock that has been synchronized with the clock on the authenticationserver. In these OTP systems, time is an important part of the password algorithm, since the generation of new passwords is based on the current time rather than, or in addition to, the previous password or asecret key. This token may be aproprietarydevice, or amobile phoneor similarmobile devicewhich runssoftwarethat is proprietary,freeware, oropen-source. An example of a time-synchronized OTP standard istime-based one-time password(TOTP). Some applications can be used to keep time-synchronized OTP, likeGoogle Authenticatoror apassword manager.
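A compact sketch of the HOTP/TOTP construction standardized in RFC 4226 and RFC 6238, assuming HMAC-SHA-1, a 30-second time step, and an illustrative shared secret (the printed value is only an example):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC the counter, then dynamically truncate."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): the counter is the number of time steps since the
    Unix epoch, so client and server clocks must roughly agree."""
    return hotp(secret, int(time.time()) // step, digits)

shared_secret = b"12345678901234567890"   # normally exchanged as a Base32 string
print(totp(shared_secret))                # e.g. "287082"; changes every 30 s
```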
Each new OTP may be created from the past OTPs used. An example of this type of algorithm, credited toLeslie Lamport, uses aone-way function(call itf{\displaystyle f}). This one-time password system works as follows:
To get the next password in the series from the previous passwords, one needs to find a way of calculating theinverse functionf−1{\displaystyle f^{-1}}. Sincef{\displaystyle f}was chosen to be one-way, this is extremely difficult to do. Iff{\displaystyle f}is acryptographic hash function, which is generally the case, it is assumed to be acomputationally intractabletask. An intruder who happens to see a one-time password may have access for one time period or login, but it becomes useless once that period expires. TheS/KEYone-time password system and its derivative OTP are based on Lamport's scheme.
The use ofchallenge–responseone-time passwords requires a user to provide a response to a challenge. For example, this can be done by inputting the value that the token has generated into the token itself. To avoid duplicates, an additional counter is usually involved, so if one happens to get the same challenge twice, this still results in different one-time passwords. However, the computation does not usually[citation needed]involve the previous one-time password; that is, usually, this or another algorithm is used, rather than using both algorithms.
A common technology used for the delivery of OTPs istext messaging. Because text messaging is a ubiquitous communication channel, being directly available in nearly all mobile handsets and, through text-to-speech conversion, to any mobile or landline telephone, text messaging has a great potential to reach all consumers with a low total cost to implement. OTP over text messaging may be encrypted using anA5/xstandard, which several hacking groups report can be successfullydecryptedwithin minutes or seconds.[4][5][6][7]Additionally, security flaws in theSS7routing protocol can and have been used to redirect the associated text messages to attackers; in 2017, severalO2customers in Germany were breached in this manner in order to gain access to theirmobile bankingaccounts. In July 2016, the U.S.NISTissued a draft of a special publication with guidance on authentication practices, which discourages the use of SMS as a method of implementing out-of-band two-factor authentication, due to the ability for SMS to beinterceptedat scale.[8][9][10]Text messages are also vulnerable toSIM swap scams—in which an attacker fraudulently transfers a victim's phone number to their ownSIM card, which can then be used to gain access to messages being sent to it.[11][12]
RSA Security'sSecurIDis one example of a time-synchronization type of token, along withHID Global's solutions. Like all tokens, these may be lost, damaged, or stolen; additionally, there is an inconvenience as batteries die, especially for tokens without a recharging facility or with a non-replaceable battery. A variant of the proprietary token was proposed by RSA in 2006 and was described as "ubiquitous authentication", in which RSA would partner with manufacturers to add physicalSecurIDchips to devices such as mobile phones.
Recently,[when?]it has become possible to take the electronic components associated with regular keyfob OTP tokens and embed them in a credit card form factor. However, the thinness of the cards, at 0.79mm to 0.84mm thick, prevents standard components or batteries from being used. Specialpolymer-based batteriesmust be used which have a much lower battery life thancoin (button) cells. Semiconductor components must not only be very flat but must minimise the power used in standby and when operating.[citation needed]
Yubicooffers a small USB token with an embedded chip that creates an OTP when a key is pressed and simulates a keyboard to facilitate easily entering a long password.[13]Since it is a USB device it avoids the inconvenience of battery replacement.
A new version of this technology has been developed that embeds a keypad into apayment cardof standard size and thickness. The card has an embedded keypad, display, microprocessor and proximity chip.
On smartphones, one-time passwords can also be delivered directly throughmobile apps, including dedicated authentication apps such asAuthyandGoogle Authenticator, or within a service's existing app, such as in the case ofSteam. These systems do not share the same security vulnerabilities as SMS, and do not necessarily require a connection to a mobile network to use.[14][10][15]
In some countries' online banking, the bank sends to the user a numbered list of OTPs that is printed on paper. Other banks send plastic cards with actual OTPs obscured by a layer that the user has to scratch off to reveal a numbered OTP. For every online transaction, the user is required to enter a specific OTP from that list. Some systems ask for the numbered OTPs sequentially, others pseudorandomly choose an OTP to be entered.
When correctly implemented, OTPs are no longer useful to an attacker within a short time of their initial use. This differs from passwords, which may remain useful to attackers years after the fact.
As with passwords, OTPs are vulnerable tosocial engineeringattacks in whichphisherssteal OTPs by tricking customers into providing them with their OTPs. Also like passwords, OTPs can be vulnerable toman-in-the-middle attacks, making it important to communicate them via a secure channel, for exampleTransport Layer Security.
The fact that both passwords and OTP are vulnerable to similar kinds of attacks was a key motivation forUniversal 2nd Factor, which is designed to be more resistant to phishing attacks.
OTPs which don't involve a time-synchronization or challenge–response component will necessarily have a longer window of vulnerability if compromised before their use. In late 2005 customers of a Swedish bank were tricked into giving up their pre-supplied one-time passwords.[16]In 2006 this type of attack was used on customers of a US bank.[17]
Many OTP technologies are patented. This makes standardization in this area more difficult, as each company tries to push its own technology. Standards do, however, exist – for example, RFC 1760 (S/KEY), RFC 2289 (OTP), RFC 4226 (HOTP) and RFC 6238 (TOTP).
Amobile phoneitself can be a hand-heldauthentication token.[18]Mobile text messaging is one of the ways of receiving an OTAC through a mobile phone. In this way, a service provider sends a text message that includes an OTAC enciphered bya digital certificateto a user for authentication. According to a report, mobile text messaging provides high security when it usespublic key infrastructure (PKI)to provide bidirectional authentication and non-repudiation, in accordance with theoretical analysis.[19]
SMSas a method of receiving OTACs is broadly used in our daily lives for purposes such as banking, credit/debit cards, and security.[20][21][22]
There are two methods of using atelephoneto verify a user’s authentication.
With the first method, a service provider shows an OTAC on the computer or smartphone screen and then makes an automatic telephone call to a number that has already been authenticated. Then the user enters the OTAC that appears on their screen into the telephone keypad.[23]
With the second method, which is used to authenticate and activate Microsoft Windows, the user calls a number that is provided by the service provider and enters the OTAC that the phone system gives the user.[24]
In the field of computer technology, one-time authorization codes (OTACs) can be delivered through email, in a broad sense, and through web applications, in a more specialized sense.
It is possible to send OTACs to a user via post orregistered mail. When a user requests an OTAC, the service provider sends it via post or registered mail and then the user can use it for authentication. For example, in the UK, some banks send their OTAC for Internet banking authorization via post orregistered mail.[27]
Quantum cryptography, which is based on the uncertainty principle, is one of the ideal methods to produce an OTAC.[28]
Moreover, authentication has been discussed and used not only with enciphered codes but also with graphical one-time PIN authentication[29] such as QR codes, which provide a decentralized access control technique with anonymous authentication.[30][31]
|
https://en.wikipedia.org/wiki/One-time_password
|
Public-key cryptography, orasymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of apublic keyand a correspondingprivate key.[1][2]Key pairs are generated withcryptographicalgorithmsbased onmathematicalproblems termedone-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.[3]There are many kinds of public-key cryptosystems, with different security goals, includingdigital signature,Diffie–Hellman key exchange,public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in moderncryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such asTransport Layer Security (TLS),SSH,S/MIME, andPGP. Compared tosymmetric cryptography, public-key cryptography can be too slow for many purposes,[4]so these protocols often combine symmetric cryptography with public-key cryptography inhybrid cryptosystems.
Before the mid-1970s, all cipher systems usedsymmetric key algorithms, in which the samecryptographic keyis used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via asecure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when, (as is sensible cryptographic practice), keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
The two best-known types of public key cryptography are digital signature and public-key encryption. In a digital signature system, a sender uses a private key together with a message to create a signature; anyone who knows the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot produce a valid signature. In a public-key encryption system, anyone who knows a public key can encrypt a message, producing a ciphertext, but only someone who knows the corresponding private key can decrypt it.
For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext.
Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not concealmetadatalike what computer a source used to send a message, when they sent it, or how long it is.[9][10][11][12]Public-key encryption on its own also does not tell the recipient anything about who sent a message[8]:283[13][14]—it just conceals the content of the message.
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
Apublic key infrastructure(PKI), in which one or more third parties – known ascertificate authorities– certify ownership of key pairs.TLSrelies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user.PGPuses this approach, in addition to lookup in thedomain name system(DNS). TheDKIMsystem for digitally signing emails also uses this approach.
The most obvious application of a public key encryption system is for encrypting communication to provideconfidentiality– a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is thedigital signature. Digital signature schemes can be used for senderauthentication.
Non-repudiationsystems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include:digital cash,password-authenticated key agreement,time-stamping servicesand non-repudiation protocols.
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
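A small illustration of the hybrid pattern, assuming the third-party Python package cryptography (pyca/cryptography): a fresh symmetric key encrypts the bulk data and is itself wrapped with the recipient's RSA public key. The key sizes, padding choices, and message are illustrative assumptions rather than recommendations.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient's asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: symmetric encryption of the data, asymmetric encryption of the symmetric key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"a long message, encrypted symmetrically")
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recipient: unwrap the symmetric key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"a long message, encrypted symmetrically"
```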
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent ofquantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.[15][16]
All public key schemes are in theory susceptible to a "brute-force key search attack".[17]However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" byClaude Shannon– is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; bothRSAandElGamal encryptionhave known attacks that are much faster than the brute-force approach.[citation needed]None of these are sufficiently improved to be actually practical, however.
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The"knapsack packing" algorithmwas found to be insecure after the development of a new attack.[18]As with all cryptographic functions, public-key implementations may be vulnerable toside-channel attacksthat exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
Another potential security vulnerability in using asymmetric keys is the possibility of a"man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.[citation needed]
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.[19]
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, theInternet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at anInternet service provider(ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.[citation needed]
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.[20][21][22]
One approach to prevent such attacks involves the use of apublic key infrastructure(PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store andrevokedigital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin.Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check thebona fidesof the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, Public key infrastructure is widely used. Examples includeTLSand its predecessorSSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS forHTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certificationhierarchymust be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate.Public key digital certificatesare typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
Most of the available public-key encryption software does not concealmetadatain the message header, which might include the identities of the sender and recipient, the sending date, subject field, and the software they use etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party.[23]The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
During the earlyhistory of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach todistributing keys.
In his 1874 bookThe Principles of Science,William Stanley Jevonswrote:[24]
Can the reader say what two numbers multiplied together will produce the number8616460799?[25]I think it unlikely that anyone but myself will ever know.[24]
Here he described the relationship ofone-way functionsto cryptography, and went on to discuss specifically thefactorizationproblem used to create atrapdoor function. In July 1996, mathematicianSolomon W. Golombsaid: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."[26]
In 1970,James H. Ellis, a British cryptographer at the UKGovernment Communications Headquarters(GCHQ), conceived of the possibility of "non-secret encryption", (now called public key cryptography), but could see no way to implement it.[27][28]
In 1973, his colleagueClifford Cocksimplemented what has become known as theRSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer,Malcolm J. Williamson, developed what is now known asDiffie–Hellman key exchange.
The scheme was also passed to the US'sNational Security Agency.[29]Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution fromBerners-Leedesigning an open internet architecture forCERN, its adaptation and adoption for theArpanet... did public key cryptography realise its full potential.
—Ralph Benjamin[29]
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.[30]
In 1976, an asymmetric key cryptosystem was published byWhitfield DiffieandMartin Hellmanwho, influenced byRalph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which usesexponentiation in a finite field, came to be known asDiffie–Hellman key exchange.[31]This was the first published practical method for establishing a shared secret-key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known asMerkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography although cryptography itself dates back more than 2,000 years.[32]
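The exchange can be illustrated with a toy Python sketch built on modular exponentiation; the prime and generator below are assumptions chosen for readability and are far too small for real security, where standardized groups or elliptic curves are used instead.

```python
import secrets

p = 4294967291   # a small prime modulus (2^32 - 5); hypothetical, insecure at this size
g = 5            # assumed generator of a large subgroup modulo p

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice publishes g^a mod p
B = pow(g, b, p)   # Bob publishes g^b mod p

# Each party raises the other's public value to its own private exponent.
assert pow(B, a, p) == pow(A, b, p)   # both obtain the shared secret g^(ab) mod p
```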
In 1977, a generalization of Cocks's scheme was independently invented byRon Rivest,Adi ShamirandLeonard Adleman, all then atMIT. The latter authors published their work in 1978 inMartin Gardner'sScientific Americancolumn, and the algorithm came to be known asRSA, from their initials.[33]RSA usesexponentiation moduloa product of two very largeprimes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty offactoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in theMathematical Gamescolumn in the August 1977 issue ofScientific American.[34]
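A textbook-RSA sketch with deliberately tiny primes illustrates the arithmetic just described; real RSA uses primes of roughly a thousand bits or more together with padding, so these numbers are purely illustrative (the modular inverse via pow(e, -1, phi) requires Python 3.8+).

```python
from math import gcd

p, q = 61, 53                    # hypothetical small primes
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)          # Euler's totient of n
e = 17                           # public exponent, chosen coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)              # private exponent: modular inverse of e modulo phi

message = 42
ciphertext = pow(message, e, n)          # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message  # decrypt with the private key (n, d)
```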
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including theRabin cryptosystem,ElGamal encryption,DSAandECC.
Examples of well-regarded asymmetric key techniques for varied purposes include:
Examples of asymmetric key algorithms not yet widely adopted include:
Examples of notable – yet insecure – asymmetric key algorithms include:
Examples of protocols using asymmetric key algorithms include:
|
https://en.wikipedia.org/wiki/Public_key_cryptography
|
In number theory, a branch of mathematics, the Carmichael function λ(n) of a positive integer n is the smallest positive integer m such that a^m ≡ 1 (mod n)
holds for every integeracoprimeton. In algebraic terms,λ(n)is theexponentof themultiplicative group of integers modulon. As this is afinite abelian group, there must exist an element whoseorderequals the exponent,λ(n). Such an element is called aprimitiveλ-root modulon.
The Carmichael function is named after the American mathematicianRobert Carmichaelwho defined it in 1910.[1]It is also known asCarmichael's λ function, thereduced totient function, and theleast universal exponent function.
The order of the multiplicative group of integers modulonisφ(n), whereφisEuler's totient function. Since the order of an element of a finite group divides the order of the group,λ(n)dividesφ(n). The following table compares the first 36 values ofλ(n)(sequenceA002322in theOEIS) andφ(n)(inboldif they are different; thens such that they are different are listed inOEIS:A033949).
The Carmichael lambda function of a prime power can be expressed in terms of the Euler totient. Any number that is not 1 or a prime power can be written uniquely as the product of distinct prime powers, in which case λ of the product is the least common multiple of the λ of the prime power factors. Specifically, λ(n) is given by the recurrence

λ(n) = φ(n) if n is 1, 2, 4, or an odd prime power p^r;
λ(2^r) = φ(2^r)/2 = 2^(r−2) for r ≥ 3;
λ(n) = lcm(λ(p_1^(r_1)), λ(p_2^(r_2)), …, λ(p_k^(r_k))) if n = p_1^(r_1) p_2^(r_2) ⋯ p_k^(r_k) with distinct primes p_1, …, p_k.
Euler's totient for a prime power, that is, a number p^r with p prime and r ≥ 1, is given by φ(p^r) = p^(r−1)(p − 1).
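The recurrence translates directly into code. The following Python sketch uses naive trial-division factorization, an assumption adequate only for small n, and relies on math.lcm (Python 3.9+).

```python
from math import lcm

def factorize(n: int) -> dict:
    # Trial division; returns {prime: exponent}.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def carmichael(n: int) -> int:
    # lambda(p^r) = p^(r-1)(p-1), except lambda(2^r) = 2^(r-2) for r >= 3;
    # lambda(n) is the lcm over the prime-power factors of n.
    parts = []
    for p, r in factorize(n).items():
        parts.append(2 ** (r - 2) if p == 2 and r >= 3 else p ** (r - 1) * (p - 1))
    return lcm(*parts) if parts else 1

print([carmichael(k) for k in range(1, 11)])  # [1, 1, 2, 2, 4, 2, 6, 2, 6, 4]
```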
Carmichael proved two theorems that, together, establish that ifλ(n)is considered as defined by the recurrence of the previous section, then it satisfies the property stated in the introduction, namely that it is the smallest positive integermsuch thatam≡1(modn){\displaystyle a^{m}\equiv 1{\pmod {n}}}for allarelatively prime ton.
Theorem 1—Ifais relatively prime tonthenaλ(n)≡1(modn){\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}}.[2]
This implies that the order of every element of the multiplicative group of integers modulondividesλ(n). Carmichael calls an elementafor whichaλ(n){\displaystyle a^{\lambda (n)}}is the least power ofacongruent to 1 (modn) aprimitive λ-root modulo n.[3](This is not to be confused with aprimitive root modulon, which Carmichael sometimes refers to as a primitiveφ{\displaystyle \varphi }-root modulon.)
Theorem 2—For every positive integernthere exists a primitiveλ-root modulon. Moreover, ifgis such a root, then there areφ(λ(n)){\displaystyle \varphi (\lambda (n))}primitiveλ-roots that are congruent to powers ofg.[4]
Ifgis one of the primitiveλ-roots guaranteed by the theorem, thengm≡1(modn){\displaystyle g^{m}\equiv 1{\pmod {n}}}has no positive integer solutionsmless thanλ(n), showing that there is no positivem<λ(n)such thatam≡1(modn){\displaystyle a^{m}\equiv 1{\pmod {n}}}for allarelatively prime ton.
The second statement of Theorem 2 does not imply that all primitiveλ-roots modulonare congruent to powers of a single rootg.[5]For example, ifn= 15, thenλ(n) = 4whileφ(n)=8{\displaystyle \varphi (n)=8}andφ(λ(n))=2{\displaystyle \varphi (\lambda (n))=2}. There are four primitiveλ-roots modulo 15, namely 2, 7, 8, and 13 as1≡24≡84≡74≡134{\displaystyle 1\equiv 2^{4}\equiv 8^{4}\equiv 7^{4}\equiv 13^{4}}. The roots 2 and 8 are congruent to powers of each other and the roots 7 and 13 are congruent to powers of each other, but neither 7 nor 13 is congruent to a power of 2 or 8 and vice versa. The other four elements of the multiplicative group modulo 15, namely 1, 4 (which satisfies4≡22≡82≡72≡132{\displaystyle 4\equiv 2^{2}\equiv 8^{2}\equiv 7^{2}\equiv 13^{2}}), 11, and 14, are not primitiveλ-roots modulo 15.
For a contrasting example, ifn= 9, thenλ(n)=φ(n)=6{\displaystyle \lambda (n)=\varphi (n)=6}andφ(λ(n))=2{\displaystyle \varphi (\lambda (n))=2}. There are two primitiveλ-roots modulo 9, namely 2 and 5, each of which is congruent to the fifth power of the other. They are also both primitiveφ{\displaystyle \varphi }-roots modulo 9.
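The examples above can be verified by brute force: the sketch below computes every unit's multiplicative order and returns the elements whose order equals the maximum, which for these finite abelian groups is the exponent λ(n).

```python
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    # Smallest k >= 1 with a^k ≡ 1 (mod n); requires gcd(a, n) = 1.
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def primitive_lambda_roots(n: int) -> list:
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    orders = {a: multiplicative_order(a, n) for a in units}
    lam = max(orders.values())          # the exponent of the group, i.e. λ(n)
    return [a for a in units if orders[a] == lam]

print(primitive_lambda_roots(15))  # [2, 7, 8, 13]
print(primitive_lambda_roots(9))   # [2, 5]
```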
In this section, an integer n is divisible by a nonzero integer m if there exists an integer k such that n = km. This is written as m | n.
Suppose a^m ≡ 1 (mod n) for all numbers a coprime with n. Then λ(n) | m.
Proof: If m = kλ(n) + r with 0 ≤ r < λ(n), then 1 ≡ a^m = a^(kλ(n)+r) = (a^(λ(n)))^k · a^r ≡ a^r (mod n)
for all numbersacoprime withn. It follows thatr= 0sincer<λ(n)andλ(n)is the minimal positive exponent for which the congruence holds for allacoprime withn.
λ(n) divides φ(n). This follows from elementary group theory, because the exponent of any finite group must divide the order of the group. λ(n) is the exponent of the multiplicative group of integers modulo n while φ(n) is the order of that group. In particular, the two must be equal in the cases where the multiplicative group is cyclic due to the existence of a primitive root, which is the case for odd prime powers.
We can thus view Carmichael's theorem as a sharpening ofEuler's theorem.
Proof that if a divides b, then λ(a) divides λ(b):
By definition, for any integerk{\displaystyle k}withgcd(k,b)=1{\displaystyle \gcd(k,b)=1}(and thus alsogcd(k,a)=1{\displaystyle \gcd(k,a)=1}), we have thatb|(kλ(b)−1){\displaystyle b\,|\,(k^{\lambda (b)}-1)}, and thereforea|(kλ(b)−1){\displaystyle a\,|\,(k^{\lambda (b)}-1)}. This establishes thatkλ(b)≡1(moda){\displaystyle k^{\lambda (b)}\equiv 1{\pmod {a}}}for allkrelatively prime toa. By the consequence of minimality proved above, we haveλ(a)|λ(b){\displaystyle \lambda (a)\,|\,\lambda (b)}.
For all positive integers a and b it holds that λ(lcm(a, b)) = lcm(λ(a), λ(b)).
This is an immediate consequence of the recurrence for the Carmichael function.
If r_max = max_i {r_i} is the biggest exponent in the prime factorization n = p_1^(r_1) p_2^(r_2) ⋯ p_k^(r_k) of n, then for all a (including those not coprime to n) and all r ≥ r_max, a^r ≡ a^(λ(n)+r) (mod n).
In particular, for square-free n (r_max = 1), for all a we have a^(λ(n)+1) ≡ a (mod n).
For anyn≥ 16:[6][7]
(called Erdős approximation in the following) with the constant
andγ≈ 0.57721, theEuler–Mascheroni constant.
The following table gives some overview over the first 2^26 − 1 = 67108863 values of the λ function, for both the exact average and its Erdős approximation.
Additionally given is some overview over the more easily accessible “logarithm over logarithm” values LoL(n) := ln λ(n) / ln n, where LoL(n) > 4/5 is equivalent to λ(n) > n^(4/5).
There, the table entry in row number 26 at column LoL(n) > 4/5 indicates that 60.49% (≈ 40000000) of the integers 1 ≤ n ≤ 67108863 have λ(n) > n^(4/5), meaning that the majority of the λ values is exponential in the length l := log_2(n) of the input n, namely λ(n) > (2^(4/5))^l = 2^(4l/5).
For all numbersNand all buto(N)[8]positive integersn≤N(a "prevailing" majority):
with the constant[7]
For any sufficiently large numberNand for anyΔ ≥ (ln lnN)3, there are at most
positive integers n ≤ N such that λ(n) ≤ n·e^(−Δ).[9]
For any sequencen1<n2<n3< ⋯of positive integers, any constant0 <c<1/ln 2, and any sufficiently largei:[10][11]
For a constantcand any sufficiently large positiveA, there exists an integern>Asuch that[11]
Moreover,nis of the form
for some square-free integer m < (ln A)^(c ln ln ln A).[10]
The set of values of the Carmichael function has counting function[12]
where
The Carmichael function is important incryptographydue to its use in theRSA encryption algorithm.
For n = p, a prime, Theorem 1 is equivalent to Fermat's little theorem: a^(p−1) ≡ 1 (mod p) for all a coprime to p.
For prime powers p^r, r > 1, if a^(φ(p^(r−1))) = 1 + h·p^(r−1)
holds for some integer h, then raising both sides to the power p gives a^(φ(p^r)) = 1 + h′·p^r
for some other integerh′{\displaystyle h'}. By induction it follows thataφ(pr)≡1(modpr){\displaystyle a^{\varphi (p^{r})}\equiv 1{\pmod {p^{r}}}}for allarelatively prime topand hence topr. This establishes the theorem forn= 4or any odd prime power.
For a coprime to (powers of) 2 we have a = 1 + 2h_2 for some integer h_2. Then, a^2 = 1 + 4h_2(h_2 + 1) = 1 + 8·(h_2(h_2 + 1)/2) =: 1 + 8h_3,
where h_3 is an integer. With r = 3, this is written a^(2^(r−2)) = 1 + 2^r·h_r.
Squaring both sides gives a^(2^(r−1)) = (1 + 2^r·h_r)^2 = 1 + 2^(r+1)·(h_r + 2^(r−1)·h_r^2) = 1 + 2^(r+1)·h_(r+1),
where h_(r+1) is an integer. It follows by induction that a^(2^(r−2)) = a^(λ(2^r)) ≡ 1 (mod 2^r)
for allr≥3{\displaystyle r\geq 3}and allacoprime to2r{\displaystyle 2^{r}}.[13]
By the unique factorization theorem, any n > 1 can be written in a unique way as n = p_1^(r_1) p_2^(r_2) ⋯ p_k^(r_k)
where p_1 < p_2 < ... < p_k are primes and r_1, r_2, ..., r_k are positive integers. The results for prime powers establish that, for 1 ≤ j ≤ k, a^(λ(p_j^(r_j))) ≡ 1 (mod p_j^(r_j)) whenever a is coprime to p_j^(r_j), and in particular whenever a is coprime to n.
From this it follows that a^(λ(n)) ≡ 1 (mod p_j^(r_j)) for each j,
where, as given by the recurrence, λ(n) = lcm(λ(p_1^(r_1)), λ(p_2^(r_2)), ..., λ(p_k^(r_k))).
From the Chinese remainder theorem one concludes that a^(λ(n)) ≡ 1 (mod n) for all a coprime to n.
|
https://en.wikipedia.org/wiki/Carmichael_function
|
Inabstract algebra,group theorystudies thealgebraic structuresknown asgroups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such asrings,fields, andvector spaces, can all be seen as groups endowed with additionaloperationsandaxioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra.Linear algebraic groupsandLie groupsare two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such ascrystalsand thehydrogen atom, andthree of the fourknown fundamental forces in the universe, may be modelled bysymmetry groups. Thus group theory and the closely relatedrepresentation theoryhave many important applications inphysics,chemistry, andmaterials science. Group theory is also central topublic key cryptography.
The earlyhistory of group theorydates from the 19th century. One of the most important mathematical achievements of the 20th century[1]was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a completeclassification of finite simple groups.
Group theory has three main historical sources:number theory, the theory ofalgebraic equations, andgeometry. The number-theoretic strand was begun byLeonhard Euler, and developed byGauss'swork onmodular arithmeticand additive and multiplicative groups related toquadratic fields. Early results about permutation groups were obtained byLagrange,Ruffini, andAbelin their quest for general solutions of polynomial equations of high degree.Évariste Galoiscoined the term "group" and established a connection, now known asGalois theory, between the nascent theory of groups andfield theory. In geometry, groups first became important inprojective geometryand, later,non-Euclidean geometry.Felix Klein'sErlangen programproclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability ofpolynomial equations.Arthur CayleyandAugustin Louis Cauchypushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems fromgeometricalsituations. In an attempt to come to grips with possible geometries (such aseuclidean,hyperbolicorprojective geometry) using group theory,Felix Kleininitiated theErlangen programme.Sophus Lie, in 1884, started using groups (now calledLie groups) attached toanalyticproblems. Thirdly, groups were, at first implicitly and later explicitly, used inalgebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth ofabstract algebrain the early 20th century,representation theory, and many more influential spin-off domains. Theclassification of finite simple groupsis a vast body of work from the mid 20th century, classifying all thefinitesimple groups.
The range of groups being considered has gradually expanded from finite permutation groups and special examples ofmatrix groupsto abstract groups that may be specified through apresentationbygeneratorsandrelations.
The firstclassof groups to undergo a systematic study waspermutation groups. Given any setXand a collectionGofbijectionsofXinto itself (known aspermutations) that is closed under compositions and inverses,Gis a groupactingonX. IfXconsists ofnelements andGconsists ofallpermutations,Gis thesymmetric groupSn; in general, any permutation groupGis asubgroupof the symmetric group ofX. An early construction due toCayleyexhibited any group as a permutation group, acting on itself (X=G) by means of the leftregular representation.
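To make the definition concrete, the following Python sketch encodes permutations of {0, 1, 2} as tuples and checks that the full symmetric group S3 is closed under composition and inverses; the tuple encoding is an arbitrary illustrative choice.

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); a permutation is a tuple mapping index i to p[i].
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = set(permutations(range(3)))   # all bijections of {0, 1, 2}
identity = (0, 1, 2)

# Closure under composition and inverses is what makes this a permutation group.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(compose(p, inverse(p)) == identity for p in S3)
```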
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that forn≥ 5, thealternating groupAnissimple, i.e. does not admit any propernormal subgroups. This fact plays a key role in theimpossibility of solving a general algebraic equation of degreen≥ 5in radicals.
The next important class of groups is given bymatrix groups, orlinear groups. HereGis a set consisting of invertiblematricesof given ordernover afieldKthat is closed under the products and inverses. Such a group acts on then-dimensional vector spaceKnbylinear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the groupG.
Permutation groups and matrix groups are special cases oftransformation groups: groups that act on a certain spaceXpreserving its inherent structure. In the case of permutation groups,Xis a set; for matrix groups,Xis avector space. The concept of a transformation group is closely related with the concept of asymmetry group: transformation groups frequently consist ofalltransformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory withdifferential geometry. A long line of research, originating withLieandKlein, considers group actions onmanifoldsbyhomeomorphismsordiffeomorphisms. The groups themselves may bediscreteorcontinuous.
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements is ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations, G = ⟨S ∣ R⟩.
A significant source of abstract groups is given by the construction of afactor group, orquotient group,G/H, of a groupGby anormal subgroupH.Class groupsofalgebraic number fieldswere among the earliest examples of factor groups, of much interest innumber theory. If a groupGis a permutation group on a setX, the factor groupG/His no longer acting onX; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant underisomorphism, as well as the classes of group with a given such property:finite groups,periodic groups,simple groups,solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation ofabstract algebrain the works ofHilbert,Emil Artin,Emmy Noether, and mathematicians of their school.[citation needed]
An important elaboration of the concept of a group occurs ifGis endowed with additional structure, notably, of atopological space,differentiable manifold, oralgebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they arecontinuous,smoothorregular(in the sense of algebraic geometry) maps, thenGis atopological group, aLie group, or analgebraic group.[2]
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain forabstract harmonic analysis, whereasLie groups(frequently realized as transformation groups) are the mainstays ofdifferential geometryand unitaryrepresentation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus,compact connected Lie groupshave been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a groupΓcan be realized as alatticein a topological groupG, the geometry and analysis pertaining toGyield important results aboutΓ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a singlep-adic analytic groupGhas a family of quotients which are finitep-groupsof various orders, and properties ofGtranslate into the properties of its finite quotients.
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially thelocal theoryof finite groups and the theory ofsolvableandnilpotent groups.[citation needed]As a consequence, the completeclassification of finite simple groupswas achieved, meaning that all thosesimple groupsfrom which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such asChevalleyandSteinbergalso increased our understanding of finite analogs ofclassical groups, and other related groups. One such family of groups is the family ofgeneral linear groupsoverfinite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism ρ : G → GL(V),
whereGL(V) consists of the invertiblelinear transformationsofV. In other words, to every group elementgis assigned anautomorphismρ(g) such thatρ(g) ∘ρ(h) =ρ(gh)for anyhinG.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[3]On the one hand, it may yield new information about the groupG: often, the group operation inGis abstractly given, but viaρ, it corresponds to themultiplication of matrices, which is very explicit.[4]On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, ifGis finite, it is known thatVabove decomposes intoirreducible parts(seeMaschke's theorem). These parts, in turn, are much more easily manageable than the wholeV(viaSchur's lemma).
Given a groupG,representation theorythen asks what representations ofGexist. There are several settings, and the employed methods and obtained results are rather different in every case:representation theory of finite groupsand representations ofLie groupsare two main subdomains of the theory. The totality of representations is governed by the group'scharacters. For example,Fourier polynomialscan be interpreted as the characters ofU(1), the group ofcomplex numbersofabsolute value1, acting on theL2-space of periodic functions.
ALie groupis agroupthat is also adifferentiable manifold, with the property that the group operations are compatible with thesmooth structure. Lie groups are named afterSophus Lie, who laid the foundations of the theory of continuoustransformation groups. The termgroupes de Liefirst appeared in French in 1893 in the thesis of Lie's studentArthur Tresse, page 3.[5]
Lie groups represent the best-developed theory ofcontinuous symmetryofmathematical objectsandstructures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for moderntheoretical physics. They provide a natural framework for analysing the continuous symmetries ofdifferential equations(differential Galois theory), in much the same way as permutation groups are used inGalois theoryfor analysing the discrete symmetries ofalgebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Groups can be described in different ways. Finite groups can be described by writing down thegroup tableconsisting of all possible multiplicationsg•h. A more compact way of defining a group is bygenerators and relations, also called thepresentationof a group. Given any setFof generators{gi}i∈I{\displaystyle \{g_{i}\}_{i\in I}}, thefree groupgenerated byFsurjects onto the groupG. The kernel of this map is called the subgroup of relations, generated by some subsetD. The presentation is usually denoted by⟨F∣D⟩.{\displaystyle \langle F\mid D\rangle .}For example, the group presentation⟨a,b∣aba−1b−1⟩{\displaystyle \langle a,b\mid aba^{-1}b^{-1}\rangle }describes a group which is isomorphic toZ×Z.{\displaystyle \mathbb {Z} \times \mathbb {Z} .}A string consisting of generator symbols and their inverses is called aword.
Combinatorial group theorystudies groups from the perspective of generators and relations.[6]It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection ofgraphsvia theirfundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. Theword problemasks whether two words are effectively the same group element. By relating the problem toTuring machines, one can show that there is in general noalgorithmsolving this task. Another, generally harder, algorithmically insoluble problem is thegroup isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation⟨x,y∣xyxyx=e⟩,{\displaystyle \langle x,y\mid xyxyx=e\rangle ,}is isomorphic to the additive groupZof integers, although this may not be immediately apparent. (Writingz=xy{\displaystyle z=xy}, one hasG≅⟨z,y∣z3=y⟩≅⟨z⟩.{\displaystyle G\cong \langle z,y\mid z^{3}=y\rangle \cong \langle z\rangle .})
Geometric group theoryattacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[7]The first idea is made precise by means of theCayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs theword metricgiven by the length of the minimal path between the elements. A theorem ofMilnorand Svarc then says that given a groupGacting in a reasonable manner on ametric spaceX, for example acompact manifold, thenGisquasi-isometric(i.e. looks similar from a distance) to the spaceX.
Given a structured objectXof any sort, asymmetryis a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
The axioms of a group formalize the essential aspects ofsymmetry. Symmetries form a group: they areclosedbecause if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theoremsays that every group is the symmetry group of somegraph. So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in acategory. Maps preserving the structure are then themorphisms, and the symmetry group is theautomorphism groupof the object in question.
Applications of group theory abound. Almost all structures inabstract algebraare special cases of groups.Rings, for example, can be viewed asabelian groups(corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theoryuses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). Thefundamental theorem of Galois theoryprovides a link betweenalgebraic field extensionsand group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the correspondingGalois group. For example,S5, thesymmetric groupin 5 elements, is not solvable which implies that the generalquintic equationcannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such asclass field theory.
Algebraic topologyis another domain which prominentlyassociatesgroups to the objects the theory is interested in. There, groups are used to describe certain invariants oftopological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to somedeformation. For example, thefundamental group"counts" how many paths in the space are essentially different. ThePoincaré conjecture, proved in 2002/2003 byGrigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use ofEilenberg–MacLane spaceswhich are spaces with prescribedhomotopy groups. Similarlyalgebraic K-theoryrelies in a way onclassifying spacesof groups. Finally, the name of thetorsion subgroupof an infinite group shows the legacy of topology in group theory.
Algebraic geometrylikewise uses group theory in many ways.Abelian varietieshave been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example theHodge conjecture(in certain cases).) The one-dimensional case, namelyelliptic curvesis studied in particular detail. They are both theoretically and practically intriguing.[8]In another direction,toric varietiesarealgebraic varietiesacted on by atorus. Toroidal embeddings have recently led to advances inalgebraic geometry, in particularresolution of singularities.[9]
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula, ∑_{n≥1} 1/n^s = ∏_{p prime} 1/(1 − p^(−s)),
capturesthe factthat any integer decomposes in a unique way intoprimes. The failure of this statement formore general ringsgives rise toclass groupsandregular primes, which feature inKummer'streatment ofFermat's Last Theorem.
Analysis on Lie groups and certain other groups is calledharmonic analysis.Haar measures, that is, integrals invariant under the translation in a Lie group, are used forpattern recognitionand otherimage processingtechniques.[10]
Incombinatorics, the notion ofpermutationgroup and the concept of group action are often used to simplify the counting of a set of objects; see in particularBurnside's lemma.
The presence of the 12-periodicityin thecircle of fifthsyields applications ofelementary group theoryinmusical set theory.Transformational theorymodels musical transformations as elements of a mathematical group.
Inphysics, groups are important because they describe the symmetries which the laws of physics seem to obey. According toNoether's theorem, every continuous symmetry of a physical system corresponds to aconservation lawof the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include theStandard Model,gauge theory, theLorentz group, and thePoincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed byWillard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.[11]
Inchemistryandmaterials science,point groupsare used to classify regular polyhedra, and thesymmetries of molecules, andspace groupsto classifycrystal structures. The assigned groups can then be used to determine physical properties (such aschemical polarityandchirality), spectroscopic properties (particularly useful forRaman spectroscopy,infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to constructmolecular orbitals.
Molecular symmetryis responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
Inchemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of achiralmolecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, wherenis an integer, about a rotation axis. For example, if awatermolecule rotates 180° around the axis that passes through theoxygenatom and between thehydrogenatoms, it is in the same configuration as it started. In this case,n= 2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cnaxis having the largest value of n is the highest order rotation axis or principal axis. For example inboron trifluoride(BF3), the highest order of rotation axis isC3, so the principal axis of rotation isC3.
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is calledσh(horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i ) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example,methaneand othertetrahedralmolecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. And the last operation is improper rotation or rotation reflection operation (Sn) requires rotation of 360°/n, followed by reflection through a plane perpendicular to the axis of rotation.
Very large groups of prime order constructed inelliptic curve cryptographyserve forpublic-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make thediscrete logarithmvery hard to calculate. One of the earliest encryption protocols,Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particularDiffie–Hellman key exchangeuses finitecyclic groups. So the termgroup-based cryptographyrefers mostly tocryptographic protocolsthat use infinitenon-abelian groupssuch as abraid group.
|
https://en.wikipedia.org/wiki/Group_theory
|
Inmodular arithmetic, theintegerscoprime(relatively prime) tonfrom the set{0,1,…,n−1}{\displaystyle \{0,1,\dots ,n-1\}}ofnnon-negative integers form agroupunder multiplicationmodulon, called themultiplicative group of integers modulon. Equivalently, the elements of this group can be thought of as thecongruence classes, also known asresiduesmodulon, that are coprime ton.
Hence another name is the group ofprimitive residue classes modulon.
In thetheory of rings, a branch ofabstract algebra, it is described as thegroup of unitsof thering of integers modulon. Hereunitsrefers to elements with amultiplicative inverse, which, in this ring, are exactly those coprime ton.
This group, usually denoted(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}, is fundamental innumber theory. It is used incryptography,integer factorization, andprimality testing. It is anabelian,finitegroup whoseorderis given byEuler's totient function:|(Z/nZ)×|=φ(n).{\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n).}For primenthe group iscyclic, and in general the structure is easy to describe, but no simple general formula for findinggeneratorsis known.
It is a straightforward exercise to show that, under multiplication, the set ofcongruence classesmodulonthat are coprime tonsatisfies the axioms for anabelian group.
Indeed,ais coprime tonif and only ifgcd(a,n) = 1. Integers in the same congruence classa≡b(modn)satisfygcd(a,n) = gcd(b,n); hence one is coprime tonif and only if the other is. Thus the notion of congruence classes modulonthat are coprime tonis well-defined.
Sincegcd(a,n) = 1andgcd(b,n) = 1impliesgcd(ab,n) = 1, the set of classes coprime tonis closed under multiplication.
Integer multiplication respects the congruence classes; that is,a≡a'andb≡b'(modn)impliesab≡a'b'(modn).
This implies that the multiplication is associative, commutative, and that the class of 1 is the unique multiplicative identity.
Finally, givena, themultiplicative inverseofamodulonis an integerxsatisfyingax≡ 1 (modn).
It exists precisely whenais coprime ton, because in that casegcd(a,n) = 1and byBézout's lemmathere are integersxandysatisfyingax+ny= 1. Notice that the equationax+ny= 1implies thatxis coprime ton, so the multiplicative inverse belongs to the group.
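The argument above is effective: the extended Euclidean algorithm produces the Bézout coefficients, and hence the inverse, whenever gcd(a, n) = 1. A minimal sketch (the helper names are illustrative):

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a: int, n: int) -> int:
    """Multiplicative inverse of a modulo n; requires gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {n}")
    return x % n

# 3 * 7 = 21 ≡ 1 (mod 10), so the inverse of 3 modulo 10 is 7
assert mod_inverse(3, 10) == 7
assert (3 * mod_inverse(3, 10)) % 10 == 1
```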
The set of (congruence classes of) integers modulonwith the operations of addition and multiplication is aring.
It is denotedZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }orZ/(n){\displaystyle \mathbb {Z} /(n)}(the notation refers to taking thequotientof integers modulo theidealnZ{\displaystyle n\mathbb {Z} }or(n){\displaystyle (n)}consisting of the multiples ofn).
Outside of number theory the simpler notationZn{\displaystyle \mathbb {Z} _{n}}is often used, though it can be confused with thep-adic integerswhennis a prime number.
The multiplicative group of integers modulon, which is thegroup of unitsin this ring, may be written as (depending on the author)(Z/nZ)×,{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times },}(Z/nZ)∗,{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{*},}U(Z/nZ),{\displaystyle \mathrm {U} (\mathbb {Z} /n\mathbb {Z} ),}E(Z/nZ){\displaystyle \mathrm {E} (\mathbb {Z} /n\mathbb {Z} )}(for GermanEinheit, which translates asunit),Zn∗{\displaystyle \mathbb {Z} _{n}^{*}}, or similar notations. This article uses(Z/nZ)×.{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }.}
The notationCn{\displaystyle \mathrm {C} _{n}}refers to thecyclic groupof ordern.
It isisomorphicto the group of integers modulonunder addition.
Note thatZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }orZn{\displaystyle \mathbb {Z} _{n}}may also refer to the group under addition.
For example, the multiplicative group(Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }}for a primepis cyclic and hence isomorphic to the additive groupZ/(p−1)Z{\displaystyle \mathbb {Z} /(p-1)\mathbb {Z} }, but the isomorphism is not obvious.
The order of the multiplicative group of integers modulonis the number of integers in{0,1,…,n−1}{\displaystyle \{0,1,\dots ,n-1\}}coprime ton. It is given byEuler's totient function:|(Z/nZ)×|=φ(n){\displaystyle |(\mathbb {Z} /n\mathbb {Z} )^{\times }|=\varphi (n)}(sequenceA000010in theOEIS).
For primep,φ(p)=p−1{\displaystyle \varphi (p)=p-1}.
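As a quick sanity check, the totient computed from the prime factorization of n agrees with a direct count of the residues coprime to n; a small sketch (euler_phi is an illustrative helper, not a library function):

```python
from math import gcd

def euler_phi(n: int) -> int:
    """Euler's totient, computed from a trial-division factorization of n."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

for n in (1, 7, 10, 12, 100):
    coprime_count = sum(1 for a in range(n) if gcd(a, n) == 1)
    assert coprime_count == euler_phi(n)
print(euler_phi(10), euler_phi(12))   # 4 4
```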
The group(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}iscyclicif and only ifnis 1, 2, 4,pkor 2pk, wherepis an odd prime andk> 0. For all other values ofnthe group is not cyclic.[1][2][3]This was first proved byGauss.[4]
This means that for thesenthe group(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is isomorphic to the cyclic group of orderφ(n){\displaystyle \varphi (n)}.
By definition, the group is cyclic if and only if it has ageneratorg; that is, the powersg0,g1,g2,…,{\displaystyle g^{0},g^{1},g^{2},\dots ,}give all possible residues moduloncoprime ton(the firstφ(n){\displaystyle \varphi (n)}powersg0,…,gφ(n)−1{\displaystyle g^{0},\dots ,g^{\varphi (n)-1}}give each exactly once).
A generator of(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is called aprimitive root modulon.[5]If there is any generator, then there areφ(φ(n)){\displaystyle \varphi (\varphi (n))}of them.
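Primitive roots can be found by brute force, testing the multiplicative order of each candidate against φ(n); a sketch for small n (the function names are illustrative):

```python
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    """Smallest k > 0 with a^k ≡ 1 (mod n); requires gcd(a, n) == 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def smallest_primitive_root(n: int):
    """Smallest generator of (Z/nZ)^× for n >= 2, or None if the group is not cyclic."""
    phi = sum(1 for a in range(1, n) if gcd(a, n) == 1)
    for g in range(1, n):
        if gcd(g, n) == 1 and multiplicative_order(g, n) == phi:
            return g
    return None

print(smallest_primitive_root(7))    # 3
print(smallest_primitive_root(14))   # 3
print(smallest_primitive_root(8))    # None: (Z/8Z)^× is not cyclic
```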
Modulo 1 any two integers are congruent, i.e., there is only one congruence class, [0], coprime to 1. Therefore,(Z/1Z)×≅C1{\displaystyle (\mathbb {Z} /1\,\mathbb {Z} )^{\times }\cong \mathrm {C} _{1}}is the trivial group withφ(1) = 1element. Because of its trivial nature, the case of congruences modulo 1 is generally ignored and some authors choose not to include the case ofn= 1 in theorem statements.
Modulo 2 there is only one coprime congruence class, [1], so(Z/2Z)×≅C1{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{\times }\cong \mathrm {C} _{1}}is thetrivial group.
Modulo 4 there are two coprime congruence classes, [1] and [3], so(Z/4Z)×≅C2,{\displaystyle (\mathbb {Z} /4\mathbb {Z} )^{\times }\cong \mathrm {C} _{2},}the cyclic group with two elements.
Modulo 8 there are four coprime congruence classes, [1], [3], [5] and [7]. The square of each of these is 1, so(Z/8Z)×≅C2×C2,{\displaystyle (\mathbb {Z} /8\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}theKlein four-group.
Modulo 16 there are eight coprime congruence classes [1], [3], [5], [7], [9], [11], [13] and [15].{±1,±7}≅C2×C2,{\displaystyle \{\pm 1,\pm 7\}\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}is the 2-torsion subgroup(i.e., the square of each element is 1), so(Z/16Z)×{\displaystyle (\mathbb {Z} /16\mathbb {Z} )^{\times }}is not cyclic. The powers of 3,{1,3,9,11}{\displaystyle \{1,3,9,11\}}are a subgroup of order 4, as are the powers of 5,{1,5,9,13}.{\displaystyle \{1,5,9,13\}.}Thus(Z/16Z)×≅C2×C4.{\displaystyle (\mathbb {Z} /16\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{4}.}
The pattern shown by 8 and 16 holds[6]for higher powers 2k,k> 2:{±1,2k−1±1}≅C2×C2,{\displaystyle \{\pm 1,2^{k-1}\pm 1\}\cong \mathrm {C} _{2}\times \mathrm {C} _{2},}is the 2-torsion subgroup, so(Z/2kZ)×{\displaystyle (\mathbb {Z} /2^{k}\mathbb {Z} )^{\times }}cannot be cyclic, and the powers of 3 are a cyclic subgroup of order2k− 2, so:
(Z/2kZ)×≅C2×C2k−2.{\displaystyle (\mathbb {Z} /2^{k}\mathbb {Z} )^{\times }\cong \mathrm {C} _{2}\times \mathrm {C} _{2^{k-2}}.}
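For small k this can be verified by direct enumeration: modulo 2^k the powers of 3 form a subgroup of order 2^(k−2) that does not contain −1, and together with its negatives they cover every odd residue exactly once. A sketch of that check (the function name is illustrative):

```python
def check_two_power_structure(k: int) -> bool:
    """Verify (Z/2^k Z)^× ≅ C2 × C_{2^(k-2)} for k > 2 by explicit enumeration."""
    n = 2 ** k
    powers_of_3 = set()
    x = 1
    for _ in range(2 ** (k - 2)):
        powers_of_3.add(x)
        x = (x * 3) % n
    # powers of 3: a cyclic subgroup of order 2^(k-2) not containing -1;
    # {±1} times that subgroup covers all odd residues modulo 2^k
    covered = powers_of_3 | {(-y) % n for y in powers_of_3}
    return (len(powers_of_3) == 2 ** (k - 2)
            and (n - 1) not in powers_of_3
            and covered == set(range(1, n, 2)))

print(all(check_two_power_structure(k) for k in range(3, 12)))   # True
```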
By thefundamental theorem of finite abelian groups, the group(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is isomorphic to adirect productof cyclic groups of prime power orders.
More specifically, theChinese remainder theorem[7]says that ifn=p1k1p2k2p3k3…,{\displaystyle \;\;n=p_{1}^{k_{1}}p_{2}^{k_{2}}p_{3}^{k_{3}}\dots ,\;}then the ringZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }is thedirect productof the rings corresponding to each of its prime power factors:Z/nZ≅Z/p1k1Z×Z/p2k2Z×Z/p3k3Z×⋯{\displaystyle \mathbb {Z} /n\mathbb {Z} \cong \mathbb {Z} /p_{1}^{k_{1}}\mathbb {Z} \times \mathbb {Z} /p_{2}^{k_{2}}\mathbb {Z} \times \mathbb {Z} /p_{3}^{k_{3}}\mathbb {Z} \times \cdots }
Similarly, the group of units(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}is the direct product of the groups corresponding to each of the prime power factors:(Z/nZ)×≅(Z/p1k1Z)××(Z/p2k2Z)××(Z/p3k3Z)××⋯{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }\cong (\mathbb {Z} /p_{1}^{k_{1}}\mathbb {Z} )^{\times }\times (\mathbb {Z} /p_{2}^{k_{2}}\mathbb {Z} )^{\times }\times (\mathbb {Z} /p_{3}^{k_{3}}\mathbb {Z} )^{\times }\times \cdots }
For each odd prime powerpk{\displaystyle p^{k}}the corresponding factor(Z/pkZ)×{\displaystyle (\mathbb {Z} /{p^{k}}\mathbb {Z} )^{\times }}is the cyclic group of orderφ(pk)=pk−pk−1{\displaystyle \varphi (p^{k})=p^{k}-p^{k-1}}, which may further factor into cyclic groups of prime-power orders.
For powers of 2 the factor(Z/2kZ)×{\displaystyle (\mathbb {Z} /{2^{k}}\mathbb {Z} )^{\times }}is not cyclic unlessk= 0, 1, 2, but factors into cyclic groups as described above.
The order of the groupφ(n){\displaystyle \varphi (n)}is the product of the orders of the cyclic groups in the direct product.
Theexponentof the group, that is, theleast common multipleof the orders of the cyclic groups, is given by theCarmichael functionλ(n){\displaystyle \lambda (n)}(sequenceA002322in theOEIS).
In other words,λ(n){\displaystyle \lambda (n)}is the smallest number such that for eachacoprime ton,aλ(n)≡1(modn){\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}}holds.
It dividesφ(n){\displaystyle \varphi (n)}and is equal to it if and only if the group is cyclic.
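A brute-force sketch of λ(n) as the least common multiple of the element orders, compared with φ(n) to see for which n the group is cyclic (helper names are illustrative; math.lcm needs Python 3.9+):

```python
from math import gcd, lcm

def element_order(a: int, n: int) -> int:
    """Multiplicative order of a modulo n; requires gcd(a, n) == 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def carmichael_lambda(n: int) -> int:
    """Exponent of (Z/nZ)^× for n >= 2: the lcm of the orders of its elements."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    return lcm(*(element_order(a, n) for a in units))

for n in (8, 9, 15, 16, 20):
    phi = sum(1 for a in range(1, n) if gcd(a, n) == 1)
    lam = carmichael_lambda(n)
    print(n, phi, lam, "cyclic" if phi == lam else "not cyclic")
# 8: 4 2 not cyclic / 9: 6 6 cyclic / 15: 8 4 not cyclic / 16: 8 4 not cyclic / 20: 8 4 not cyclic
```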
Ifnis composite, there exists a possibly proper subgroup ofZn×{\displaystyle \mathbb {Z} _{n}^{\times }}, called the "group of false witnesses", comprising the solutions of the equationxn−1=1{\displaystyle x^{n-1}=1}, the elements which, raised to the powern− 1, are congruent to 1 modulon.[8]Fermat's Little Theoremstates that forn = pa prime, this group consists of allx∈Zp×{\displaystyle x\in \mathbb {Z} _{p}^{\times }}; thus forncomposite, such residuesxare "false positives" or "false witnesses" for the primality ofn. The numberx =2 is most often used in this basic primality check, andn =341 = 11 × 31is notable since2341−1≡1mod341{\displaystyle 2^{341-1}\equiv 1\mod 341}, andn =341 is the smallest composite number for whichx =2 is a false witness to primality. In fact, the false witnesses subgroup for 341 contains 100 elements, and is of index 3 inside the 300-element groupZ341×{\displaystyle \mathbb {Z} _{341}^{\times }}.
The smallest example with a nontrivial subgroup of false witnesses is9 = 3 × 3. There are 6 residues coprime to 9: 1, 2, 4, 5, 7, 8. Since 8 is congruent to−1 modulo 9, it follows that88{\displaystyle 8^{8}}is congruent to 1 modulo 9. So 1 and 8 are false positives for the "primality" of 9 (since 9 is not actually prime). These are in fact the only ones, so the subgroup {1,8} is the subgroup of false witnesses. The same argument shows thatn− 1is a "false witness" for any odd compositen.
Forn= 91 (= 7 × 13), there areφ(91)=72{\displaystyle \varphi (91)=72}residues coprime to 91; half of them (i.e., 36 of them) are false witnesses of 91, namely 1, 3, 4, 9, 10, 12, 16, 17, 22, 23, 25, 27, 29, 30, 36, 38, 40, 43, 48, 51, 53, 55, 61, 62, 64, 66, 68, 69, 74, 75, 79, 81, 82, 87, 88, and 90, since for these values ofx,x90{\displaystyle x^{90}}is congruent to 1 mod 91.
n= 561 (= 3 × 11 × 17) is aCarmichael number, thuss560{\displaystyle s^{560}}is congruent to 1 modulo 561 for any integerscoprime to 561. The subgroup of false witnesses is, in this case, not proper; it is the entire group of multiplicative units modulo 561, which consists of 320 residues.
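The false-witness subgroups described above can be enumerated directly; the following sketch reproduces the counts for 9, 91, 341 and 561 (the function name is illustrative):

```python
from math import gcd

def false_witnesses(n: int):
    """Units x modulo n with x^(n-1) ≡ 1 (mod n); for prime n this is the whole group."""
    return [x for x in range(1, n) if gcd(x, n) == 1 and pow(x, n - 1, n) == 1]

for n in (9, 91, 341, 561):
    units = sum(1 for x in range(1, n) if gcd(x, n) == 1)
    print(n, len(false_witnesses(n)), "of", units)
# 9: 2 of 6   91: 36 of 72   341: 100 of 300   561: 320 of 320
```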
This table shows the cyclic decomposition of(Z/nZ)×{\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }}and agenerating setforn≤ 128. The decomposition and generating sets are not unique; for example,
(Z/35Z)×≅(Z/5Z)××(Z/7Z)×≅C4×C6≅C4×C2×C3≅C2×C12≅(Z/4Z)××(Z/13Z)×≅(Z/52Z)×{\displaystyle \displaystyle {\begin{aligned}(\mathbb {Z} /35\mathbb {Z} )^{\times }&\cong (\mathbb {Z} /5\mathbb {Z} )^{\times }\times (\mathbb {Z} /7\mathbb {Z} )^{\times }\cong \mathrm {C} _{4}\times \mathrm {C} _{6}\cong \mathrm {C} _{4}\times \mathrm {C} _{2}\times \mathrm {C} _{3}\cong \mathrm {C} _{2}\times \mathrm {C} _{12}\cong (\mathbb {Z} /4\mathbb {Z} )^{\times }\times (\mathbb {Z} /13\mathbb {Z} )^{\times }\\&\cong (\mathbb {Z} /52\mathbb {Z} )^{\times }\end{aligned}}}
(but≇C24≅C8×C3{\displaystyle \not \cong \mathrm {C} _{24}\cong \mathrm {C} _{8}\times \mathrm {C} _{3}}). The table below lists the shortest decomposition (among those, the lexicographically first is chosen – this guarantees isomorphic groups are listed with the same decompositions). The generating set is also chosen to be as short as possible, and fornwith primitive root, the smallest primitive root modulonis listed.
For example, take(Z/20Z)×{\displaystyle (\mathbb {Z} /20\mathbb {Z} )^{\times }}. Thenφ(20)=8{\displaystyle \varphi (20)=8}means that the order of the group is 8 (i.e., there are 8 numbers less than 20 and coprime to it);λ(20)=4{\displaystyle \lambda (20)=4}means the order of each element divides 4; that is, the fourth power of any number coprime to 20 is congruent to 1 (mod 20). The set {3,19} generates the group, which means that every element of(Z/20Z)×{\displaystyle (\mathbb {Z} /20\mathbb {Z} )^{\times }}is of the form3a× 19b(whereais 0, 1, 2, or 3, because the element 3 has order 4, and similarlybis 0 or 1, because the element 19 has order 2).
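The example can be checked mechanically: the products 3^a · 19^b modulo 20 must run over all eight units exactly once, and every unit raised to the fourth power must give 1. A short sketch:

```python
from math import gcd

n = 20
units = {a for a in range(1, n) if gcd(a, n) == 1}
generated = {(pow(3, a, n) * pow(19, b, n)) % n for a in range(4) for b in range(2)}

assert len(units) == 8                        # phi(20) = 8
assert all(pow(a, 4, n) == 1 for a in units)  # lambda(20) = 4
assert generated == units                     # {3, 19} generates the group
print(sorted(units))   # [1, 3, 7, 9, 11, 13, 17, 19]
```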
The smallest primitive roots modnare (0 if no root exists):
The numbers of elements in a minimal generating set modnare:
TheDisquisitiones Arithmeticaehas been translated from Gauss'sCiceronian LatinintoEnglishandGerman. The German edition includes all of his papers on number theory: all the proofs ofquadratic reciprocity, the determination of the sign of theGauss sum, the investigations intobiquadratic reciprocity, and unpublished notes.
|
https://en.wikipedia.org/wiki/Multiplicative_group_of_integers_mod_n
|
Inmathematics, and more specifically inring theory, anidealof aringis a specialsubsetof its elements. Ideals generalize certain subsets of theintegers, such as theeven numbersor the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; theseclosureand absorption properties are the defining properties of an ideal. An ideal can be used to construct aquotient ringin a way similar to how, ingroup theory, anormal subgroupcan be used to construct aquotient group.
Among the integers, the ideals correspond one-for-one with thenon-negative integers: in this ring, every ideal is aprincipal idealconsisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, theprime idealsof a ring are analogous toprime numbers, and theChinese remainder theoremcan be generalized to ideals. There is a version ofunique prime factorizationfor the ideals of aDedekind domain(a type of ring important innumber theory).
The related, but distinct, concept of anidealinorder theoryis derived from the notion of ideal in ring theory. Afractional idealis a generalization of an ideal, and the usual ideals are sometimes calledintegral idealsfor clarity.
Ernst Kummerinvented the concept ofideal numbersto serve as the "missing" factors in number rings in which unique factorization fails; here the word "ideal" is in the sense of existing in imagination only, in analogy with "ideal" objects in geometry such as points at infinity.[1]In 1876,Richard Dedekindreplaced Kummer's undefined concept by concrete sets of numbers, sets that he called ideals, in the third edition ofDirichlet's bookVorlesungen über Zahlentheorie, to which Dedekind had added many supplements.[1][2][3]Later the notion was extended beyond number rings to the setting of polynomial rings and other commutative rings byDavid Hilbertand especiallyEmmy Noether.
Given aringR, aleft idealis a subsetIofRthat is asubgroupof theadditive groupofR{\displaystyle R}that "absorbs multiplication from the left by elements ofR{\displaystyle R}"; that is,I{\displaystyle I}is a left ideal if it satisfies the following two conditions: (1)(I,+){\displaystyle (I,+)}is asubgroupof(R,+){\displaystyle (R,+)}, and (2) for everyr∈R{\displaystyle r\in R}and everyx∈I{\displaystyle x\in I}, the productrx{\displaystyle rx}is inI{\displaystyle I}.
In other words, a left ideal is a leftsubmoduleofR, considered as aleft moduleover itself.[5]
Aright idealis defined similarly, with the conditionrx∈I{\displaystyle rx\in I}replaced byxr∈I{\displaystyle xr\in I}. Atwo-sided idealis a left ideal that is also a right ideal.
If the ring iscommutative, the three definitions are the same, and one talks simply of anideal. In the non-commutative case, "ideal" is often used instead of "two-sided ideal".
IfIis a left, right or two-sided ideal, the relationx∼y{\displaystyle x\sim y}if and only ifx−y∈I{\displaystyle x-y\in I}
is anequivalence relationonR, and the set ofequivalence classesforms a left, right or bimodule denotedR/I{\displaystyle R/I}and called thequotientofRbyI.[6](It is an instance of acongruence relationand is a generalization ofmodular arithmetic.)
If the idealIis two-sided,R/I{\displaystyle R/I}is a ring,[7]and the canonical mapR→R/I{\displaystyle R\to R/I}
that associates to each element ofRits equivalence class is asurjectivering homomorphismthat has the ideal as itskernel.[8]Conversely, the kernel of a ring homomorphism is a two-sided ideal. Therefore,the two-sided ideals are exactly the kernels of ring homomorphisms.
By convention, a ring has the multiplicative identity. But some authors do not require a ring to have the multiplicative identity; i.e., for them, a ring is arng. For a rngR, aleft idealIis a subrng with the additional property thatrx{\displaystyle rx}is inIfor everyr∈R{\displaystyle r\in R}and everyx∈I{\displaystyle x\in I}. (Right and two-sided ideals are defined similarly.) For a ring, an idealI(say a left ideal) is rarely a subring; since a subring shares the same multiplicative identity with the ambient ringR, ifIwere a subring, for everyr∈R{\displaystyle r\in R}, we haver=r1∈I;{\displaystyle r=r1\in I;}i.e.,I=R{\displaystyle I=R}.
The notion of an ideal does not involve associativity; thus, an ideal is also defined fornon-associative rings(often without the multiplicative identity) such as aLie algebra.
(For the sake of brevity, some results are stated only for left ideals but are usually also true for right ideals with appropriate notation changes.)
To simplify the description all rings are assumed to be commutative. The non-commutative case is discussed in detail in the respective articles.
Ideals are important because they appear as kernels of ring homomorphisms and allow one to definefactor rings. Different types of ideals are studied because they can be used to construct different types of factor rings.
Two other important terms using "ideal" are not always ideals of their ring. See their respective articles for details:
The sum and product of ideals are defined as follows. Fora{\displaystyle {\mathfrak {a}}}andb{\displaystyle {\mathfrak {b}}}, left (resp. right) ideals of a ringR, their sum isa+b:={a+b∣a∈a,b∈b},{\displaystyle {\mathfrak {a}}+{\mathfrak {b}}:=\{a+b\mid a\in {\mathfrak {a}},\,b\in {\mathfrak {b}}\},}
which is a left (resp. right) ideal,
and, ifa,b{\displaystyle {\mathfrak {a}},{\mathfrak {b}}}are two-sided,ab:={a1b1+⋯+anbn∣ai∈a,bi∈b},{\displaystyle {\mathfrak {a}}{\mathfrak {b}}:=\{a_{1}b_{1}+\cdots +a_{n}b_{n}\mid a_{i}\in {\mathfrak {a}},\,b_{i}\in {\mathfrak {b}}\},}
i.e. the product is the ideal generated by all products of the formabwithaina{\displaystyle {\mathfrak {a}}}andbinb{\displaystyle {\mathfrak {b}}}.
Notea+b{\displaystyle {\mathfrak {a}}+{\mathfrak {b}}}is the smallest left (resp. right) ideal containing botha{\displaystyle {\mathfrak {a}}}andb{\displaystyle {\mathfrak {b}}}(or the uniona∪b{\displaystyle {\mathfrak {a}}\cup {\mathfrak {b}}}), while the productab{\displaystyle {\mathfrak {a}}{\mathfrak {b}}}is contained in the intersection ofa{\displaystyle {\mathfrak {a}}}andb{\displaystyle {\mathfrak {b}}}.
The distributive law holds for two-sided idealsa,b,c{\displaystyle {\mathfrak {a}},{\mathfrak {b}},{\mathfrak {c}}}:a(b+c)=ab+ac{\displaystyle {\mathfrak {a}}({\mathfrak {b}}+{\mathfrak {c}})={\mathfrak {a}}{\mathfrak {b}}+{\mathfrak {a}}{\mathfrak {c}}}and(a+b)c=ac+bc.{\displaystyle ({\mathfrak {a}}+{\mathfrak {b}}){\mathfrak {c}}={\mathfrak {a}}{\mathfrak {c}}+{\mathfrak {b}}{\mathfrak {c}}.}
If a product is replaced by an intersection, a partial distributive law holds:a∩(b+c)⊇a∩b+a∩c,{\displaystyle {\mathfrak {a}}\cap ({\mathfrak {b}}+{\mathfrak {c}})\supseteq {\mathfrak {a}}\cap {\mathfrak {b}}+{\mathfrak {a}}\cap {\mathfrak {c}},}
where the equality holds ifa{\displaystyle {\mathfrak {a}}}containsb{\displaystyle {\mathfrak {b}}}orc{\displaystyle {\mathfrak {c}}}.
Remark: The sum and the intersection of ideals is again an ideal; with these two operations as join and meet, the set of all ideals of a given ring forms acompletemodular lattice. The lattice is not, in general, adistributive lattice. The three operations of intersection, sum (or join), and product make the set of ideals of a commutative ring into aquantale.
Ifa,b{\displaystyle {\mathfrak {a}},{\mathfrak {b}}}are ideals of a commutative ringR, thena∩b=ab{\displaystyle {\mathfrak {a}}\cap {\mathfrak {b}}={\mathfrak {a}}{\mathfrak {b}}}in the following two cases (at least)
(More generally, the difference between a product and an intersection of ideals is measured by theTor functor:Tor1R(R/a,R/b)=(a∩b)/ab{\displaystyle \operatorname {Tor} _{1}^{R}(R/{\mathfrak {a}},R/{\mathfrak {b}})=({\mathfrak {a}}\cap {\mathfrak {b}})/{\mathfrak {a}}{\mathfrak {b}}}.[17])
An integral domain is called aDedekind domainif for each pair of idealsa⊂b{\displaystyle {\mathfrak {a}}\subset {\mathfrak {b}}}, there is an idealc{\displaystyle {\mathfrak {c}}}such thata=bc{\displaystyle {\mathfrak {\mathfrak {a}}}={\mathfrak {b}}{\mathfrak {c}}}.[18]It can then be shown that every nonzero ideal of a Dedekind domain can be uniquely written as a product of maximal ideals, a generalization of thefundamental theorem of arithmetic.
InZ{\displaystyle \mathbb {Z} }we have(n)∩(m)=(lcm(n,m)),{\displaystyle (n)\cap (m)=(\operatorname {lcm} (n,m)),}
since(n)∩(m){\displaystyle (n)\cap (m)}is the set of integers that are divisible by bothn{\displaystyle n}andm{\displaystyle m}.
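Because every ideal of Z is principal, sums and intersections of ideals of Z can be compared with the ideals generated by the gcd and the lcm; a small numerical sketch (the finite windows are only there to make the sets finite):

```python
from math import gcd, lcm

n, m = 6, 10
window = range(-100, 101)

# (n) + (m): all sums of a multiple of n and a multiple of m
sums = {a * n + b * m for a in range(-50, 51) for b in range(-50, 51)}
# (n) ∩ (m): the common multiples of n and m
common = {x for x in window if x % n == 0 and x % m == 0}

assert {x for x in window if x in sums} == {x for x in window if x % gcd(n, m) == 0}   # (6)+(10) = (2)
assert common == {x for x in window if x % lcm(n, m) == 0}                              # (6)∩(10) = (30)
print(gcd(n, m), lcm(n, m), n * m)   # 2 30 60: generators of the sum, intersection and product
```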
LetR=C[x,y,z,w]{\displaystyle R=\mathbb {C} [x,y,z,w]}and leta=(z,w),b=(x+z,y+w),c=(x+z,w){\displaystyle {\mathfrak {a}}=(z,w),{\mathfrak {b}}=(x+z,y+w),{\mathfrak {c}}=(x+z,w)}. Then,
In the first computation, we see the general pattern for taking the sum of two finitely generated ideals, it is the ideal generated by the union of their generators. In the last three we observe that products and intersections agree whenever the two ideals intersect in the zero ideal. These computations can be checked usingMacaulay2.[19][20][21]
Ideals appear naturally in the study of modules, especially in the form of a radical.
LetRbe a commutative ring. By definition, aprimitive idealofRis the annihilator of a (nonzero)simpleR-module. TheJacobson radicalJ=Jac(R){\displaystyle J=\operatorname {Jac} (R)}ofRis the intersection of all primitive ideals. Equivalently,J=⋂m maximalm.{\displaystyle J=\bigcap _{{\mathfrak {m}}{\text{ maximal}}}{\mathfrak {m}}.}
Indeed, ifM{\displaystyle M}is a simple module andxis a nonzero element inM, thenRx=M{\displaystyle Rx=M}andR/Ann(M)=R/Ann(x)≃M{\displaystyle R/\operatorname {Ann} (M)=R/\operatorname {Ann} (x)\simeq M}, meaningAnn(M){\displaystyle \operatorname {Ann} (M)}is a maximal ideal. Conversely, ifm{\displaystyle {\mathfrak {m}}}is a maximal ideal, thenm{\displaystyle {\mathfrak {m}}}is the annihilator of the simpleR-moduleR/m{\displaystyle R/{\mathfrak {m}}}. There is also another characterization (the proof is not hard):Jac(R)={x∈R∣1−yxis a unit for everyy∈R}.{\displaystyle \operatorname {Jac} (R)=\{x\in R\mid 1-yx{\text{ is a unit for every }}y\in R\}.}
For a not-necessarily-commutative ring, it is a general fact that1−yx{\displaystyle 1-yx}is aunit elementif and only if1−xy{\displaystyle 1-xy}is (see the link) and so this last characterization shows that the radical can be defined both in terms of left and right primitive ideals.
The following simple but important fact (Nakayama's lemma) is built-in to the definition of a Jacobson radical: ifMis a module such thatJM=M{\displaystyle JM=M}, thenMdoes not admit amaximal submodule, since if there is a maximal submoduleL⊊M{\displaystyle L\subsetneq M},J⋅(M/L)=0{\displaystyle J\cdot (M/L)=0}and soM=JM⊂L⊊M{\displaystyle M=JM\subset L\subsetneq M}, a contradiction. Since a nonzerofinitely generated moduleadmits a maximal submodule, in particular, one has: a finitely generated moduleM{\displaystyle M}satisfyingJM=M{\displaystyle JM=M}must be the zero module.
A maximal ideal is a prime ideal and so one has⋂p primep⊆Jac(R),{\displaystyle \bigcap _{{\mathfrak {p}}{\text{ prime}}}{\mathfrak {p}}\subseteq \operatorname {Jac} (R),}
where the intersection on the left is called thenilradicalofR. As it turns out,nil(R){\displaystyle \operatorname {nil} (R)}is also the set ofnilpotent elementsofR.
IfRis anArtinian ring, thenJac(R){\displaystyle \operatorname {Jac} (R)}is nilpotent andnil(R)=Jac(R){\displaystyle \operatorname {nil} (R)=\operatorname {Jac} (R)}. (Proof: first note the DCC impliesJn=Jn+1{\displaystyle J^{n}=J^{n+1}}for somen. If (DCC)a⊋Ann(Jn){\displaystyle {\mathfrak {a}}\supsetneq \operatorname {Ann} (J^{n})}is an ideal properly minimal over the latter, thenJ⋅(a/Ann(Jn))=0{\displaystyle J\cdot ({\mathfrak {a}}/\operatorname {Ann} (J^{n}))=0}. That is,Jna=Jn+1a=0{\displaystyle J^{n}{\mathfrak {a}}=J^{n+1}{\mathfrak {a}}=0}, a contradiction.)
LetAandBbe twocommutative rings, and letf:A→Bbe aring homomorphism. Ifa{\displaystyle {\mathfrak {a}}}is an ideal inA, thenf(a){\displaystyle f({\mathfrak {a}})}need not be an ideal inB(e.g. takefto be theinclusionof the ring of integersZinto the field of rationalsQ). Theextensionae{\displaystyle {\mathfrak {a}}^{e}}ofa{\displaystyle {\mathfrak {a}}}inBis defined to be the ideal inBgenerated byf(a){\displaystyle f({\mathfrak {a}})}. Explicitly,ae=Bf(a)={∑iyif(xi)∣xi∈a,yi∈B}.{\displaystyle {\mathfrak {a}}^{e}=Bf({\mathfrak {a}})=\left\{\textstyle \sum _{i}y_{i}f(x_{i})\mid x_{i}\in {\mathfrak {a}},\,y_{i}\in B\right\}.}
Ifb{\displaystyle {\mathfrak {b}}}is an ideal ofB, thenf−1(b){\displaystyle f^{-1}({\mathfrak {b}})}is always an ideal ofA, called thecontractionbc{\displaystyle {\mathfrak {b}}^{c}}ofb{\displaystyle {\mathfrak {b}}}toA.
Assumingf:A→Bis a ring homomorphism,a{\displaystyle {\mathfrak {a}}}is an ideal inA,b{\displaystyle {\mathfrak {b}}}is an ideal inB, then:
It is false, in general, thata{\displaystyle {\mathfrak {a}}}being prime (or maximal) inAimplies thatae{\displaystyle {\mathfrak {a}}^{e}}is prime (or maximal) inB. Many classic examples of this stem from algebraic number theory. For example,embeddingZ→Z[i]{\displaystyle \mathbb {Z} \to \mathbb {Z} \left\lbrack i\right\rbrack }. InB=Z[i]{\displaystyle B=\mathbb {Z} \left\lbrack i\right\rbrack }, the element 2 factors as2=(1+i)(1−i){\displaystyle 2=(1+i)(1-i)}where (one can show) neither of1+i,1−i{\displaystyle 1+i,1-i}are units inB. So(2)e{\displaystyle (2)^{e}}is not prime inB(and therefore not maximal, as well). Indeed,(1±i)2=±2i{\displaystyle (1\pm i)^{2}=\pm 2i}shows that(1+i)=((1−i)−(1−i)2){\displaystyle (1+i)=((1-i)-(1-i)^{2})},(1−i)=((1+i)−(1+i)2){\displaystyle (1-i)=((1+i)-(1+i)^{2})}, and therefore(2)e=(1+i)2{\displaystyle (2)^{e}=(1+i)^{2}}.
On the other hand, iffissurjectiveanda⊇kerf{\displaystyle {\mathfrak {a}}\supseteq \ker f}then:
Remark: LetKbe afield extensionofL, and letBandAbe therings of integersofKandL, respectively. ThenBis anintegral extensionofA, and we letfbe theinclusion mapfromAtoB. The behaviour of aprime ideala=p{\displaystyle {\mathfrak {a}}={\mathfrak {p}}}ofAunder extension is one of the central problems ofalgebraic number theory.
The following is sometimes useful:[22]a prime idealp{\displaystyle {\mathfrak {p}}}is a contraction of a prime ideal if and only ifp=pec{\displaystyle {\mathfrak {p}}={\mathfrak {p}}^{ec}}. (Proof: Assuming the latter, notepeBp=Bp⇒pe{\displaystyle {\mathfrak {p}}^{e}B_{\mathfrak {p}}=B_{\mathfrak {p}}\Rightarrow {\mathfrak {p}}^{e}}intersectsA−p{\displaystyle A-{\mathfrak {p}}}, a contradiction. Now, the prime ideals ofBp{\displaystyle B_{\mathfrak {p}}}correspond to those inBthat are disjoint fromA−p{\displaystyle A-{\mathfrak {p}}}. Hence, there is a prime idealq{\displaystyle {\mathfrak {q}}}ofB, disjoint fromA−p{\displaystyle A-{\mathfrak {p}}}, such thatqBp{\displaystyle {\mathfrak {q}}B_{\mathfrak {p}}}is a maximal ideal containingpeBp{\displaystyle {\mathfrak {p}}^{e}B_{\mathfrak {p}}}. One then checks thatq{\displaystyle {\mathfrak {q}}}lies overp{\displaystyle {\mathfrak {p}}}. The converse is obvious.)
Ideals can be generalized to anymonoid object(R,⊗){\displaystyle (R,\otimes )}, whereR{\displaystyle R}is the object where themonoidstructure has beenforgotten. Aleft idealofR{\displaystyle R}is asubobjectI{\displaystyle I}that "absorbs multiplication from the left by elements ofR{\displaystyle R}"; that is,I{\displaystyle I}is aleft idealif it satisfies the following two conditions: (1)I{\displaystyle I}is asubobjectofR{\displaystyle R}, and (2)r⊗x∈(I,⊗){\displaystyle r\otimes x\in (I,\otimes )}for everyr∈(R,⊗){\displaystyle r\in (R,\otimes )}and everyx∈(I,⊗){\displaystyle x\in (I,\otimes )}.
Aright idealis defined with the condition "r⊗x∈(I,⊗){\displaystyle r\otimes x\in (I,\otimes )}" replaced by "'x⊗r∈(I,⊗){\displaystyle x\otimes r\in (I,\otimes )}". Atwo-sided idealis a left ideal that is also a right ideal, and is sometimes simply called an ideal. WhenR{\displaystyle R}is a commutative monoid object respectively, the definitions of left, right, and two-sided ideal coincide, and the termidealis used alone.
|
https://en.wikipedia.org/wiki/Ideal_(ring_theory)
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
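A minimal sketch of this construction for GF(16), using the irreducible polynomial X^4 + X + 1 (the same polynomial used again below for GF(16)); an element is represented by the bits of its coefficients, and the names IRRED and gf16_mul are illustrative:

```python
# Elements of GF(16) = GF(2)[X]/(X^4 + X + 1) as 4-bit integers:
# bit i is the coefficient of X^i, so alpha = X is 0b0010.
IRRED = 0b10011          # X^4 + X + 1

def gf16_mul(a: int, b: int) -> int:
    """Carry-less multiplication followed by reduction modulo X^4 + X + 1."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # reduce: eliminate each bit at position k >= 4 using X^4 = X + 1
    for k in range(prod.bit_length() - 1, 3, -1):
        if prod >> k & 1:
            prod ^= IRRED << (k - 4)
    return prod

alpha = 0b0010
x, order = alpha, 1
while x != 1:
    x = gf16_mul(x, alpha)
    order += 1
print(order)   # 15: alpha runs through all nonzero elements, so it is a primitive element
```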
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which make the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
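The operation tables can also be generated mechanically from the rules α² = 1 + α and x + x = 0; a small sketch, writing each element as a pair (a, b) meaning a + bα:

```python
# Elements of GF(4) as pairs (a, b) over GF(2), meaning a + b*alpha, with alpha^2 = 1 + alpha.
elements = [(0, 0), (1, 0), (0, 1), (1, 1)]            # 0, 1, alpha, 1 + alpha
names = {(0, 0): "0", (1, 0): "1", (0, 1): "α", (1, 1): "1+α"}

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):
    # (a + b*alpha)(c + d*alpha) = ac + bd + (ad + bc + bd)*alpha, using alpha^2 = 1 + alpha
    a, b = x
    c, d = y
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

for op, table in (("+", add), ("*", mul)):
    print(op, *(names[e] for e in elements))
    for x in elements:
        print(names[x], *(names[table(x, y)] for y in elements))
```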
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
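A sketch of these formulas for p = 7 with the quadratic non-residue r = 3 (so the field constructed is GF(49)); elements are pairs (a, b) meaning a + bα with α² = r, and the helper names are illustrative:

```python
p, r = 7, 3          # 3 is a quadratic non-residue modulo 7, so X^2 - 3 is irreducible over GF(7)

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(x):
    a, b = x
    d = pow(a * a - r * b * b, -1, p)      # (a^2 - r*b^2)^(-1) in GF(p), nonzero since r is a non-residue
    return (a * d % p, (-b) * d % p)

x = (2, 5)                                  # the element 2 + 5*alpha
assert mul(x, inv(x)) == (1, 0)
y, order = x, 1
while y != (1, 0):
    y = mul(y, x)
    order += 1
print(order)   # a divisor of 48, the order of the multiplicative group of GF(49)
```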
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}is cyclic of orderq−1{\displaystyle q-1}; that is, there is an elementa{\displaystyle a}such that theq−1{\displaystyle q-1}non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}area,a2,…,aq−2,aq−1=1.{\displaystyle a,a^{2},\ldots ,a^{q-2},a^{q-1}=1.}
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inF{\displaystyle F}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
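A sketch of this representation for the small prime field GF(13), taking 2 as a primitive element: multiplication becomes addition of discrete logarithms modulo q − 1, and addition is done with one lookup in the Zech table (the table and function names are illustrative):

```python
q, g = 13, 2                      # GF(13), with 2 as a primitive element

# exp[n] = g^n and the discrete-log table log[x] = n with g^n = x
exp = [pow(g, n, q) for n in range(q - 1)]
log = {x: n for n, x in enumerate(exp)}

# Zech's logarithm: zech[n] = log(g^n + 1); None stands for -infinity (g^n + 1 == 0)
zech = [log.get((x + 1) % q) for x in exp]

def mul(x, y):
    """Multiply nonzero elements via their discrete logarithms."""
    return exp[(log[x] + log[y]) % (q - 1)]

def add(x, y):
    """Add nonzero elements using x + y = x * (y/x + 1) and the Zech table."""
    n = (log[y] - log[x]) % (q - 1)        # discrete log of y/x
    z = zech[n]
    return 0 if z is None else exp[(log[x] + z) % (q - 1)]

assert all(mul(x, y) == x * y % q for x in range(1, q) for y in range(1, q))
assert all(add(x, y) == (x + y) % q for x in range(1, q) for y in range(1, q))
print(zech[:5])
```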
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being26, and the divisors of6being1, 2, 3, 6, the subfields ofGF(64)areGF(2),GF(22) = GF(4),GF(23) = GF(8), andGF(64)itself. As2and3arecoprime, the intersection ofGF(4)andGF(8)inGF(64)is the prime fieldGF(2).
The union ofGF(4)andGF(8)has thus10elements. The remaining54elements ofGF(64)generateGF(64)in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree6overGF(2). This implies that, overGF(2), there are exactly9 =54/6irreduciblemonic polynomialsof degree6. This may be verified by factoringX64−XoverGF(2).
The elements ofGF(64)are primitiven{\displaystyle n}th roots of unity for somen{\displaystyle n}dividing63{\displaystyle 63}. As the 3rd and the 7th roots of unity belong toGF(4)andGF(8), respectively, the54generators are primitiventh roots of unity for somenin{9, 21, 63}.Euler's totient functionshows that there are6primitive9th roots of unity,12{\displaystyle 12}primitive21{\displaystyle 21}st roots of unity, and36{\displaystyle 36}primitive63rd roots of unity. Summing these numbers, one finds again54{\displaystyle 54}elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that:
This shows that the best choice to constructGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X6+X+ 1). In fact, this generator is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
InGF(q){\displaystyle \mathrm {GF} (q)}, the identity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}implies that the mapφ:x↦xp{\displaystyle \varphi :x\mapsto x^{p}}is aGF(p){\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismofGF(q){\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfieldGF(p){\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting byφkthecompositionofφwith itselfktimes, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section thatφnis the identity. For0 <k<n, the automorphismφkis not the identity, as, otherwise, the polynomialXpk−X{\displaystyle X^{p^{k}}-X}would have more thanpkroots.
There are no otherGF(p)-automorphisms ofGF(q). In other words,GF(pn)has exactlynGF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means thatGF(pn)is aGalois extensionofGF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, ifq=pnthenXq−Xis the product of all monic irreducible polynomials overGF(p), whose degree dividesn. In fact, ifPis an irreducible factor overGF(p)ofXq−X, its degree dividesn, as itssplitting fieldis contained inGF(pn). Conversely, ifPis an irreducible monic polynomial overGF(p)of degreeddividingn, it defines a field extension of degreed, which is contained inGF(pn), and all roots ofPbelong toGF(pn), and are roots ofXq−X; thusPdividesXq−X. AsXq−Xdoes not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
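The count N(q, n) is straightforward to evaluate from the Möbius formula above. The following Python sketch (the helper names are ours) recovers N(2, 6) = 9, matching the count of irreducible monic polynomials of degree 6 over GF(2) found earlier.

```python
def mobius(n):
    """Moebius function mu(n) by trial factorization (adequate for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # n has a squared prime factor
                return 0
            result = -result
        d += 1
    if n > 1:                # one remaining prime factor
        result = -result
    return result

def num_irreducible(q, n):
    """Number of monic irreducible polynomials of degree n over GF(q)."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print(num_irreducible(2, 6))  # 9
```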
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element ofGF(28). One exception is thePDF417bar code, which usesGF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic2, generally variations of thecarry-less product.
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime numberp, letF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}be an algebraic closure ofFp.{\displaystyle \mathbb {F} _{p}.}It is not only unique up to an isomorphism, as are all algebraic closures, but, contrary to the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristicp.
This property results mainly from the fact that the elements ofFpn{\displaystyle \mathbb {F} _{p^{n}}}are exactly the roots ofxpn−x,{\displaystyle x^{p^{n}}-x,}and this defines an inclusionFpn⊂Fpnm{\displaystyle \mathbb {\mathbb {F} } _{p^{n}}\subset \mathbb {F} _{p^{nm}}}form>1.{\displaystyle m>1.}These inclusions allow writing informallyF¯p=⋃n≥1Fpn.{\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.}The formal validation of this notation results from the fact that the above field inclusions form adirected setof fields; itsdirect limitisF¯p,{\displaystyle {\overline {\mathbb {F} }}_{p},}which may thus be considered as a "directed union".
Given aprimitive elementgmn{\displaystyle g_{mn}}ofFqmn,{\displaystyle \mathbb {F} _{q^{mn}},}thengmnm{\displaystyle g_{mn}^{m}}is a primitive element ofFqn.{\displaystyle \mathbb {F} _{q^{n}}.}
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive elementgn{\displaystyle g_{n}}ofFqn{\displaystyle \mathbb {F} _{q^{n}}}in order that, whenevern=mh,{\displaystyle n=mh,}one hasgm=gnh,{\displaystyle g_{m}=g_{n}^{h},}wheregm{\displaystyle g_{m}}is the primitive element already chosen forFqm.{\displaystyle \mathbb {F} _{q^{m}}.}
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
|
https://en.wikipedia.org/wiki/Finite_field
|
Inabstract algebra, acyclic groupormonogenous groupis agroup, denoted Cn(also frequentlyZ{\displaystyle \mathbb {Z} }nor Zn, not to be confused with the commutative ring ofp-adic numbers), that isgeneratedby a single element.[1]That is, it is asetofinvertibleelements with a singleassociativebinary operation, and it contains an elementgsuch that every other element of the group may be obtained by repeatedly applying the group operation togor its inverse. Each element can be written as an integerpowerofgin multiplicative notation, or as an integer multiple ofgin additive notation. This elementgis called ageneratorof the group.[1]
Every infinite cyclic group isisomorphicto theadditive groupofZ, theintegers. Every finite cyclic group ofordernis isomorphic to the additive group ofZ/nZ, the integersmodulon. Every cyclic group is anabelian group(meaning that its group operation iscommutative), and everyfinitely generatedabelian group is adirect productof cyclic groups.
Every cyclic group ofprimeorder is asimple group, which cannot be broken down into smaller groups. In theclassification of finite simple groups, one of the three infinite classes consists of the cyclic groups of prime order. The cyclic groups of prime order are thus among the building blocks from which all groups can be built.
For any elementgin any groupG, one can form thesubgroupthat consists of all its integerpowers:⟨g⟩ = {gk|k∈Z}, called thecyclic subgroupgenerated byg. Theorderofgis |⟨g⟩|, the number of elements in ⟨g⟩, conventionally abbreviated as |g|, as ord(g), or as o(g). That is, the order of an element is equal to the order of the cyclic subgroup that it generates.
Acyclic groupis a group which is equal to one of its cyclic subgroups:G= ⟨g⟩for some elementg, called ageneratorofG.
For afinite cyclic groupGof ordernwe haveG= {e,g,g2, ... ,gn−1}, whereeis the identity element andgi=gjwheneveri≡j(modn); in particulargn=g0=e, andg−1=gn−1. An abstract group defined by this multiplication is often denoted Cn, and we say thatGisisomorphicto the standard cyclic group Cn. Such a group is also isomorphic toZ/nZ, the group of integers modulonwith the addition operation, which is the standard cyclic group in additive notation. Under the isomorphismχdefined byχ(gi) =ithe identity elementecorresponds to 0, products correspond to sums, and powers correspond to multiples.
For example, the set of complex 6th roots of unity:G={±1,±(12+32i),±(12−32i)}{\displaystyle G=\left\{\pm 1,\pm {\left({\tfrac {1}{2}}+{\tfrac {\sqrt {3}}{2}}i\right)},\pm {\left({\tfrac {1}{2}}-{\tfrac {\sqrt {3}}{2}}i\right)}\right\}}forms a group under multiplication. It is cyclic, since it is generated by theprimitive rootz=12+32i=e2πi/6:{\displaystyle z={\tfrac {1}{2}}+{\tfrac {\sqrt {3}}{2}}i=e^{2\pi i/6}:}that is,G= ⟨z⟩ = { 1,z,z2,z3,z4,z5} withz6= 1. Under a change of letters, this is isomorphic to (structurally the same as) the standard cyclic group of order 6, defined as C6= ⟨g⟩ = {e,g,g2,g3,g4,g5} with multiplicationgj·gk=gj+k(mod 6), so thatg6=g0=e. These groups are also isomorphic toZ/6Z= {0, 1, 2, 3, 4, 5} with the operation of additionmodulo6, withzkandgkcorresponding tok. For example,1 + 2 ≡ 3 (mod 6)corresponds toz1·z2=z3, and2 + 5 ≡ 1 (mod 6)corresponds toz2·z5=z7=z1, and so on. Any element generates its own cyclic subgroup, such as ⟨z2⟩ = {e,z2,z4} of order 3, isomorphic to C3andZ/3Z; and ⟨z5⟩ = {e,z5,z10=z4,z15=z3,z20=z2,z25=z} =G, so thatz5has order 6 and is an alternative generator ofG.
Instead of thequotientnotationsZ/nZ,Z/(n), orZ/n, some authors denote a finite cyclic group asZn, but this clashes with the notation ofnumber theory, whereZpdenotes ap-adic numberring, orlocalizationat aprime ideal.
On the other hand, in aninfinite cyclic groupG= ⟨g⟩, the powersgkgive distinct elements for all integersk, so thatG= { ... ,g−2,g−1,e,g,g2, ... }, andGis isomorphic to the standard groupC = C∞and toZ, the additive group of the integers. An example is the firstfrieze group. Here there are no finite cycles, and the name "cyclic" may be misleading.[2]
To avoid this confusion,Bourbakiintroduced the termmonogenous groupfor a group with a single generator and restricted "cyclic group" to mean a finite monogenous group, avoiding the term "infinite cyclic group".[note 1]
The set ofintegersZ, with the operation of addition, forms a group.[1]It is aninfinite cyclic group, because all integers can be written by repeatedly adding or subtracting the single number 1. In this group, 1 and −1 are the only generators. Every infinite cyclic group is isomorphic toZ.
For every positive integern, the set of integersmodulon, again with the operation of addition, forms a finite cyclic group, denotedZ/nZ.[1]A modular integeriis a generator of this group ifiisrelatively primeton, because these elements can generate all other elements of the group through integer addition.
(The number of such generators isφ(n), whereφis theEuler totient function.)
Every finite cyclic groupGis isomorphic toZ/nZ, wheren= |G| is the order of the group.
The addition operations on integers and modular integers, used to define the cyclic groups, are the addition operations ofcommutative rings, also denotedZandZ/nZorZ/(n). Ifpis aprime, thenZ/pZis afinite field, and is usually denotedFpor GF(p) for Galois field.
For every positive integern, the set of the integers modulonthat are relatively prime tonis written as (Z/nZ)×; itforms a groupunder the operation of multiplication. This group is not always cyclic, but is so whenevernis 1, 2, 4, apower of an odd prime, or twice a power of an odd prime (sequenceA033948in theOEIS).[4][5]This is the multiplicative group ofunitsof the ringZ/nZ; there areφ(n) of them, where againφis theEuler totient function. For example, (Z/6Z)×= {1, 5}, and since 6 is twice an odd prime this is a cyclic group. In contrast, (Z/8Z)×= {1, 3, 5, 7} is aKlein 4-groupand is not cyclic. When (Z/nZ)×is cyclic, its generators are calledprimitive roots modulon.
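For small moduli this characterization is easy to confirm by brute force: (Z/nZ)× is cyclic exactly when some unit has multiplicative order φ(n), that is, when a primitive root modulo n exists. A Python sketch (the helper name is ours):

```python
from math import gcd

def has_primitive_root(n):
    """Brute-force test of whether (Z/nZ)^x is cyclic, for small n >= 2."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    target = len(units)                 # phi(n)
    for g in units:
        x, order = g, 1
        while x != 1:
            x = x * g % n
            order += 1
        if order == target:             # g is a primitive root modulo n
            return True
    return False

print([n for n in range(2, 20) if has_primitive_root(n)])
# [2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 17, 18, 19] -- note 8, 12, 15, 16 are absent
```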
For a prime numberp, the group (Z/pZ)×is always cyclic, consisting of the non-zero elements of thefinite fieldof orderp. More generally, every finitesubgroupof the multiplicative group of anyfieldis cyclic.[6]
The set ofrotational symmetriesof apolygonforms a finite cyclic group.[7]If there arendifferent ways of moving the polygon to itself by a rotation (including the null rotation) then this symmetry group is isomorphic toZ/nZ. In three or higher dimensions there exist otherfinite symmetry groups that are cyclic, but which are not all rotations around an axis, but insteadrotoreflections.
The group of all rotations of acircle(thecircle group, also denotedS1) isnotcyclic, because there is no single rotation whose integer powers generate all rotations. In fact, the infinite cyclic group C∞iscountable, whileS1is not. The group of rotations by rational anglesiscountable, but still not cyclic.
Annthroot of unityis acomplex numberwhosenth power is 1, arootof thepolynomialxn− 1. The set of allnth roots of unity forms a cyclic group of ordernunder multiplication.[1]The generators of this cyclic group are thenthprimitive roots of unity; they are the roots of thenthcyclotomic polynomial.
For example, the polynomialz3− 1factors as(z− 1)(z−ω)(z−ω2), whereω=e2πi/3; the set {1,ω,ω2} = {ω0,ω1,ω2} forms a cyclic group under multiplication. TheGalois groupof thefield extensionof therational numbersgenerated by thenth roots of unity forms a different group, isomorphic to the multiplicative group (Z/nZ)×of orderφ(n), which is cyclic for some but not alln(see above).
A field extension is called acyclic extensionif its Galois group is cyclic. For fields ofcharacteristic zero, such extensions are the subject ofKummer theory, and are intimately related tosolvability by radicals. For an extension offinite fieldsof characteristicp, its Galois group is always finite and cyclic, generated by a power of theFrobenius mapping.[8]Conversely, given a finite fieldFand a finite cyclic groupG, there is a finite field extension ofFwhose Galois group isG.[9]
Allsubgroupsandquotient groupsof cyclic groups are cyclic. Specifically, all subgroups ofZare of the form ⟨m⟩ =mZ, withma positive integer. All of these subgroups are distinct from each other, and apart from the trivial group {0} = 0Z, they all areisomorphictoZ. Thelattice of subgroupsofZis isomorphic to thedualof the lattice of natural numbers ordered bydivisibility.[10]Thus, since a prime numberphas no nontrivial divisors,pZis a maximal proper subgroup, and the quotient groupZ/pZissimple; in fact, a cyclic group is simple if and only if its order is prime.[11]
All quotient groupsZ/nZare finite, with the exceptionZ/0Z=Z/{0}.For every positive divisordofn, the quotient groupZ/nZhas precisely one subgroup of orderd, generated by theresidue classofn/d. There are no other subgroups.
Every cyclic group isabelian.[1]That is, its group operation iscommutative:gh=hg(for allgandhinG). This is clear for the groups of integer and modular addition sincer+s≡s+r(modn), and it follows for all cyclic groups since they are all isomorphic to these standard groups. For a finite cyclic group of ordern,gnis the identity element for any elementg. This again follows by using the isomorphism to modular addition, sincekn≡ 0 (modn)for every integerk. (This is also true for a general group of ordern, due toLagrange's theorem.)
For aprime powerpk{\displaystyle p^{k}}, the groupZ/pkZ{\displaystyle Z/p^{k}Z}is called aprimary cyclic group. Thefundamental theorem of abelian groupsstates that everyfinitely generated abelian groupis a finite direct product of primary cyclic and infinite cyclic groups.
Because a cyclic group is abelian, each of itsconjugacy classesconsists of a single element. A cyclic group of orderntherefore hasnconjugacy classes.
Ifdis adivisorofn, then the number of elements inZ/nZwhich have orderdisφ(d), and the number of elements whose order dividesdis exactlyd.
IfGis a finite group in which, for eachn> 0,Gcontains at mostnelements of order dividingn, thenGmust be cyclic.[note 2]The order of an elementminZ/nZisn/gcd(n,m).
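Both counting statements are easy to check for a small modulus. The sketch below (illustrative) uses the formula n/gcd(n, m) for the order of m in Z/12Z and tallies how many elements have each order, recovering φ(d) for every divisor d of 12.

```python
from math import gcd

n = 12
for d in (1, 2, 3, 4, 6, 12):   # the divisors of 12
    count = sum(1 for m in range(n) if n // gcd(n, m) == d)
    print(d, count)             # counts are 1, 1, 2, 2, 2, 4, i.e. phi(d)
```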
Ifnandmarecoprime, then thedirect productof two cyclic groupsZ/nZandZ/mZis isomorphic to the cyclic groupZ/nmZ, and the converse also holds: this is one form of theChinese remainder theorem. For example,Z/12Zis isomorphic to the direct productZ/3Z×Z/4Zunder the isomorphism(kmod 12) → (kmod 3,kmod 4); but it is not isomorphic toZ/6Z×Z/2Z, in which every element has order at most 6.
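The isomorphism, and its failure in the non-coprime case, can be seen by comparing element orders: in a direct product the order of a tuple is the least common multiple of the component orders. A small Python check of the example above:

```python
from math import gcd, lcm

def order(elem, mods):
    """Order of an element of Z/m_1 Z x ... x Z/m_k Z (component order is m/gcd(x, m))."""
    return lcm(*(m // gcd(x, m) for x, m in zip(elem, mods)))

# Z/3Z x Z/4Z contains an element of order 12, so it is cyclic of order 12 ...
print(max(order((a, b), (3, 4)) for a in range(3) for b in range(4)))  # 12
# ... while every element of Z/6Z x Z/2Z has order at most 6.
print(max(order((a, b), (6, 2)) for a in range(6) for b in range(2)))  # 6
```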
Ifpis aprime number, then any group withpelements is isomorphic to the simple groupZ/pZ.
A numbernis called acyclic numberifZ/nZis the only group of ordern, which is true exactly whengcd(n,φ(n)) = 1.[13]The sequence of cyclic numbers includes all primes, but some arecomposite, such as 15. However, all cyclic numbers are odd except 2. The cyclic numbers are:
The definition immediately implies that cyclic groups havegroup presentationC∞= ⟨x| ⟩andCn= ⟨x|xn⟩for finiten.[14]
Therepresentation theoryof the cyclic group is a critical base case for the representation theory of more general finite groups. In thecomplex case, a representation of a cyclic group decomposes into a direct sum of linear characters, making the connection between character theory and representation theory transparent. In thepositive characteristic case, the indecomposable representations of the cyclic group form a model and inductive basis for the representation theory of groups with cyclicSylow subgroupsand more generally the representation theory of blocks of cyclic defect.
Acycle graphillustrates the various cycles of agroupand is particularly useful in visualizing the structure of smallfinite groups. A cycle graph for a cyclic group is simply acircular graph, where the group order is equal to the number of nodes. A single generator defines the group as a directional path on the graph, and the inverse generator defines a backwards path. A trivial path (identity) can be drawn as aloopbut is usually suppressed. Z2is sometimes drawn with two curved edges as amultigraph.[15]
A cyclic group Zn, with ordern, corresponds to a single cycle graphed simply as ann-sided polygon with the elements at the vertices.
ACayley graphis a graph defined from a pair (G,S) whereGis a group andSis a set of generators for the group; it has a vertex for each group element, and an edge for each product of an element with a generator. In the case of a finite cyclic group, with its single generator, the Cayley graph is acycle graph, and for an infinite cyclic group with its generator the Cayley graph is a doubly infinitepath graph. However, Cayley graphs can be defined from other sets of generators as well. The Cayley graphs of cyclic groups with arbitrary generator sets are calledcirculant graphs.[16]These graphs may be represented geometrically as a set of equally spaced points on a circle or on a line, with each point connected to neighbors with the same set of distances as each other point. They are exactly thevertex-transitive graphswhosesymmetry groupincludes a transitive cyclic group.[17]
Theendomorphism ringof the abelian groupZ/nZisisomorphictoZ/nZitself as aring.[18]Under this isomorphism, the numberrcorresponds to the endomorphism ofZ/nZthat maps each element to the sum ofrcopies of it. This is abijectionif and only ifris coprime withn, so theautomorphism groupofZ/nZis isomorphic to the unit group (Z/nZ)×.[18]
Similarly, the endomorphism ring of the additive group ofZis isomorphic to the ringZ. Its automorphism group is isomorphic to the group of units of the ringZ, which is({−1, +1}, ×) ≅ C2.
Thetensor productZ/mZ⊗Z/nZand the group of grouphomomorphismsfromZ/mZtoZ/nZ, denotedhom(Z/mZ,Z/nZ)(which is itself a group), can both be shown to be isomorphic toZ/ gcd(m,n)Z.
For the tensor product, this is a consequence of the general fact thatR/I⊗RR/J≅R/(I+J), whereRis a commutativeringwith unit andIandJareidealsof the ring. For the Hom group, recall that it is isomorphic to the subgroup ofZ/nZconsisting of the elements of order dividingm. That subgroup is cyclic of ordergcd(m,n), which completes the proof.
Several other classes of groups have been defined by their relation to the cyclic groups:
A group is calledvirtually cyclicif it contains a cyclic subgroup of finiteindex(the number ofcosetsthat the subgroup has). In other words, any element in a virtually cyclic group can be arrived at by multiplying a member of the cyclic subgroup and a member of a certain finite set. Every cyclic group is virtually cyclic, as is every finite group. An infinite group is virtually cyclic if and only if it isfinitely generatedand has exactly twoends;[note 3]an example of such a group is thedirect productofZ/nZandZ, in which the factorZhas finite indexn. Every abelian subgroup of aGromov hyperbolic groupis virtually cyclic.[20]
Aprofinite groupis calledprocyclicif it can be topologically generated by a single element. Examples of profinite groups include the profinite integersZ^{\displaystyle {\widehat {\mathbb {Z} }}}or thep-adic integersZp{\displaystyle \mathbb {Z} _{p}}for a prime numberp.
Alocally cyclic groupis a group in which eachfinitely generatedsubgroup is cyclic. An example is the additive group of therational numbers: every finite set of rational numbers is a set of integer multiples of a singleunit fraction, the inverse of theirlowest common denominator, and generates as a subgroup a cyclic group of integer multiples of this unit fraction. A group is locally cyclic if and only if itslattice of subgroupsis adistributive lattice.[21]
Acyclically ordered groupis a group together with acyclic orderpreserved by the group structure. Every cyclic group can be given a structure as a cyclically ordered group, consistent with the ordering of the integers (or the integers modulo the order of the group).
Every finite subgroup of a cyclically ordered group is cyclic.[22]
Ametacyclic groupis a group containing a cyclicnormal subgroupwhose quotient is also cyclic.[23]These groups include the cyclic groups, thedicyclic groups, and thedirect productsof two cyclic groups. Thepolycyclic groupsgeneralize metacyclic groups by allowing more than one level ofgroup extension. A group is polycyclic if it has a finite descending sequence of subgroups, each of which is normal in the previous subgroup with a cyclic quotient, ending in the trivial group. Every finitely generatedabelian groupornilpotent groupis polycyclic.[24]
|
https://en.wikipedia.org/wiki/Cyclic_group
|
Ingeometry,topology, and related branches ofmathematics, aclosed setis asetwhosecomplementis anopen set.[1][2]In atopological space, a closed set can be defined as a set which contains all itslimit points. In acomplete metric space, a closed set is a set which isclosedunder thelimitoperation. This should not be confused withclosed manifold.
Sets that are both open and closed are calledclopen sets.
Given atopological space(X,τ){\displaystyle (X,\tau )}, the following statements about a subset A of X are equivalent:
1. A is closed in (X, τ).
2. The complement X ∖ A is an open subset of (X, τ).
3. A is equal to its closure in X.
4. A contains all of its limit points.
5. A contains all of its boundary points.
An alternativecharacterizationof closed sets is available viasequencesandnets. A subsetA{\displaystyle A}of a topological spaceX{\displaystyle X}is closed inX{\displaystyle X}if and only if everylimitof every net of elements ofA{\displaystyle A}also belongs toA.{\displaystyle A.}In afirst-countable space(such as a metric space), it is enough to consider only convergentsequences, instead of all nets. One value of this characterization is that it may be used as a definition in the context ofconvergence spaces, which are more general than topological spaces. Notice that this characterization also depends on the surrounding spaceX,{\displaystyle X,}because whether or not a sequence or net converges inX{\displaystyle X}depends on what points are present inX.{\displaystyle X.}A pointx{\displaystyle x}inX{\displaystyle X}is said to beclose toa subsetA⊆X{\displaystyle A\subseteq X}ifx∈clXA{\displaystyle x\in \operatorname {cl} _{X}A}(or equivalently, ifx{\displaystyle x}belongs to the closure ofA{\displaystyle A}in thetopological subspaceA∪{x},{\displaystyle A\cup \{x\},}meaningx∈clA∪{x}A{\displaystyle x\in \operatorname {cl} _{A\cup \{x\}}A}whereA∪{x}{\displaystyle A\cup \{x\}}is endowed with thesubspace topologyinduced on it byX{\displaystyle X}[note 1]).
Because the closure ofA{\displaystyle A}inX{\displaystyle X}is thus the set of all points inX{\displaystyle X}that are close toA,{\displaystyle A,}this terminology allows for a plain English description of closed subsets:
In terms of net convergence, a pointx∈X{\displaystyle x\in X}is close to a subsetA{\displaystyle A}if and only if there exists some net (valued) inA{\displaystyle A}that converges tox.{\displaystyle x.}IfX{\displaystyle X}is atopological subspaceof some other topological spaceY,{\displaystyle Y,}in which caseY{\displaystyle Y}is called atopological super-spaceofX,{\displaystyle X,}then theremightexist some point inY∖X{\displaystyle Y\setminus X}that is close toA{\displaystyle A}(although not an element ofX{\displaystyle X}), which is how it is possible for a subsetA⊆X{\displaystyle A\subseteq X}to be closed inX{\displaystyle X}but tonotbe closed in the "larger" surrounding super-spaceY.{\displaystyle Y.}IfA⊆X{\displaystyle A\subseteq X}and ifY{\displaystyle Y}isanytopological super-space ofX{\displaystyle X}thenA{\displaystyle A}is always a (potentially proper) subset ofclYA,{\displaystyle \operatorname {cl} _{Y}A,}which denotes the closure ofA{\displaystyle A}inY;{\displaystyle Y;}indeed, even ifA{\displaystyle A}is a closed subset ofX{\displaystyle X}(which happens if and only ifA=clXA{\displaystyle A=\operatorname {cl} _{X}A}), it is nevertheless still possible forA{\displaystyle A}to be a proper subset ofclYA.{\displaystyle \operatorname {cl} _{Y}A.}However,A{\displaystyle A}is a closed subset ofX{\displaystyle X}if and only ifA=X∩clYA{\displaystyle A=X\cap \operatorname {cl} _{Y}A}for some (or equivalently, for every) topological super-spaceY{\displaystyle Y}ofX.{\displaystyle X.}
Closed sets can also be used to characterizecontinuous functions: a mapf:X→Y{\displaystyle f:X\to Y}iscontinuousif and only iff(clXA)⊆clY(f(A)){\displaystyle f\left(\operatorname {cl} _{X}A\right)\subseteq \operatorname {cl} _{Y}(f(A))}for every subsetA⊆X{\displaystyle A\subseteq X}; this can be reworded inplain Englishas:f{\displaystyle f}is continuous if and only if for every subsetA⊆X,{\displaystyle A\subseteq X,}f{\displaystyle f}maps points that are close toA{\displaystyle A}to points that are close tof(A).{\displaystyle f(A).}Similarly,f{\displaystyle f}is continuous at a fixed given pointx∈X{\displaystyle x\in X}if and only if wheneverx{\displaystyle x}is close to a subsetA⊆X,{\displaystyle A\subseteq X,}thenf(x){\displaystyle f(x)}is close tof(A).{\displaystyle f(A).}
The notion of closed set is defined above in terms ofopen sets, a concept that makes sense fortopological spaces, as well as for other spaces that carry topological structures, such asmetric spaces,differentiable manifolds,uniform spaces, andgauge spaces.
Whether a set is closed depends on the space in which it is embedded. However, thecompactHausdorff spacesare "absolutely closed", in the sense that, if you embed a compact Hausdorff spaceD{\displaystyle D}in an arbitrary Hausdorff spaceX,{\displaystyle X,}thenD{\displaystyle D}will always be a closed subset ofX{\displaystyle X}; the "surrounding space" does not matter here.Stone–Čech compactification, a process that turns acompletely regularHausdorff space into a compact Hausdorff space, may be described as adjoining limits of certain nonconvergent nets to the space.
Furthermore, every closed subset of a compact space is compact, and every compact subspace of a Hausdorff space is closed.
Closed sets also give a useful characterization of compactness: a topological spaceX{\displaystyle X}is compact if and only if every collection of nonempty closed subsets ofX{\displaystyle X}with empty intersection admits a finite subcollection with empty intersection.
A topological spaceX{\displaystyle X}isdisconnectedif there exist disjoint, nonempty, open subsetsA{\displaystyle A}andB{\displaystyle B}ofX{\displaystyle X}whose union isX.{\displaystyle X.}Furthermore,X{\displaystyle X}istotally disconnectedif it has anopen basisconsisting of closed sets.
A closed set contains its ownboundary. In other words, if you are "outside" a closed set, you may move a small amount in any direction and still stay outside the set. This is also true if the boundary is the empty set, e.g. in the metric space of rational numbers, for the set of numbers of which the square is less than2.{\displaystyle 2.}
In fact, if given a setX{\displaystyle X}and a collectionF≠∅{\displaystyle \mathbb {F} \neq \varnothing }of subsets ofX{\displaystyle X}such that the elements ofF{\displaystyle \mathbb {F} }have the properties listed above, then there exists a unique topologyτ{\displaystyle \tau }onX{\displaystyle X}such that the closed subsets of(X,τ){\displaystyle (X,\tau )}are exactly those sets that belong toF.{\displaystyle \mathbb {F} .}The intersection property also allows one to define theclosureof a setA{\displaystyle A}in a spaceX,{\displaystyle X,}which is defined as the smallest closed subset ofX{\displaystyle X}that is asupersetofA.{\displaystyle A.}Specifically, the closure ofA{\displaystyle A}can be constructed as the intersection of all of these closed supersets.
Sets that can be constructed as the union ofcountablymany closed sets are denotedFσsets. These sets need not be closed.
|
https://en.wikipedia.org/wiki/Closed_set
|
Anintegeris thenumberzero (0), a positivenatural number(1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...).[1]The negations oradditive inversesof the positive natural numbers are referred to asnegative integers.[2]Thesetof all integers is often denoted by theboldfaceZorblackboard boldZ{\displaystyle \mathbb {Z} }.[3][4]
The set of natural numbersN{\displaystyle \mathbb {N} }is asubsetofZ{\displaystyle \mathbb {Z} }, which in turn is a subset of the set of allrational numbersQ{\displaystyle \mathbb {Q} }, itself a subset of thereal numbersR{\displaystyle \mathbb {R} }.[a]Like the set of natural numbers, the set of integersZ{\displaystyle \mathbb {Z} }iscountably infinite. An integer may be regarded as a real number that can be written without afractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75,5+1/2, 5/4, and√2are not.[8]
The integers form the smallestgroupand the smallestringcontaining thenatural numbers. Inalgebraic number theory, the integers are sometimes qualified asrational integersto distinguish them from the more generalalgebraic integers. In fact, (rational) integers are algebraic integers that are alsorational numbers.
The word integer comes from theLatinintegermeaning "whole" or (literally) "untouched", fromin("not") plustangere("to touch"). "Entire" derives from the same origin via theFrenchwordentier, which means bothentireandinteger.[9]Historically the term was used for anumberthat was a multiple of 1,[10][11]or to the whole part of amixed number.[12][13]Only positive integers were considered, making the term synonymous with thenatural numbers. The definition of integer expanded over time to includenegative numbersas their usefulness was recognized.[14]For exampleLeonhard Eulerin his 1765Elements of Algebradefined integers to include both positive and negative numbers.[15]
The phrasethe set of the integerswas not used before the end of the 19th century, whenGeorg Cantorintroduced the concept ofinfinite setsandset theory. The use of the letter Z to denote the set of integers comes from theGermanwordZahlen("numbers")[3][4]and has been attributed toDavid Hilbert.[16]The earliest known use of the notation in a textbook occurs inAlgèbrewritten by the collectiveNicolas Bourbaki, dating to 1947.[3][17]The notation was not adopted immediately. For example, another textbook used the letter J,[18]and a 1960 paper used Z to denote the non-negative integers.[19]But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers.[20]
The symbolZ{\displaystyle \mathbb {Z} }is often annotated to denote various sets, with varying usage amongst different authors:Z+{\displaystyle \mathbb {Z} ^{+}},Z+{\displaystyle \mathbb {Z} _{+}}, orZ>{\displaystyle \mathbb {Z} ^{>}}for the positive integers,Z0+{\displaystyle \mathbb {Z} ^{0+}}orZ≥{\displaystyle \mathbb {Z} ^{\geq }}for non-negative integers, andZ≠{\displaystyle \mathbb {Z} ^{\neq }}for non-zero integers. Some authors useZ∗{\displaystyle \mathbb {Z} ^{*}}for non-zero integers, while others use it for non-negative integers, or for {−1,1} (thegroup of unitsofZ{\displaystyle \mathbb {Z} }). Additionally,Zp{\displaystyle \mathbb {Z} _{p}}is used to denote either the set ofintegers modulop(i.e., the set ofcongruence classesof integers), or the set ofp-adic integers.[21][22]
Thewhole numberswere synonymous with the integers up until the early 1950s.[23][24][25]In the late 1950s, as part of theNew Mathmovement,[26]American elementary school teachers began teaching thatwhole numbersreferred to thenatural numbers, excluding negative numbers, whileintegerincluded the negative numbers.[27][28]Thewhole numbersremain ambiguous to the present day.[29]
Like thenatural numbers,Z{\displaystyle \mathbb {Z} }isclosedunder theoperationsof addition andmultiplication, that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers (and importantly,0),Z{\displaystyle \mathbb {Z} }, unlike the natural numbers, is also closed undersubtraction.[30]
The integers form aringwhich is the most basic one, in the following sense: for any ring, there is a uniquering homomorphismfrom the integers into this ring. Thisuniversal property, namely to be aninitial objectin thecategory of rings, characterizes the ringZ{\displaystyle \mathbb {Z} }. This unique homomorphism isinjectiveif and only if thecharacteristicof the ring is zero. It follows that every ring of characteristic zero contains a subring isomorphic toZ{\displaystyle \mathbb {Z} }, which is its smallest subring.
Z{\displaystyle \mathbb {Z} }is not closed underdivision, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed underexponentiation, the integers are not (since the result can be a fraction when the exponent is negative).
The following table lists some of the basic properties of addition and multiplication for any integersa,b, andc:
The first five properties listed above for addition say thatZ{\displaystyle \mathbb {Z} }, under addition, is anabelian group. It is also acyclic group, since every non-zero integer can be written as a finite sum1 + 1 + ... + 1or(−1) + (−1) + ... + (−1). In fact,Z{\displaystyle \mathbb {Z} }under addition is theonlyinfinite cyclic group—in the sense that any infinite cyclic group isisomorphictoZ{\displaystyle \mathbb {Z} }.
The first four properties listed above for multiplication say thatZ{\displaystyle \mathbb {Z} }under multiplication is acommutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means thatZ{\displaystyle \mathbb {Z} }under multiplication is not a group.
All the rules from the above property table (except for the last), when taken together, say thatZ{\displaystyle \mathbb {Z} }together with addition and multiplication is acommutative ringwithunity. It is the prototype of all objects of suchalgebraic structure. The onlyequalitiesofexpressionsthat are true inZ{\displaystyle \mathbb {Z} }for allvalues of variablesare those that are true in any unital commutative ring. Certain non-zero integers map tozeroin certain rings.
The lack ofzero divisorsin the integers (last property in the table) means that the commutative ringZ{\displaystyle \mathbb {Z} }is anintegral domain.
The lack of multiplicative inverses, which is equivalent to the fact thatZ{\displaystyle \mathbb {Z} }is not closed under division, means thatZ{\displaystyle \mathbb {Z} }isnotafield. The smallest field containing the integers as asubringis the field ofrational numbers. The process of constructing the rationals from the integers can be mimicked to form thefield of fractionsof any integral domain. And back, starting from analgebraic number field(an extension of rational numbers), itsring of integerscan be extracted, which includesZ{\displaystyle \mathbb {Z} }as itssubring.
Although ordinary division is not defined onZ{\displaystyle \mathbb {Z} }, the division "with remainder" is defined on them. It is calledEuclidean division, and possesses the following important property: given two integersaandbwithb≠ 0, there exist unique integersqandrsuch thata=q×b+rand0 ≤r< |b|, where|b|denotes theabsolute valueofb. The integerqis called thequotientandris called theremainderof the division ofabyb. TheEuclidean algorithmfor computinggreatest common divisorsworks by a sequence of Euclidean divisions.
The above says thatZ{\displaystyle \mathbb {Z} }is aEuclidean domain. This implies thatZ{\displaystyle \mathbb {Z} }is aprincipal ideal domain, and any positive integer can be written as a product ofprimesin anessentially uniqueway.[31]This is thefundamental theorem of arithmetic.
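Python's built-in divmod performs floored division, whose remainder carries the sign of the divisor, so a small adjustment is needed to obtain the Euclidean convention 0 ≤ r < |b| described above. A sketch (the helper name is ours):

```python
def euclidean_divmod(a, b):
    """Return (q, r) with a == q*b + r and 0 <= r < abs(b)."""
    q, r = divmod(a, b)        # floored division: r has the sign of b
    if r < 0:                  # shift the remainder into [0, |b|)
        q, r = q + 1, r - b
    return q, r

print(euclidean_divmod(7, -3))   # (-2, 1): 7 == (-2)*(-3) + 1
print(euclidean_divmod(-7, 3))   # (-3, 2): -7 == (-3)*3 + 2
```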
Z{\displaystyle \mathbb {Z} }is atotally ordered setwithoutupper or lower bound. The ordering ofZ{\displaystyle \mathbb {Z} }is given by:... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ....
An integer ispositiveif it is greater thanzero, andnegativeif it is less than zero. Zero is defined as neither negative nor positive.
The ordering of integers is compatible with the algebraic operations in the following way: if a < b and c < d, then a + c < b + d; and if a < b and 0 < c, then ac < bc.
Thus it follows thatZ{\displaystyle \mathbb {Z} }together with the above ordering is anordered ring.
The integers are the only nontrivialtotally orderedabelian groupwhose positive elements arewell-ordered.[32]This is equivalent to the statement that anyNoetherianvaluation ringis either afieldor adiscrete valuation ring.
In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers,zero, and the negations of the natural numbers. This can be formalized as follows.[33]First construct the set of natural numbers according to thePeano axioms, call thisP{\displaystyle P}. Then construct a setP−{\displaystyle P^{-}}which isdisjointfromP{\displaystyle P}and in one-to-one correspondence withP{\displaystyle P}via a functionψ{\displaystyle \psi }. For example, takeP−{\displaystyle P^{-}}to be theordered pairs(1,n){\displaystyle (1,n)}with the mappingψ=n↦(1,n){\displaystyle \psi =n\mapsto (1,n)}. Finally let 0 be some object not inP{\displaystyle P}orP−{\displaystyle P^{-}}, for example the ordered pair (0,0). Then the integers are defined to be the unionP∪P−∪{0}{\displaystyle P\cup P^{-}\cup \{0\}}.
The traditional arithmetic operations can then be defined on the integers in apiecewisefashion, for each of positive numbers, negative numbers, and zero. For examplenegationis defined as follows:
−x={ψ(x),ifx∈Pψ−1(x),ifx∈P−0,ifx=0{\displaystyle -x={\begin{cases}\psi (x),&{\text{if }}x\in P\\\psi ^{-1}(x),&{\text{if }}x\in P^{-}\\0,&{\text{if }}x=0\end{cases}}}
The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic.[34]
In modern set-theoretic mathematics, a more abstract construction[35][36]allowing one to define arithmetical operations without any case distinction is often used instead.[37]The integers can thus be formally constructed as theequivalence classesofordered pairsofnatural numbers(a,b).[38]
The intuition is that(a,b)stands for the result of subtractingbfroma.[38]To confirm our expectation that1 − 2and4 − 5denote the same number, we define anequivalence relation~on these pairs with the following rule:
(a, b) ~ (c, d) precisely when a + d = b + c.
Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers;[38]by using[(a,b)]to denote the equivalence class having(a,b)as a member, one has: [(a, b)] + [(c, d)] := [(a + c, b + d)] and [(a, b)] · [(c, d)] := [(ac + bd, ad + bc)].
The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: −[(a, b)] := [(b, a)].
Hence subtraction can be defined as the addition of the additive inverse: [(a, b)] − [(c, d)] := [(a + d, b + c)].
The standard ordering on the integers is given by: [(a, b)] < [(c, d)] if and only if a + d < b + c.
It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes.
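The construction is concrete enough to transcribe directly. The sketch below (illustrative, using the standard pair formulas given above) represents an integer as a pair (a, b) of naturals standing for a − b, and checks that 1 − 2 and 4 − 5 are equivalent and that their product represents 1.

```python
def equiv(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c                   # (a,b) ~ (c,d) precisely when a + d = b + c

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)                   # [(a,b)] + [(c,d)] = [(a+c, b+d)]

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)   # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]

def neg(p):
    a, b = p
    return (b, a)                           # -[(a,b)] = [(b,a)]

print(equiv((1, 2), (4, 5)))                     # True: both pairs represent -1
print(equiv(mul((1, 2), (4, 5)), (1, 0)))        # True: (-1) * (-1) represents 1
print(equiv(add((1, 2), neg((1, 2))), (0, 0)))   # True: x + (-x) represents 0
```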
Every equivalence class has a unique member that is of the form(n,0)or(0,n)(or both at once). The natural numbernis identified with the class[(n,0)](i.e., the natural numbers areembeddedinto the integers by the map sendingnto[(n,0)]), and the class[(0,n)]is denoted−n(this covers all remaining classes, and gives the class[(0,0)]a second time since −0 = 0).
Thus,[(a,b)]is denoted by a − b if a ≥ b, and by −(b − a) if a < b.
If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity.
This notation recovers the familiarrepresentationof the integers as{..., −2, −1, 0, 1, 2, ...}.
Some examples are:
In theoretical computer science, other approaches for the construction of integers are used byautomated theorem proversandterm rewrite engines. Integers are represented asalgebraic termsbuilt using a few basic operations (e.g.,zero,succ,pred) and usingnatural numbers, which are assumed to be already constructed (using thePeano approach).
There exist at least ten such constructions of signed integers.[39]These constructions differ in several ways: the number of basic operations used for the construction; the number (usually between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations; and whether these operations are free constructors or not, i.e., whether the same integer can be represented using only one or using many algebraic terms.
The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operationpair(x,y){\displaystyle (x,y)}that takes as arguments two natural numbersx{\displaystyle x}andy{\displaystyle y}, and returns an integer (equal tox−y{\displaystyle x-y}). This operation is not free since the integer 0 can be writtenpair(0,0), orpair(1,1), orpair(2,2), etc. This technique of construction is used by theproof assistantIsabelle; however, many other tools use alternative construction techniques, notably those based upon free constructors, which are simpler and can be implemented more efficiently in computers.
An integer is often a primitivedata typeincomputer languages. However, integer data types can only represent asubsetof all integers, since practical computers are of finite capacity. Also, in the commontwo's complementrepresentation, the inherent definition ofsigndistinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denotedintor Integer in several programming languages (such asAlgol68,C,Java,Delphi, etc.).
Variable-length representations of integers, such asbignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).
The set of integers iscountably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is the alternating enumeration 0, 1, −1, 2, −2, 3, −3, ..., in which every integer appears exactly once.
More technically, thecardinalityofZ{\displaystyle \mathbb {Z} }is said to equalℵ0(aleph-null). The pairing between elements ofZ{\displaystyle \mathbb {Z} }andN{\displaystyle \mathbb {N} }is called abijection.
This article incorporates material from Integer onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Integer
|
Incryptography, theElliptic Curve Digital Signature Algorithm(ECDSA) offers a variant of theDigital Signature Algorithm(DSA) which useselliptic-curve cryptography.
As with elliptic-curve cryptography in general, the bitsizeof theprivate keybelieved to be needed for ECDSA is about twice the size of thesecurity level, in bits.[1]For example, at a security level of 80 bits—meaning an attacker requires a maximum of about280{\displaystyle 2^{80}}operations to find the private key—the size of an ECDSA private key would be 160 bits. On the other hand, the signature size is the same for both DSA and ECDSA: approximately4t{\displaystyle 4t}bits, wheret{\displaystyle t}is the exponent in the formula2t{\displaystyle 2^{t}}, that is, about 320 bits for a security level of 80 bits, which is equivalent to280{\displaystyle 2^{80}}operations.
SupposeAlicewants to send a signed message toBob. Initially, they must agree on the curve parameters(CURVE,G,n){\displaystyle ({\textrm {CURVE}},G,n)}. In addition to thefieldand equation of the curve, we needG{\displaystyle G}, a base point of prime order on the curve;n{\displaystyle n}is the additive order of the pointG{\displaystyle G}.
The ordern{\displaystyle n}of the base pointG{\displaystyle G}must be prime. Indeed, we assume that every nonzero element of theringZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }is invertible, so thatZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }must be afield. It implies thatn{\displaystyle n}must be prime (cf.Bézout's identity).
Alice creates a key pair, consisting of a private key integerdA{\displaystyle d_{A}}, randomly selected in the interval[1,n−1]{\displaystyle [1,n-1]}; and a public key curve pointQA=dA×G{\displaystyle Q_{A}=d_{A}\times G}. We use×{\displaystyle \times }to denoteelliptic curve point multiplication by a scalar.
For Alice to sign a messagem{\displaystyle m}, she follows these steps:
1. Calculate e = HASH(m), where HASH is a cryptographic hash function, such as SHA-2.
2. Let z be the Ln leftmost bits of e, where Ln is the bit length of the group order n.
3. Select a cryptographically secure random integer k from [1, n − 1].
4. Calculate the curve point (x1, y1) = k × G.
5. Calculate r = x1 mod n. If r = 0, go back to step 3.
6. Calculate s = k⁻¹(z + r dA) mod n. If s = 0, go back to step 3.
7. The signature is the pair (r, s).
As the standard notes, it is not only required fork{\displaystyle k}to be secret, but it is also crucial to select differentk{\displaystyle k}for different signatures. Otherwise, the equation in step 6 can be solved fordA{\displaystyle d_{A}}, the private key: given two signatures(r,s){\displaystyle (r,s)}and(r,s′){\displaystyle (r,s')}, employing the same unknownk{\displaystyle k}for different known messagesm{\displaystyle m}andm′{\displaystyle m'}, an attacker can calculatez{\displaystyle z}andz′{\displaystyle z'}, and sinces−s′=k−1(z−z′){\displaystyle s-s'=k^{-1}(z-z')}(all operations in this paragraph are done modulon{\displaystyle n}) the attacker can findk=z−z′s−s′{\displaystyle k={\frac {z-z'}{s-s'}}}. Sinces=k−1(z+rdA){\displaystyle s=k^{-1}(z+rd_{A})}, the attacker can now calculate the private keydA=sk−zr{\displaystyle d_{A}={\frac {sk-z}{r}}}.
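The algebra of this attack needs no curve arithmetic at all, only arithmetic modulo n, so it can be demonstrated with toy numbers. In the Python sketch below, the modulus is an arbitrary prime standing in for the real group order, and r is a fixed stand-in for the x-coordinate shared by the two signatures; all values are illustrative.

```python
n = 2**127 - 1            # a Mersenne prime standing in for the curve order (toy value)
d = 123456789123456789    # "private key" d_A (toy value)
k = 987654321987654321    # the nonce, mistakenly reused for both messages
r = 555555555555555555    # stand-in for the shared r = x-coordinate of k*G
z1, z2 = 111111, 222222   # "hashes" of the two distinct messages

s1 = pow(k, -1, n) * (z1 + r * d) % n     # s = k^-1 (z + r*d_A) mod n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# Attacker's computation from the two signatures (r, s1), (r, s2) and hashes z1, z2:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(k_rec == k, d_rec == d)             # True True
```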
This implementation failure was used, for example, to extract the signing key used for thePlayStation 3gaming-console.[3]
Another way ECDSA signature may leak private keys is whenk{\displaystyle k}is generated by a faultyrandom number generator. Such a failure in random number generation caused users of Android Bitcoin Wallet to lose their funds in August 2013.[4]
To ensure thatk{\displaystyle k}is unique for each message, one may bypass random number generation completely and generate deterministic signatures by derivingk{\displaystyle k}from both the message and the private key.[5]
For Bob to authenticate Alice's signaturer,s{\displaystyle r,s}on a messagem{\displaystyle m}, he must have a copy of her public-key curve pointQA{\displaystyle Q_{A}}. Bob can verifyQA{\displaystyle Q_{A}}is a valid curve point as follows:
1. Check that QA is not equal to the identity element O, and its coordinates are otherwise valid.
2. Check that QA lies on the curve.
3. Check that n × QA = O.
After that, Bob follows these steps:
1. Verify that r and s are integers in [1, n − 1]. If not, the signature is invalid.
2. Calculate e = HASH(m), where HASH is the same function used in the signature generation.
3. Let z be the Ln leftmost bits of e.
4. Calculate u1 = z s⁻¹ mod n and u2 = r s⁻¹ mod n.
5. Calculate the curve point (x1, y1) = u1 × G + u2 × QA. If (x1, y1) = O then the signature is invalid.
6. The signature is valid if r ≡ x1 (mod n), invalid otherwise.
Note that an efficient implementation would compute inverses−1modn{\displaystyle s^{-1}\,{\bmod {\,}}n}only once. Also, using Shamir's trick, a sum of two scalar multiplicationsu1×G+u2×QA{\displaystyle u_{1}\times G+u_{2}\times Q_{A}}can be calculated faster than two scalar multiplications done independently.[6]
It is not immediately obvious why verification even functions correctly. To see why, denote asCthe curve point computed in step 5 of verification: C = u1 × G + u2 × QA.
From the definition of the public key asQA=dA×G{\displaystyle Q_{A}=d_{A}\times G}, C = u1 × G + u2 dA × G.
Because elliptic curve scalar multiplication distributes over addition, C = (u1 + u2 dA) × G.
Expanding the definition ofu1{\displaystyle u_{1}}andu2{\displaystyle u_{2}}from verification step 4, C = (z s⁻¹ + r dA s⁻¹) × G.
Collecting the common terms−1{\displaystyle s^{-1}}, C = (z + r dA) s⁻¹ × G.
Expanding the definition ofsfrom signature step 6, C = (z + r dA)(z + r dA)⁻¹(k⁻¹)⁻¹ × G.
Since the inverse of an inverse is the original element, and the product of an element's inverse and the element is the identity, we are left with C = k × G.
From the definition ofr, this is verification step 6.
This shows only that a correctly signed message will verify correctly; other properties such as incorrectly signed messages failing to verify correctly and resistance tocryptanalyticattacks are required for a secure signature algorithm.
Given a messagemand Alice's signaturer,s{\displaystyle r,s}on that message, Bob can (potentially) recover Alice's public key:[7]
1. Verify that r and s are integers in [1, n − 1]. If not, the signature is invalid.
2. Calculate a curve point R = (x1, y1), where x1 is one of r, r + n, r + 2n, etc. (provided x1 is not too large for the field of the curve) and y1 is a value such that R lies on the curve; note that there may be several such curve points.
3. Calculate e = HASH(m).
4. Let z be the Ln leftmost bits of e.
5. Calculate u1 = −z r⁻¹ mod n and u2 = s r⁻¹ mod n.
6. Calculate the curve point QA = u1 × G + u2 × R.
Note that an invalid signature, or a signature from a different message, will result in the recovery of an incorrect public key. The recovery algorithm can only be used to check validity of a signature if the signer's public key (or its hash) is known beforehand.
Start with the definition ofQA{\displaystyle Q_{A}}from recovery step 6, QA = u1 × G + u2 × R.
From the definitionR=(x1,y1)=k×G{\displaystyle R=(x_{1},y_{1})=k\times G}from signing step 4, QA = u1 × G + u2 k × G.
Because elliptic curve scalar multiplication distributes over addition, QA = (u1 + u2 k) × G.
Expanding the definition ofu1{\displaystyle u_{1}}andu2{\displaystyle u_{2}}from recovery step 5, QA = (−z r⁻¹ + s k r⁻¹) × G.
Expanding the definition ofsfrom signature step 6, QA = (−z r⁻¹ + k⁻¹(z + r dA) k r⁻¹) × G.
Since the product of an element's inverse and the element is the identity, we are left with QA = (−z r⁻¹ + (z + r dA) r⁻¹) × G = (−z r⁻¹ + z r⁻¹ + dA) × G.
The first and second terms cancel each other out, QA = dA × G.
From the definition ofQA=dA×G{\displaystyle Q_{A}=d_{A}\times G}, this is Alice's public key.
This shows that a correctly signed message will recover the correct public key, provided additional information was shared to uniquely calculate curve pointR=(x1,y1){\displaystyle R=(x_{1},y_{1})}from signature valuer.
In December 2010, a group calling itselffail0verflowannounced the recovery of the ECDSA private key used bySonyto sign software for thePlayStation 3game console. However, this attack only worked because Sony did not properly implement the algorithm, becausek{\displaystyle k}was static instead of random. As pointed out in theSignature generation algorithmsection above, this makesdA{\displaystyle d_{A}}solvable, rendering the entire algorithm useless.[8]
On March 29, 2011, two researchers published anIACRpaper[9]demonstrating that it is possible to retrieve a TLS private key of a server usingOpenSSLthat authenticates with Elliptic Curves DSA over a binaryfieldvia atiming attack.[10]The vulnerability was fixed in OpenSSL 1.0.0e.[11]
In August 2013, it was revealed that bugs in some implementations of theJavaclassSecureRandomsometimes generated collisions in thek{\displaystyle k}value. This allowed hackers to recover private keys giving them the same control over bitcoin transactions as legitimate keys' owners had, using the same exploit that was used to reveal the PS3 signing key on someAndroidapp implementations, which use Java and rely on ECDSA to authenticate transactions.[12]
This issue can be prevented by deterministic generation of k, as described by RFC 6979.
Some concerns expressed about ECDSA:
Below is a list of cryptographic libraries that provide support for ECDSA:
|
https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
|
RSAmay refer to:
|
https://en.wikipedia.org/wiki/RSA#Signature_generation_and_verification
|
Inalgebra, aunitorinvertible element[a]of aringis aninvertible elementfor the multiplication of the ring. That is, an elementuof a ringRis a unit if there existsvinRsuch thatvu=uv=1,{\displaystyle vu=uv=1,}where1is themultiplicative identity; the elementvis unique for this property and is called themultiplicative inverseofu.[1][2]The set of units ofRforms agroupR×under multiplication, called thegroup of unitsorunit groupofR.[b]Other notations for the unit group areR∗,U(R), andE(R)(from the German termEinheit).
Less commonly, the termunitis sometimes used to refer to the element1of the ring, in expressions likering with a unitorunit ring, and alsounit matrix. Because of this ambiguity,1is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of arng.
The multiplicative identity1and its additive inverse−1are always units. More generally, anyroot of unityin a ringRis a unit: ifrn= 1, thenrn−1is a multiplicative inverse ofr.
In anonzero ring, theelement 0is not a unit, soR×is not closed under addition.
A nonzero ringRin which every nonzero element is a unit (that is,R×=R∖ {0}) is called adivision ring(or a skew-field). A commutative division ring is called afield. For example, the unit group of the field ofreal numbersRisR∖ {0}.
In the ring ofintegersZ, the only units are1and−1.
In the ringZ/nZofintegers modulon, the units are the congruence classes(modn)represented by integerscoprimeton. They constitute themultiplicative group of integers modulon.
In the ringZ[√3]obtained by adjoining thequadratic integer√3toZ, one has(2 +√3)(2 −√3) = 1, so2 +√3is a unit, and so are its powers, soZ[√3]has infinitely many units.
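Writing an element of Z[√3] as a pair (x, y) with value x + y√3, a product with the unit 2 + √3 is again a unit, and every power still satisfies the Pell-type equation x² − 3y² = 1. A short Python check (illustrative only):

```python
# Successive powers of the unit 2 + sqrt(3) in Z[sqrt(3)], stored as pairs (x, y).
x, y = 2, 1
for _ in range(5):
    print((x, y), x * x - 3 * y * y)   # the "norm" x^2 - 3*y^2 stays equal to 1
    # multiply (x + y*sqrt(3)) by (2 + sqrt(3)):
    x, y = 2 * x + 3 * y, x + 2 * y
```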
More generally, for thering of integersRin anumber fieldF,Dirichlet's unit theoremstates thatR×is isomorphic to the groupZn×μR{\displaystyle \mathbf {Z} ^{n}\times \mu _{R}}whereμR{\displaystyle \mu _{R}}is the (finite, cyclic) group of roots of unity inRandn, therankof the unit group, isn=r1+r2−1,{\displaystyle n=r_{1}+r_{2}-1,}wherer1,r2{\displaystyle r_{1},r_{2}}are the number of real embeddings and the number of pairs of complex embeddings ofF, respectively.
This recovers theZ[√3]example: The unit group of (the ring of integers of) areal quadratic fieldis infinite of rank 1, sincer1=2,r2=0{\displaystyle r_{1}=2,r_{2}=0}.
For a commutative ringR, the units of thepolynomial ringR[x]are the polynomialsp(x)=a0+a1x+⋯+anxn{\displaystyle p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}}such thata0is a unit inRand the remaining coefficientsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}arenilpotent, i.e., satisfyaiN=0{\displaystyle a_{i}^{N}=0}for someN.[4]In particular, ifRis adomain(or more generallyreduced), then the units ofR[x]are the units ofR.
The units of thepower series ringR[[x]]{\displaystyle R[[x]]}are the power seriesp(x)=∑i=0∞aixi{\displaystyle p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}}such thata0is a unit inR.[5]
The unit group of the ringMn(R)ofn×nmatricesover a ringRis the groupGLn(R)ofinvertible matrices. For a commutative ringR, an elementAofMn(R)is invertible if and only if thedeterminantofAis invertible inR. In that case,A−1can be given explicitly in terms of theadjugate matrix.
For elementsxandyin a ringR, if1−xy{\displaystyle 1-xy}is invertible, then1−yx{\displaystyle 1-yx}is invertible with inverse1+y(1−xy)−1x{\displaystyle 1+y(1-xy)^{-1}x};[6]this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:(1−yx)−1=∑n≥0(yx)n=1+y(∑n≥0(xy)n)x=1+y(1−xy)−1x.{\displaystyle (1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.}SeeHua's identityfor similar results.
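The formula can be checked in a genuinely non-commutative ring such as 2×2 rational matrices. The sketch below (illustrative, with ad-hoc helpers for 2×2 arithmetic) verifies that 1 + y(1 − xy)⁻¹x is indeed the inverse of 1 − yx for one concrete choice of x and y.

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_neg(A):
    return [[-e for e in row] for row in A]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

I = [[F(1), F(0)], [F(0), F(1)]]
x = [[F(0), F(1)], [F(2), F(0)]]
y = [[F(1), F(1)], [F(0), F(1)]]

one_minus_xy = mat_add(I, mat_neg(mat_mul(x, y)))
one_minus_yx = mat_add(I, mat_neg(mat_mul(y, x)))
candidate = mat_add(I, mat_mul(y, mat_mul(mat_inv(one_minus_xy), x)))  # 1 + y(1-xy)^-1 x

print(mat_mul(one_minus_yx, candidate) == I)  # True
```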
Acommutative ringis alocal ringifR∖R×is amaximal ideal.
As it turns out, ifR∖R×is an ideal, then it is necessarily amaximal idealandRislocalsince amaximal idealis disjoint fromR×.
IfRis afinite field, thenR×is acyclic groupof order|R| − 1.
Everyring homomorphismf:R→Sinduces agroup homomorphismR×→S×, sincefmaps units to units. In fact, the formation of the unit group defines afunctorfrom thecategory of ringsto thecategory of groups. This functor has aleft adjointwhich is the integralgroup ringconstruction.[7]
Thegroup schemeGL1{\displaystyle \operatorname {GL} _{1}}is isomorphic to themultiplicative group schemeGm{\displaystyle \mathbb {G} _{m}}over any base, so for any commutative ringR, the groupsGL1(R){\displaystyle \operatorname {GL} _{1}(R)}andGm(R){\displaystyle \mathbb {G} _{m}(R)}are canonically isomorphic toU(R). Note that the functorGm{\displaystyle \mathbb {G} _{m}}(that is,R↦U(R)) isrepresentablein the sense:Gm(R)≃Hom(Z[t,t−1],R){\displaystyle \mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)}for commutative ringsR(this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of the ring homomorphismsZ[t,t−1]→R{\displaystyle \mathbb {Z} [t,t^{-1}]\to R}and the set of unit elements ofR(in contrast,Z[t]{\displaystyle \mathbb {Z} [t]}represents the additive groupGa{\displaystyle \mathbb {G} _{a}}, theforgetful functorfrom the category of commutative rings to thecategory of abelian groups).
Suppose thatRis commutative. ElementsrandsofRare calledassociateif there exists a unituinRsuch thatr=us; then writer~s. In any ring, pairs ofadditive inverseelements[c]xand−xareassociate, since any ring includes the unit−1. For example, 6 and −6 are associate inZ. In general,~is anequivalence relationonR.
Associatedness can also be described in terms of theactionofR×onRvia multiplication: Two elements ofRare associate if they are in the sameR×-orbit.
In anintegral domain, the set of associates of a given nonzero element has the samecardinalityasR×.
The equivalence relation~can be viewed as any one ofGreen's semigroup relationsspecialized to the multiplicativesemigroupof a commutative ringR.
|
https://en.wikipedia.org/wiki/Group_of_units
|
Incryptography,RC4(Rivest Cipher 4, also known asARC4orARCFOUR, meaning Alleged RC4, see below) is astream cipher. While it is remarkable for its simplicity and speed in software, multiple vulnerabilities have been discovered in RC4, rendering it insecure.[3][4]It is especially vulnerable when the beginning of the outputkeystreamis not discarded, or when nonrandom or related keys are used. Particularly problematic uses of RC4 have led to very insecureprotocolssuch asWEP.[5]
As of 2015[update], there is speculation that some state cryptologic agencies may possess the capability to break RC4 when used in theTLS protocol.[6]IETFhas published RFC 7465 to prohibit the use of RC4 in TLS;[3]MozillaandMicrosofthave issued similar recommendations.[7][8]
A number of attempts have been made to strengthen RC4, notably Spritz, RC4A,VMPC, and RC4+.
RC4 was designed byRon RivestofRSA Securityin 1987. While it is officially termed "Rivest Cipher 4", the RC acronym is alternatively understood to stand for "Ron's Code"[9](see alsoRC2,RC5andRC6).
RC4 was initially atrade secret, but in September 1994, a description of it was anonymously posted to theCypherpunksmailing list.[10]It was soon posted on thesci.cryptnewsgroup, where it wasbrokenwithin days byBob Jenkins.[11]From there, it spread to many sites on the Internet. The leaked code was confirmed to be genuine, as its output was found to match that of proprietary software using licensed RC4. Because the algorithm is known, it is no longer a trade secret. The nameRC4is trademarked, so RC4 is often referred to asARCFOURorARC4(meaningalleged RC4)[12]to avoid trademark problems.RSA Securityhas never officially released the algorithm; Rivest has, however, linked to theEnglish Wikipediaarticle on RC4 in his own course notes in 2008[13]and confirmed the history of RC4 and its code in a 2014 paper by him.[14]
RC4 became part of some commonly used encryption protocols and standards, such asWEPin 1997 andWPAin 2003/2004 for wireless cards; andSSLin 1995 and its successorTLSin 1999, until it was prohibited for all versions of TLS by RFC 7465 in 2015, due to theRC4 attacksweakening or breaking RC4 used in SSL/TLS. The main factors in RC4's success over such a wide range of applications have been its speed and simplicity: efficient implementations in both software and hardware were very easy to develop.
RC4 generates apseudorandom stream of bits(akeystream). As with any stream cipher, these can be used for encryption by combining them with the plaintext using bitwiseexclusive or; decryption is performed the same way (since exclusive or with given data is aninvolution). This is similar to theone-time pad, except that generatedpseudorandom bits, rather than a prepared stream, are used.
To generate the keystream, the cipher makes use of a secret internal state which consists of two parts: a permutation of all 256 possible bytes (denoted "S" below), and two 8-bit index-pointers (denoted "i" and "j").
The permutation is initialized with a variable-lengthkey, typically between 40 and 2048 bits, using thekey-schedulingalgorithm (KSA). Once this has been completed, the stream of bits is generated using thepseudo-random generation algorithm(PRGA).
Thekey-schedulingalgorithm is used to initialize the permutation in the array "S". "keylength" is defined as the number of bytes in the key and can be in the range 1 ≤ keylength ≤ 256, typically between 5 and 16, corresponding to akey lengthof 40–128 bits. First, the array "S" is initialized to theidentity permutation. S is then processed for 256 iterations in a similar way to the main PRGA, but also mixes in bytes of the key at the same time.
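A minimal Python sketch of the key-scheduling algorithm described above (illustrative only, not a vetted cryptographic implementation):

def rc4_ksa(key: bytes) -> list[int]:
    # Initialize S to the identity permutation, then mix in the key bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S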
For as many iterations as are needed, the PRGA modifies the state and outputs a byte of the keystream. In each iteration, the PRGA increments i, adds S[i] to j, swaps the values of S[i] and S[j], and then outputs the element of S at index S[i] + S[j] (mod 256) as the keystream byte K.
Each element of S is swapped with another element at least once every 256 iterations.
Thus, this produces a stream ofK[0], K[1], ...which areXORedwith theplaintextto obtain theciphertext. Sociphertext[l] = plaintext[l] ⊕ K[l].
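A matching sketch of the pseudo-random generation algorithm and the XOR step (it reuses the rc4_ksa helper sketched above; again illustrative only):

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # RC4 encryption and decryption are the same operation (XOR with the keystream).
    S = rc4_ksa(key)
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        K = S[(S[i] + S[j]) % 256]   # next keystream byte
        out.append(byte ^ K)         # ciphertext[l] = plaintext[l] XOR K[l]
    return bytes(out)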
Severaloperating systemsincludearc4random, an API originating inOpenBSDproviding access to a random number generator originally based on RC4. The API allows no seeding, as the function initializes itself using/dev/random. The use of RC4 has been phased out in most systems implementing this API.Man pagesfor the new arc4random include thebackronym"A Replacement Call for Random" for ARC4 as a mnemonic, as it provides better random data thanrand()does.[15]
Proposed new random number generators are often compared to the RC4 random number generator.[22][23]
Several attacks on RC4 are able todistinguish its output from a random sequence.[24]
Many stream ciphers are based onlinear-feedback shift registers(LFSRs), which, while efficient in hardware, are less so in software. The design of RC4 avoids the use of LFSRs and is ideal for software implementation, as it requires only byte manipulations. It uses 256 bytes of memory for the state array, S[0] through S[255], k bytes of memory for the key, key[0] through key[k−1], and integer variables, i, j, and K. Performing a modular reduction of some value modulo 256 can be done with abitwise ANDwith 255 (which is equivalent to taking the low-order byte of the value in question).
These test vectors are not official, but convenient for anyone testing their own RC4 program. The keys and plaintext areASCII, the keystream and ciphertext are inhexadecimal.
Unlike a modern stream cipher (such as those ineSTREAM), RC4 does not take a separatenoncealongside the key. This means that if a single long-term key is to be used to securely encrypt multiple streams, the protocol must specify how to combine the nonce and the long-term key to generate the stream key for RC4. One approach to addressing this is to generate a "fresh" RC4 key byhashinga long-term key with anonce. However, many applications that use RC4 simply concatenate key and nonce; RC4's weakkey schedulethen gives rise torelated-key attacks, like theFluhrer, Mantin and Shamir attack(which is famous for breaking theWEPstandard).[25]
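A hedged illustration of the "hash key and nonce" approach (the function name and the choice of SHA-256 are assumptions for the sketch, not part of any RC4 specification):

import hashlib, os

def fresh_rc4_key(long_term_key: bytes, nonce: bytes) -> bytes:
    # Derive a per-message key by hashing, instead of concatenating key and
    # nonce directly (plain concatenation enables related-key attacks such as FMS).
    return hashlib.sha256(long_term_key + nonce).digest()[:16]

stream_key = fresh_rc4_key(b"long-term secret", os.urandom(16))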
Because RC4 is astream cipher, it is moremalleablethan commonblock ciphers. If not used together with a strongmessage authentication code(MAC), then encryption is vulnerable to abit-flipping attack. The cipher is also vulnerable to astream cipher attackif not implemented correctly.[26]
It is noteworthy, however, that RC4, being a stream cipher, was for a period of time the only common cipher that was immune[27]to the 2011BEAST attackonTLS 1.0. The attack exploits a known weakness in the waycipher-block chaining modeis used with all of the other ciphers supported by TLS 1.0, which are all block ciphers.
In March 2013, there were new attack scenarios proposed by Isobe, Ohigashi, Watanabe and Morii,[28]as well as AlFardan, Bernstein, Paterson, Poettering and Schuldt that use new statistical biases in the RC4 key table[29]to recover plaintext with a large number of TLS encryptions.[30][31]
The use of RC4 in TLS is prohibited by RFC 7465 published in February 2015.
In 1995, Andrew Roos experimentally observed that the first byte of the keystream is correlated with the first three bytes of the key, and the first few bytes of the permutation after the KSA are correlated with some linear combination of the key bytes.[32]These biases remained unexplained until 2007, when Goutam Paul, Siddheshwar Rathi and Subhamoy Maitra[33]proved the keystream–key correlation and, in another work, Goutam Paul and Subhamoy Maitra[34]proved the permutation–key correlations. The latter work also used the permutation–key correlations to design the first algorithm for complete key reconstruction from the final permutation after the KSA, without any assumption on the key orinitialization vector. This algorithm has a constant probability of success in a time that is the square root of the exhaustive key search complexity. Subsequently, many other works have been performed on key reconstruction from RC4 internal states.[35][36][37]Subhamoy Maitra and Goutam Paul[38]also showed that the Roos-type biases still persist even when one considers nested permutation indices, likeS[S[i]]orS[S[S[i]]]. These types of biases are used in some of the later key reconstruction methods for increasing the success probability.
The keystream generated by the RC4 is biased to varying degrees towards certain sequences, making it vulnerable todistinguishing attacks. The best such attack is due to Itsik Mantin andAdi Shamir, who showed that the second output byte of the cipher was biased toward zero with probability 1/128 (instead of 1/256). This is due to the fact that if the third byte of the original state is zero, and the second byte is not equal to 2, then the second output byte is always zero. Such bias can be detected by observing only 256 bytes.[24]
Souradyuti PaulandBart PreneelofCOSICshowed that the first and the second bytes of the RC4 were also biased. The number of required samples to detect this bias is 2^25 bytes.[39]
Scott Fluhrerand David McGrew also showed attacks that distinguished the keystream of the RC4 from a random stream given a gigabyte of output.[40]
The complete characterization of a single step of RC4 PRGA was performed by Riddhipratim Basu, Shirshendu Ganguly, Subhamoy Maitra, and Goutam Paul.[41]Considering all the permutations, they proved that the distribution of the output is not uniform given i and j, and as a consequence, information about j is always leaked into the output.
In 2001, a new and surprising discovery was made byFluhrer,MantinandShamir: over all the possible RC4 keys, the statistics for the first few bytes of output keystream are strongly non-random, leaking information about the key. If the nonce and long-term key are simply concatenated to generate the RC4 key, this long-term key can be discovered by analysing a large number of messages encrypted with this key.[42]This and related effects were then used to break theWEP("wired equivalent privacy") encryption used with802.11wireless networks. This caused a scramble for a standards-based replacement for WEP in the 802.11 market and led to theIEEE 802.11ieffort andWPA.[43]
Protocols can defend against this attack by discarding the initial portion of the keystream. Such a modified algorithm is traditionally called "RC4-drop[n]", wherenis the number of initial keystream bytes that are dropped. The SCAN default isn= 768 bytes, but a conservative value would ben= 3072 bytes.[44]
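A small sketch of RC4-drop[n], reusing the rc4_crypt helper sketched earlier (a real implementation would simply advance the generator n times):

def rc4_drop_crypt(key: bytes, data: bytes, n: int = 3072) -> bytes:
    # Encrypt n dummy zero bytes first, so the first n keystream bytes are
    # consumed and discarded, then keep only the part covering the real data.
    return rc4_crypt(key, bytes(n) + data)[n:]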
The Fluhrer, Mantin and Shamir attack does not apply to RC4-based SSL, since SSL generates the encryption keys it uses for RC4 by hashing, meaning that different SSL sessions have unrelated keys.[45]
In 2005, Andreas Klein presented an analysis of the RC4 stream cipher, showing more correlations between the RC4 keystream and the key.[46]Erik Tews,Ralf-Philipp Weinmann, andAndrei Pychkineused this analysis to create aircrack-ptw, a tool that cracks 104-bit RC4 used in 128-bit WEP in under a minute.[47]Whereas the Fluhrer, Mantin, and Shamir attack used around 10 million messages, aircrack-ptw can break 104-bit keys in 40,000 frames with 50% probability, or in 85,000 frames with 95% probability.
A combinatorial problem related to the number of inputs and outputs of the RC4 cipher was first posed byItsik MantinandAdi Shamirin 2001, whereby, of the total 256 elements in the typical state of RC4, if only x elements (x ≤ 256) are known (all other elements can be assumed empty), then the maximum number of elements that can be produced deterministically is also x in the next 256 rounds. This conjecture was put to rest in 2004 with a formal proof given bySouradyuti PaulandBart Preneel.[48]
In 2013, a group of security researchers at the Information Security Group at Royal Holloway, University of London reported an attack that can become effective using only 2^34 encrypted messages.[49][50][51]While yet not a practical attack for most purposes, this result is sufficiently close to one that it has led to speculation that it is plausible that some state cryptologic agencies may already have better attacks that render RC4 insecure.[6]Given that, as of 2013[update], a large amount ofTLStraffic uses RC4 to avoid attacks on block ciphers that usecipher block chaining, if these hypothetical better attacks exist, then this would make the TLS-with-RC4 combination insecure against such attackers in a large number of practical scenarios.[6]
In March 2015, researchers at Royal Holloway announced improvements to their attack, providing a 2^26 attack against passwords encrypted with RC4, as used in TLS.[52]
At the Black Hat Asia 2015 Conference, Itsik Mantin presented another attack against SSL using RC4 cipher.[53][54]
In 2015, security researchers fromKU Leuvenpresented new attacks against RC4 in bothTLSandWPA-TKIP.[55]Dubbed the Numerous Occurrence MOnitoring & Recovery Exploit (NOMORE) attack, it is the first attack of its kind that was demonstrated in practice. Their attack againstTLScan decrypt a secureHTTP cookiewithin 75 hours. The attack against WPA-TKIP can be completed within an hour and allows an attacker to decrypt and inject arbitrary packets.
As mentioned above, the most important weakness of RC4 comes from the insufficient key schedule; the first bytes of output reveal information about the key. This can be corrected by simply discarding some initial portion of the output stream.[56]This is known as RC4-dropN, whereNis typically a multiple of 256, such as 768 or 1024.
A number of attempts have been made to strengthen RC4, notably Spritz, RC4A,VMPC, and RC4+.
Souradyuti PaulandBart Preneelhave proposed an RC4 variant, which they call RC4A.[57]
RC4A uses two state arraysS1andS2, and two indexesj1andj2. Each timeiis incremented, two bytes are generated:
Thus, the algorithm is:
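The original listing is not reproduced here; the following Python sketch reflects the common description of the RC4A output loop (key-based initialization of S1 and S2 is omitted, and the exact indexing should be treated as an assumption rather than a specification):

def rc4a_keystream(S1: list[int], S2: list[int], nbytes: int) -> bytes:
    # Two RC4-like states; each increment of i produces two output bytes.
    i = j1 = j2 = 0
    out = bytearray()
    while len(out) < nbytes:
        i = (i + 1) % 256
        j1 = (j1 + S1[i]) % 256
        S1[i], S1[j1] = S1[j1], S1[i]
        out.append(S2[(S1[i] + S1[j1]) % 256])   # first byte is read from S2
        j2 = (j2 + S2[i]) % 256
        S2[i], S2[j2] = S2[j2], S2[i]
        out.append(S1[(S2[i] + S2[j2]) % 256])   # second byte is read from S1
    return bytes(out[:nbytes])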
Although the algorithm requires the same number of operations per output byte, there is greater parallelism than RC4, providing a possible speed improvement.
Although stronger than RC4, this algorithm has also been attacked, with Alexander Maximov[58]and a team from NEC[59]developing ways to distinguish its output from a truly random sequence.
Variably Modified Permutation Composition (VMPC) is another RC4 variant.[60]It uses a similar key schedule to RC4, withj := S[(j + S[i] + key[i mod keylength]) mod 256]iterating 3 × 256 = 768 times rather than 256, and with an optional additional 768 iterations to incorporate an initial vector. The output generation function operates as follows:
This was attacked in the same papers as RC4A, and can be distinguished within 2^38 output bytes.[61][59]
RC4+is a modified version of RC4 with a more complex three-phase key schedule (taking about three times as long as RC4, or the same as RC4-drop512), and a more complex output function which performs four additional lookups in the S array for each byte output, taking approximately 1.7 times as long as basic RC4.[62]
This algorithm has not been analyzed significantly.
In 2014, Ronald Rivest gave a talk and co-wrote a paper[14]on an updated redesign called Spritz. A hardware accelerator of Spritz was published in Secrypt, 2016[63]and shows that due to multiple nested calls required to produce output bytes, Spritz performs rather slowly compared to other hash functions such as SHA-3 and the best known hardware implementation of RC4.
Like othersponge functions, Spritz can be used to build a cryptographic hash function, a deterministic random bit generator (DRBG), an encryption algorithm that supportsauthenticated encryptionwith associated data (AEAD), etc.[14]
In 2016, Banik and Isobe proposed an attack that can distinguish Spritz from random noise.[64]In 2017, Banik, Isobe, and Morii proposed a simple fix that removes the distinguisher in the first two keystream bytes, requiring only one additional memory access without diminishing software performance substantially.[65]
Where a protocol is marked with "(optionally)", RC4 is one of multiple ciphers the system can be configured to use.
|
https://en.wikipedia.org/wiki/RC4
|
Incryptography,SHA-1(Secure Hash Algorithm 1) is ahash functionwhich takes an input and produces a 160-bit(20-byte) hash value known as amessage digest– typically rendered as 40hexadecimaldigits. It was designed by the United StatesNational Security Agency, and is a U.S.Federal Information Processing Standard.[3]The algorithm has been cryptographically broken[4][5][6][7][8][9][10]but is still widely used.
Since 2005, SHA-1 has not been considered secure against well-funded opponents;[11]as of 2010 many organizations have recommended its replacement.[12][10][13]NISTformally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013, and declared that it should be phased out by 2030.[14]As of 2020[update],chosen-prefix attacksagainst SHA-1 are practical.[6][8]As such, it is recommended to remove SHA-1 from products as soon as possible and instead useSHA-2orSHA-3. Replacing SHA-1 is urgent where it is used fordigital signatures.
All majorweb browservendors ceased acceptance of SHA-1SSL certificatesin 2017.[15][9][4]In February 2017,CWI AmsterdamandGoogleannounced they had performed acollision attackagainst SHA-1, publishing two dissimilar PDF files which produced the same SHA-1 hash.[16][2]However, SHA-1 is still secure forHMAC.[17]
Microsofthas discontinued SHA-1 code signing support forWindows Updateon August 3, 2020,[18]which also effectively ended the update servers for versions ofWindowsthat have not been updated to SHA-2, such asWindows 2000up toVista, as well asWindows Serverversions fromWindows 2000 ServertoServer 2003.
SHA-1 produces amessage digestbased on principles similar to those used byRonald L. RivestofMITin the design of theMD2,MD4andMD5message digest algorithms, but generates a larger hash value (160 bits vs. 128 bits).
SHA-1 was developed as part of the U.S. Government'sCapstone project.[19]The original specification of the algorithm was published in 1993 under the titleSecure Hash Standard,FIPSPUB 180, by U.S. government standards agencyNIST(National Institute of Standards and Technology).[20][21]This version is now often namedSHA-0. It was withdrawn by theNSAshortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designatedSHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of itscompression function. According to the NSA, this was done to correct a flaw in the original algorithm which reduced its cryptographic security, but they did not provide any further explanation.[22][23]Publicly available techniques did indeed demonstrate a compromise of SHA-0, in 2004, before SHA-1 in 2017 (see§Attacks).
SHA-1 forms part of several widely used security applications and protocols, includingTLSandSSL,PGP,SSH,S/MIME, andIPsec. Those applications can also useMD5; both MD5 and SHA-1 are descended fromMD4.
SHA-1 and SHA-2 are the hash algorithms required by law for use in certainU.S. governmentapplications, including use within other cryptographic algorithms and protocols, for the protection of sensitive unclassified information. FIPS PUB 180-1 also encouraged adoption and use of SHA-1 by private and commercial organizations. SHA-1 is being retired from most government uses; the U.S. National Institute of Standards and Technology said, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use theSHA-2family of hash functions for these applications after 2010",[24]though that was later relaxed to allow SHA-1 to be used for verifying old digital signatures and time stamps.[24]
A prime motivation for the publication of theSecure Hash Algorithmwas theDigital Signature Standard, in which it is incorporated.
The SHA hash functions have been used for the basis of theSHACALblock ciphers.
Revision controlsystems such asGit,Mercurial, andMonotoneuse SHA-1, not for security, but to identify revisions and to ensure that the data has not changed due to accidental corruption.Linus Torvaldssaid about Git in 2007:
However Git does not require thesecond preimage resistanceof SHA-1 as a security feature, since it will always prefer to keep the earliest version of an object in case of collision, preventing an attacker from surreptitiously overwriting files.[26]The known attacks (as of 2020) also do not break second preimage resistance.[27]
For a hash function for whichLis the number of bits in the message digest, finding a message that corresponds to a given message digest can always be done using a brute force search in approximately 2^L evaluations. This is called apreimage attackand may or may not be practical depending onLand the particular computing environment. However, acollision, consisting of finding two different messages that produce the same message digest, requires on average only about 1.2 × 2^(L/2) evaluations using abirthday attack. Thus thestrengthof a hash function is usually compared to a symmetric cipher of half the message digest length. SHA-1, which has a 160-bit message digest, was originally thought to have 80-bit strength.
Some of the applications that use cryptographic hashes, like password storage, are only minimally affected by a collision attack. Constructing a password that works for a given account requires apreimage attack, as well as access to the hash of the original password, which may or may not be trivial. Reversing password encryption (e.g. to obtain a password to try against a user's account elsewhere) is not made possible by the attacks. However, even a secure password hash can't prevent brute-force attacks onweak passwords.SeePassword cracking.
In the case of document signing, an attacker could not simply fake a signature from an existing document: The attacker would have to produce a pair of documents, one innocuous and one damaging, and get the private key holder to sign the innocuous document. There are practical circumstances in which this is possible; until the end of 2008, it was possible to create forgedSSLcertificates using anMD5collision.[28]
Due to the block and iterative structure of the algorithms and the absence of additional final steps, all SHA functions (except SHA-3)[29]are vulnerable tolength-extensionand partial-message collision attacks.[30]These attacks allow an attacker to forge a message signed only by a keyed hash –SHA(key||message), but notSHA(message||key)– by extending the message and recalculating the hash without knowing the key. A simple improvement to prevent these attacks is to hash twice:SHAd(message) = SHA(SHA(0b||message))(the length of 0b, zero block, is equal to the block size of the hash function).
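A small sketch of the double-hashing construction just described, instantiated with SHA-1 (the 64-byte zero block matches SHA-1's block size; this is illustrative, not a standardized API):

import hashlib

def sha1d(message: bytes) -> bytes:
    # SHAd(message) = SHA(SHA(0b || message)), where 0b is a zero block of the
    # hash function's block size; the outer hash hides the inner state from
    # length-extension attacks.
    zero_block = bytes(64)
    return hashlib.sha1(hashlib.sha1(zero_block + message).digest()).digest()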
AtCRYPTO98, two French researchers,Florent ChabaudandAntoine Joux, presented an attack on SHA-0:collisionscan be found with complexity 2^61, fewer than the 2^80 for an ideal hash function of the same size.[31]
In 2004,Bihamand Chen found near-collisions for SHA-0 – two messages that hash to nearly the same value; in this case, 142 out of the 160 bits are equal. They also found full collisions of SHA-0 reduced to 62 out of its 80 rounds.[32]
Subsequently, on 12 August 2004, a collision for the full SHA-0 algorithm was announced by Joux, Carribault, Lemuet, and Jalby. This was done by using a generalization of the Chabaud and Joux attack. Finding the collision had complexity 2^51 and took about 80,000 processor-hours on asupercomputerwith 256Itanium 2processors (equivalent to 13 days of full-time use of the computer).
On 17 August 2004, at the Rump Session of CRYPTO 2004, preliminary results were announced byWang, Feng, Lai, and Yu, about an attack onMD5, SHA-0 and other hash functions. The complexity of their attack on SHA-0 is 2^40, significantly better than the attack by Jouxet al.[33][34]
In February 2005, an attack byXiaoyun Wang,Yiqun Lisa Yin, and Hongbo Yu was announced which could find collisions in SHA-0 in 2^39 operations.[5][35]
Another attack in 2008 applying theboomerang attackbrought the complexity of finding collisions down to 2^33.6, which was estimated to take 1 hour on an average PC from the year 2008.[36]
In light of the results for SHA-0, some experts[who?]suggested that plans for the use of SHA-1 in newcryptosystemsshould be reconsidered. After the CRYPTO 2004 results were published, NIST announced that they planned to phase out the use of SHA-1 by 2010 in favor of the SHA-2 variants.[37]
In early 2005,Vincent RijmenandElisabeth Oswaldpublished an attack on a reduced version of SHA-1 – 53 out of 80 rounds – which finds collisions with a computational effort of fewer than 2^80 operations.[38]
In February 2005, an attack byXiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced.[5]The attacks can find collisions in the full version of SHA-1, requiring fewer than 2^69 operations. (Abrute-force searchwould require 2^80 operations.)
The authors write: "In particular, our analysis is built upon the original differential attack on SHA-0, the near collision attack on SHA-0, the multiblock collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA-1 would not be possible without these powerful analytical techniques."[39]The authors have presented a collision for 58-round SHA-1, found with 233hash operations. The paper with the full attack description was published in August 2005 at the CRYPTO conference.
In an interview, Yin states that, "Roughly, we exploit the following two weaknesses: One is that the file preprocessing step is not complicated enough; another is that certain math operations in the first 20 rounds have unexpected security problems."[40]
On 17 August 2005, an improvement on the SHA-1 attack was announced on behalf ofXiaoyun Wang,Andrew YaoandFrances Yaoat the CRYPTO 2005 Rump Session, lowering the complexity required for finding a collision in SHA-1 to 2^63.[7]On 18 December 2007 the details of this result were explained and verified by Martin Cochran.[41]
Christophe De Cannière and Christian Rechberger further improved the attack on SHA-1 in "Finding SHA-1 Characteristics: General Results and Applications,"[42]receiving the Best Paper Award atASIACRYPT2006. A two-block collision for 64-round SHA-1 was presented, found using unoptimized methods with 2^35 compression function evaluations. Since this attack requires the equivalent of about 2^35 evaluations, it is considered to be a significant theoretical break.[43]Their attack was extended further to 73 rounds (of 80) in 2010 by Grechnikov.[44]In order to find an actual collision in the full 80 rounds of the hash function, however, tremendous amounts of computer time are required. To that end, a collision search for SHA-1 using the volunteer computing platformBOINCbegan August 8, 2007, organized by theGraz University of Technology. The effort was abandoned May 12, 2009 due to lack of progress.[45]
At the Rump Session of CRYPTO 2006, Christian Rechberger and Christophe De Cannière claimed to have discovered a collision attack on SHA-1 that would allow an attacker to select at least parts of the message.[46][47]
In 2008, an attack methodology by Stéphane Manuel reported hash collisions with an estimated theoretical complexity of 2^51 to 2^57 operations.[48]However he later retracted that claim after finding that local collision paths were not actually independent, and finally quoting for the most efficient a collision vector that was already known before this work.[49]
Cameron McDonald, Philip Hawkes and Josef Pieprzyk presented a hash collision attack with claimed complexity 2^52 at the Rump Session of Eurocrypt 2009.[50]However, the accompanying paper, "Differential Path for SHA-1 with complexity O(2^52)" has been withdrawn due to the authors' discovery that their estimate was incorrect.[51]
One attack against SHA-1 was by Marc Stevens,[52]with an estimated cost of $2.77M (2012) to break a single hash value by renting CPU power from cloud servers.[53]Stevens developed this attack in a project called HashClash,[54]implementing a differential path attack. On 8 November 2010, he claimed he had a fully working near-collision attack against full SHA-1 working with an estimated complexity equivalent to 2^57.5 SHA-1 compressions. He estimated this attack could be extended to a full collision with a complexity around 2^61.
On 8 October 2015, Marc Stevens, Pierre Karpman, and Thomas Peyrin published a freestart collision attack on SHA-1's compression function that requires only 2^57 SHA-1 evaluations. This does not directly translate into a collision on the full SHA-1 hash function (where an attacker isnotable to freely choose the initial internal state), but undermines the security claims for SHA-1. In particular, it was the first time that an attack on full SHA-1 had beendemonstrated; all earlier attacks were too expensive for their authors to carry them out. The authors named this significant breakthrough in thecryptanalysisof SHA-1The SHAppening.[10]
The method was based on their earlier work, as well as the auxiliary paths (or boomerangs) speed-up technique from Joux and Peyrin, and using high performance/cost efficient GPU cards fromNvidia. The collision was found on a 16-node cluster with a total of 64 graphics cards. The authors estimated that a similar collision could be found by buying US$2,000 of GPU time onEC2.[10]
The authors estimated that the cost of renting enough EC2 CPU/GPU time to generate a full collision for SHA-1 at the time of publication was between US$75K and $120K, and noted that was well within the budget of criminal organizations, not to mention nationalintelligence agencies. As such, the authors recommended that SHA-1 be deprecated as quickly as possible.[10]
On 23 February 2017, theCWI (Centrum Wiskunde & Informatica)and Google announced theSHAtteredattack, in which they generated two different PDF files with the same SHA-1 hash in roughly 2^63.1 SHA-1 evaluations. This attack is about 100,000 times faster than brute forcing a SHA-1 collision with abirthday attack, which was estimated to take 2^80 SHA-1 evaluations. The attack required "the equivalent processing power of 6,500 years of single-CPU computations and 110 years of single-GPU computations".[2]
On 24 April 2019 a paper by Gaëtan Leurent and Thomas Peyrin presented at Eurocrypt 2019 described an enhancement to the previously bestchosen-prefix attackinMerkle–Damgård–like digest functions based onDavies–Meyerblock ciphers. With these improvements, this method is capable of finding chosen-prefix collisions in approximately 2^68 SHA-1 evaluations. This is approximately 1 billion times faster (and now usable for many targeted attacks, thanks to the possibility of choosing a prefix, for example malicious code or faked identities in signed certificates) than the previous attack's 2^77.1 evaluations (but without chosen prefix, which was impractical for most targeted attacks because the found collisions were almost random)[1]and is fast enough to be practical for resourceful attackers, requiring approximately $100,000 of cloud processing. This method is also capable of finding chosen-prefix collisions in theMD5function, but at a complexity of 2^46.3 does not surpass the prior best available method at a theoretical level (2^39), though potentially at a practical level (≤2^49).[55]This attack has a memory requirement of 500+ GB.
On 5 January 2020 the authors published an improved attack called "shambles".[8]In this paper they demonstrate a chosen-prefix collision attack with a complexity of 2^63.4, that at the time of publication would cost US$45K per generated collision.
Implementations of all FIPS-approved security functions can be officially validated through theCMVP program, jointly run by theNational Institute of Standards and Technology(NIST) and theCommunications Security Establishment(CSE). For informal verification, a package to generate a high number of test vectors is made available for download on the NIST site; the resulting verification, however, does not replace the formal CMVP validation, which is required by law for certain applications.
As of December 2013[update], there are over 2000 validated implementations of SHA-1, with 14 of them capable of handling messages with a length in bits not a multiple of eight (seeSHS Validation ListArchived2011-08-23 at theWayback Machine).
These are examples of SHA-1message digestsin hexadecimal and inBase64binary toASCIItext encoding.
Even a small change in the message will, with overwhelming probability, result in many bits changing due to theavalanche effect. For example, changingdogtocogproduces a hash with different values for 81 of the 160 bits:
The hash of the zero-length string is da39a3ee5e6b4b0d3255bfef95601890afd80709.
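These examples can be reproduced with a few lines of Python; the two pangram sentences are assumed to be the inputs referred to above:

import hashlib

a = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").digest()
b = hashlib.sha1(b"The quick brown fox jumps over the lazy cog").digest()

# Count how many of the 160 output bits differ between the two digests
# (81 for these inputs, per the example above).
print(sum(bin(x ^ y).count("1") for x, y in zip(a, b)))
print(hashlib.sha1(b"").hexdigest())   # da39a3ee5e6b4b0d3255bfef95601890afd80709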
Pseudocodefor the SHA-1 algorithm follows:
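The original pseudocode is not reproduced here; the following compact Python sketch (unoptimized, for illustration only, following the published FIPS PUB 180-1 description) computes the same digest:

import struct

def sha1(message: bytes) -> str:
    # Initial hash values (FIPS PUB 180-1)
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    # Pre-processing: append the bit '1', pad with zeros, append the 64-bit length
    ml = len(message) * 8
    message += b"\x80"
    message += b"\x00" * ((56 - len(message) % 64) % 64)
    message += struct.pack(">Q", ml)

    # Process the message in successive 512-bit chunks
    for start in range(0, len(message), 64):
        w = list(struct.unpack(">16I", message[start:start + 64]))
        for i in range(16, 80):
            w.append(rotl(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1))

        a, b, c, d, e = h
        for i in range(80):
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (rotl(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF, a, rotl(b, 30), c, d

        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]

    hh = "".join(f"{x:08x}" for x in h)   # hh is the final message digest
    return hh

assert sha1(b"") == "da39a3ee5e6b4b0d3255bfef95601890afd80709"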
The numberhhis the message digest, which can be written in hexadecimal (base 16).
The chosen constant values used in the algorithm were assumed to benothing up my sleeve numbers:
Instead of the formulation from the original FIPS PUB 180-1 shown, the following equivalent expressions may be used to computefin the main loop above:
It was also shown[57]that for the rounds 32–79 the computation of:
can be replaced with:
This transformation keeps all operands 64-bit aligned and, by removing the dependency ofw[i]onw[i-3], allows efficient SIMD implementation with a vector length of 4 likex86SSEinstructions.
In the table below,internal statemeans the "internal hash sum" after each compression of a data block.
Below is a list of cryptography libraries that support SHA-1:
Hardware acceleration is provided by the following processor extensions:
In the wake of SHAttered, Marc Stevens and Dan Shumow published "sha1collisiondetection" (SHA-1CD), a variant of SHA-1 that detects collision attacks and changes the hash output when one is detected. The false positive rate is 2^−90.[64]SHA-1CD is used byGitHubsince March 2017 andgitsince version 2.13.0 of May 2017.[65]
|
https://en.wikipedia.org/wiki/SHA-1
|
TheMD5 message-digest algorithmis a widely usedhash functionproducing a 128-bithash value. MD5 was designed byRonald Rivestin 1991 to replace an earlier hash functionMD4,[3]and was specified in 1992 as RFC 1321.
MD5 can be used as achecksumto verifydata integrityagainst unintentional corruption. Historically it was widely used as acryptographic hash function; however it has been found to suffer from extensive vulnerabilities. It remains suitable for other non-cryptographic purposes, for example for determining the partition for a particular key in apartitioned database, and may be preferred due to lower computational requirements than more recentSecure Hash Algorithms.[4]
MD5 is one in a series ofmessage digestalgorithms designed by ProfessorRonald RivestofMIT(Rivest, 1992). When analytic work indicated that MD5's predecessorMD4was likely to be insecure, Rivest designed MD5 in 1991 as a secure replacement. (Hans Dobbertindid indeed later find weaknesses in MD4.)
In 1993, Den Boer and Bosselaers gave an early, although limited, result of finding a "pseudo-collision" of the MD5compression function; that is, two differentinitialization vectorsthat produce an identical digest.
In 1996, Dobbertin announced a collision of the compression function of MD5 (Dobbertin, 1996). While this was not an attack on the full MD5 hash function, it was close enough for cryptographers to recommend switching to a replacement, such asSHA-1(also compromised since) orRIPEMD-160.
The size of the hash value (128 bits) is small enough to contemplate abirthday attack.MD5CRKwas adistributed projectstarted in March 2004 to demonstrate that MD5 is practically insecure by finding a collision using a birthday attack.
MD5CRK ended shortly after 17 August 2004, whencollisionsfor the full MD5 were announced byXiaoyun Wang, Dengguo Feng,Xuejia Lai, and Hongbo Yu.[5][6]Their analytical attack was reported to take only one hour on anIBM p690cluster.[7]
On 1 March 2005,Arjen Lenstra,Xiaoyun Wang, and Benne de Weger demonstrated construction of twoX.509certificates with different public keys and the same MD5 hash value, a demonstrably practical collision.[8]The construction included private keys for both public keys. A few days later,Vlastimil Klimadescribed an improved algorithm, able to construct MD5 collisions in a few hours on a single notebook computer.[9]On 18 March 2006, Klima published an algorithm that could find a collision within one minute on a single notebook computer, using a method he calls tunneling.[10]
Various MD5-relatedRFC erratahave been published.
In 2009, theUnited States Cyber Commandused an MD5 hash value of their mission statement as a part of their official emblem.[11]
On 24 December 2010, Tao Xie and Dengguo Feng announced the first published single-block (512-bit) MD5 collision.[12](Previous collision discoveries had relied on multi-block attacks.) For "security reasons", Xie and Feng did not disclose the new attack method. They issued a challenge to the cryptographic community, offering a US$10,000 reward to the first finder of a different 64-byte collision before 1 January 2013.Marc Stevensresponded to the challenge and published colliding single-block messages as well as the construction algorithm and sources.[13]
In 2011 an informational RFC 6151[14]was approved to update the security considerations in MD5[15]and HMAC-MD5.[16]
One basic requirement of any cryptographic hash function is that it should becomputationally infeasibleto find two distinct messages that hash to the same value. MD5 fails this requirement catastrophically. On 31 December 2008, theCMU Software Engineering Instituteconcluded that MD5 was essentially "cryptographically broken and unsuitable for further use".[17]The weaknesses of MD5 have been exploited in the field, most infamously by theFlame malwarein 2012. As of 2019[update], MD5 continues to be widely used, despite its well-documented weaknesses and deprecation by security experts.[18]
Acollision attackexists that can findcollisionswithin seconds on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2^24.1).[19]Further, there is also achosen-prefix collision attackthat can produce a collision for two inputs with specified prefixes within seconds, using off-the-shelf computing hardware (complexity 2^39).[20]The ability to find collisions has been greatly aided by the use of off-the-shelfGPUs. On an NVIDIA GeForce 8400GS graphics processor, 16–18 million hashes per second can be computed. An NVIDIA GeForce 8800 Ultra can calculate more than 200 million hashes per second.[21]
These hash and collision attacks have been demonstrated in the public in various situations, including colliding document files[22][23]anddigital certificates.[24]As of 2015, MD5 was demonstrated to be still quite widely used, most notably by security research and antivirus companies.[25]
As of 2019, one quarter of widely usedcontent management systemswere reported to still use MD5 forpassword hashing.[18]
In 1996, a flaw was found in the design of MD5. While it was not deemed a fatal weakness at the time, cryptographers began recommending the use of other algorithms, such asSHA-1, which has since been found to be vulnerable as well.[26]In 2004 it was shown that MD5 is notcollision-resistant.[27]As such, MD5 is not suitable for applications likeSSLcertificatesordigital signaturesthat rely on this property for digital security. Researchers additionally discovered more serious flaws in MD5, and described a feasiblecollision attack—a method to create a pair of inputs for which MD5 produces identicalchecksums.[5][28]Further advances were made in breaking MD5 in 2005, 2006, and 2007.[29]In December 2008, a group of researchers used this technique to fakeSSL certificatevalidity.[24][30]
As of 2010, theCMU Software Engineering Instituteconsiders MD5 "cryptographically broken and unsuitable for further use",[31]and most U.S. government applications now require theSHA-2family of hash functions.[32]In 2012, theFlamemalware exploited the weaknesses in MD5 to fake a Microsoftdigital signature.[33]
In 1996, collisions were found in the compression function of MD5, andHans Dobbertinwrote in theRSA Laboratoriestechnical newsletter, "The presented attack does not yet threaten practical applications of MD5, but it comes rather close ... in the future MD5 should no longer be implemented ... where a collision-resistant hash function is required."[34]
In 2005, researchers were able to create pairs ofPostScriptdocuments[35]andX.509certificates[36]with the same hash. Later that year, MD5's designer Ron Rivest wrote that "md5 and sha1 are both clearly broken (in terms of collision-resistance)".[37]
On 30 December 2008, a group of researchers announced at the 25thChaos Communication Congresshow they had used MD5 collisions to create an intermediate certificate authority certificate that appeared to be legitimate when checked by its MD5 hash.[24]The researchers used aPS3 clusterat theEPFLinLausanne, Switzerland[38]to change a normal SSL certificate issued byRapidSSLinto a workingCA certificatefor that issuer, which could then be used to create other certificates that would appear to be legitimate and issued by RapidSSL.Verisign, the issuers of RapidSSL certificates, said they stopped issuing new certificates using MD5 as their checksum algorithm for RapidSSL once the vulnerability was announced.[39]Although Verisign declined to revoke existing certificates signed using MD5, their response was considered adequate by the authors of the exploit (Alexander Sotirov,Marc Stevens,Jacob Appelbaum,Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger).[24]Bruce Schneier wrote of the attack that "we already knew that MD5 is a broken hash function" and that "no one should be using MD5 anymore".[40]The SSL researchers wrote, "Our desired impact is that Certification Authorities will stop using MD5 in issuing new certificates. We also hope that use of MD5 in other applications will be reconsidered as well."[24]
In 2012, according toMicrosoft, the authors of theFlamemalware used an MD5 collision to forge a Windows code-signing certificate.[33]
MD5 uses theMerkle–Damgård construction, so if two prefixes with the same hash can be constructed, a common suffix can be added to both to make the collision more likely to be accepted as valid data by the application using it. Furthermore, current collision-finding techniques allow specifying an arbitraryprefix: an attacker can create two colliding files that both begin with the same content. All the attacker needs to generate two colliding files is a template file with a 128-byte block of data, aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm. An example MD5 collision, with the two messages differing in 6 bytes, is:
Both produce the MD5 hash79054025255fb1a26e4bc422aef54eb4.[41]The difference between the two samples is that the leading bit in eachnibblehas been flipped. For example, the 20th byte (offset 0x13) in the top sample, 0x87, is 10000111 in binary. The leading bit in the byte (also the leading bit in the first nibble) is flipped to make 00000111, which is 0x07, as shown in the lower sample.
Later it was also found to be possible to construct collisions between two files with separately chosen prefixes. This technique was used in the creation of the rogue CA certificate in 2008. A new variant of parallelized collision searching usingMPIwas proposed by Anton Kuznetsov in 2014, which allowed finding a collision in 11 hours on a computing cluster.[42]
In April 2009, an attack against MD5 was published that breaks MD5'spreimage resistance. This attack is only theoretical, with a computational complexity of 2^123.4 for full preimage.[43][44]
MD5 digests have been widely used in thesoftwareworld to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 (known asmd5sum)checksumfor the files, so that a user can compare the checksum of the downloaded file to it. Most unix-based operating systems include MD5 sum utilities in their distribution packages; Windows users may use the includedPowerShellfunction "Get-FileHash", the included command line function "certutil -hashfile <filename> md5",[45][46]install a Microsoft utility,[47][48]or use third-party applications. Android ROMs also use this type of checksum.
As it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. In some cases, the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files.
Historically, MD5 has been used to store a one-way hash of apassword, often withkey stretching.[49][50]NIST does not include MD5 in their list of recommended hashes for password storage.[51]
MD5 is also used in the field ofelectronic discovery, to provide a unique identifier for each document that is exchanged during the legal discovery process. This method can be used to replace theBates stampnumbering system that has been used for decades during the exchange of paper documents. As above, this usage should be discouraged due to the ease of collision attacks.
MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit words); the message ispaddedso that its length is divisible by 512. The padding works as follows: first, a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with 64 bits representing the length of the original message, modulo 2^64.
The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denotedA,B,C, andD. These are initialized to certain fixed constants. The main algorithm then uses each 512-bit message block in turn to modify the state. The processing of a message block consists of four similar stages, termedrounds; each round is composed of 16 similar operations based on a non-linear functionF, modular addition, and left rotation. Figure 1 illustrates one operation within a round. There are four possible functions; a different one is used in each round:
⊕,∧,∨,¬{\displaystyle \oplus ,\wedge ,\vee ,\neg }denote theXOR,AND,ORandNOToperations respectively.
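For concreteness, the four round functions can be written in Python as follows (the names F, G, H, I follow RFC 1321; the mask keeps all values within 32 bits):

MASK = 0xFFFFFFFF  # keep all words in 32 bits

def F(b, c, d): return (b & c) | (~b & MASK & d)    # used in round 1
def G(b, c, d): return (b & d) | (c & (~d & MASK))  # used in round 2
def H(b, c, d): return b ^ c ^ d                    # used in round 3
def I(b, c, d): return c ^ (b | (~d & MASK))        # used in round 4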
The MD5 hash is calculated according to this algorithm.[52]All values are inlittle-endian.
Instead of the formulation from the original RFC 1321 shown, the following may be used for improved efficiency (useful if assembly language is being used – otherwise, the compiler will generally optimize the above code. Since each computation is dependent on another in these formulations, this is often slower than the above method where the nand/and can be parallelised):
The 128-bit (16-byte) MD5 hashes (also termedmessage digests) are typically represented as a sequence of 32hexadecimaldigits. The following demonstrates a 43-byteASCIIinput and the corresponding MD5 hash:
Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to theavalanche effect. For example, adding a period to the end of the sentence:
The hash of the zero-length string is d41d8cd98f00b204e9800998ecf8427e.
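Both behaviours can be checked with Python's hashlib; the pangram below is assumed to be the 43-byte input referred to above:

import hashlib

msg = b"The quick brown fox jumps over the lazy dog"   # 43 ASCII bytes
print(hashlib.md5(msg).hexdigest())           # digest of the sample sentence
print(hashlib.md5(msg + b".").hexdigest())    # appending a period changes most bits
print(hashlib.md5(b"").hexdigest())           # d41d8cd98f00b204e9800998ecf8427e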
The MD5 algorithm is specified for messages consisting of any number of bits; it is not limited to multiples of eight bits (octets,bytes). Some MD5 implementations such asmd5summight be limited to octets, or they might not supportstreamingfor messages of an initially undetermined length.
Below is a list of cryptography libraries that support MD5:
|
https://en.wikipedia.org/wiki/MD5
|
TheMD4 Message-Digest Algorithmis acryptographic hash functiondeveloped byRonald Rivestin 1990.[3]The digest length is 128 bits. The algorithm has influenced later designs, such as theMD5,SHA-1andRIPEMDalgorithms. The initialism "MD" stands for "Message Digest".
The security of MD4 has been severely compromised. The first fullcollision attackagainst MD4 was published in 1995, and several newer attacks have been published since then. As of 2007, an attack can generate collisions in less than two MD4 hash operations.[2]A theoreticalpreimage attackalso exists.
A variant of MD4 is used in theed2k URI schemeto provide a unique identifier for a file in the popular eDonkey2000 / eMule P2P networks. MD4 was also used by thersyncprotocol (prior to version 3.0.0).
MD4 is used to computeNTLMpassword-derived key digests on Microsoft Windows NT, XP, Vista, 7, 8, 10 and 11.[4]
Weaknesses in MD4 were demonstrated by Den Boer and Bosselaers in a paper published in 1991.[5]The first full-round MD4collision attackwas found byHans Dobbertinin 1995, which took only seconds to carry out at that time.[6]In August 2004,Wanget al. found a very efficient collision attack, alongside attacks on later hash function designs in the MD4/MD5/SHA-1/RIPEMD family. This result was improved later by Sasaki et al., and generating a collision is now as cheap as verifying it (a few microseconds).[2]
In 2008, thepreimage resistanceof MD4 was also broken by Gaëtan Leurent, with a 2^102 attack.[7]In 2010 Guo et al. published a 2^99.7 attack.[8]
In 2011, RFC 6150 stated that RFC 1320 (MD4) ishistoric(obsolete).
The 128-bit (16-byte) MD4 hashes (also termedmessage digests) are typically represented as 32-digithexadecimalnumbers. The following demonstrates a 43-byteASCIIinput and the corresponding MD4 hash:
Even a small change in the message will (with overwhelming probability) result in a completely different hash, e.g. changingdtoc:
The hash of the zero-length string is:
The following test vectors are defined in RFC 1320 (The MD4 Message-Digest Algorithm)
Let:
Note that two hex-digits of k1 and k2 define one byte of the input string, whose length is 64 bytes.
|
https://en.wikipedia.org/wiki/MD4
|
Incryptanalysisandcomputer security, adictionary attackis an attack using a restricted subset of a keyspace to defeat acipheror authentication mechanism by trying to determine its decryption key orpassphrase, sometimes trying thousands or millions of likely possibilities[1]often obtained from lists of past security breaches.
A dictionary attack is based on trying all the strings in a pre-arranged listing. Such attacks originally used words found in a dictionary (hence the phrasedictionary attack);[2]however, now there are much larger lists available on the open Internet containing hundreds of millions of passwords recovered from past data breaches.[3]There is also cracking software that can use such lists and produce common variations, such assubstituting numbers for similar-looking letters. A dictionary attack tries only those possibilities which are deemed most likely to succeed. Dictionary attacks often succeed because many people have a tendency to choose short passwords that are ordinary words or common passwords; or variants obtained, for example, by appending a digit or punctuation character. Dictionary attacks are often successful, since many commonly used password creation techniques are covered by the available lists, combined with cracking software pattern generation. A safer approach is to randomly generate a long password (15 letters or more) or a multiwordpassphrase, using apassword managerprogram or manually typing a password.
Dictionary attacks can be deterred by the server administrator by using a more computationally expensive hashing algorithm.Bcrypt,scrypt, andArgon2are examples of such resource intensive functions that require significant computational power to process,[4]allowing for large improvements in security against dictionary attacks. While other hashing functions, such asSHAandMD5, are much faster and less expensive to compute, they can still be strengthened by being applied multiple times to an input string through a process calledkey stretching. An attacker would have to know approximately how many times the function was applied for a dictionary attack to be feasible.
It is possible to achieve atime–space tradeoffbypre-computinga list ofhashesof dictionary words and storing these in a database using the hash as thekey. This requires a considerable amount of preparation time, but this allows the actual attack to be executed faster. The storage requirements for the pre-computed tables were once a major cost, but now they are less of an issue because of the low cost ofdisk storage. Pre-computed dictionary attacks are particularly effective when a large number of passwords are to be cracked. The pre-computed dictionary needs to be generated only once, and when it is completed, password hashes can be looked up almost instantly at any time to find the corresponding password. A more refined approach involves the use ofrainbow tables, which reduce storage requirements at the cost of slightly longer lookup-times.SeeLM hashfor an example of anauthentication systemcompromised by such an attack.
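A toy sketch of the pre-computation idea, with unsalted MD5 as an example target (the word list and the stolen hash are placeholders):

import hashlib

wordlist = ["123456", "password", "qwerty", "letmein"]  # placeholder dictionary

# Pre-compute once: map each candidate's hash back to the candidate itself.
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

def lookup(stolen_hash: str):
    # Each stolen (unsalted) hash then becomes a single dictionary lookup.
    return table.get(stolen_hash)

print(lookup(hashlib.md5(b"qwerty").hexdigest()))  # prints "qwerty"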
Pre-computed dictionary attacks, or "rainbow table attacks", can be thwarted by the use ofsalt, a technique that forces the hash dictionary to be recomputed for each password sought, makingprecomputationinfeasible, provided that the number of possible salt values is large enough.[5]
|
https://en.wikipedia.org/wiki/Dictionary_attack
|
Password strengthis a measure of the effectiveness of apasswordagainst guessing orbrute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability.[1]
Using strong passwords lowers the overallriskof a security breach, but strong passwords do not replace the need for other effectivesecurity controls.[2]The effectiveness of a password of a given strength is strongly determined by the design and implementation of theauthentication factors(knowledge, ownership, inherence). The first factor is the main focus of this article.
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of othervulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system must store information about the user's passwords in some form, and if that information is stolen, say by breaching system security, the user's passwords can be at risk.
In 2019, the United Kingdom'sNCSCanalyzed public databases of breached accounts to see which words, phrases, and strings people used. The most popular password on the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password", and 1111111.[3]
Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against abrute-force attackcan be calculated with precision, determining the strength of human-generated passwords is difficult.
Typically, humans are asked to choose a password, sometimes guided by suggestions or restricted by a set of rules, when creating a new account for a computer system or internet website. Only rough estimates of strength are possible since humans tend to follow patterns in such tasks, and those patterns can usually assist an attacker.[4]In addition, lists of commonly chosen passwords are widely available for use by password-guessing programs. Such lists include the numerous online dictionaries for various human languages, breached databases ofplaintextandhashedpasswords from various online business and social accounts, along with other common passwords. All items in such lists are considered weak, as are passwords that are simple modifications of them.
Although random password generation programs are available and are intended to be easy to use, they usually generate random, hard-to-remember passwords, often resulting in people preferring to choose their own. However, this is inherently insecure because the person's lifestyle, entertainment preferences, and other key individualistic qualities usually come into play to influence the choice of password, while the prevalence of onlinesocial mediahas made obtaining information about people much easier.
Systems that use passwords forauthenticationmust have some way to check any password entered to gain access. If the valid passwords are simply stored in a system file or database, an attacker who gains sufficient access to the system will obtain all user passwords, giving the attacker access to all accounts on the attacked system and possibly other systems where users employ the same or similar passwords. One way to reduce this risk is to store only acryptographic hashof each password instead of the password itself. Standard cryptographic hashes, such as theSecure Hash Algorithm(SHA) series, are very hard to reverse, so an attacker who gets hold of the hash value cannot directly recover the password. However, knowledge of the hash value lets the attacker quickly test guesses offline.Password crackingprograms are widely available that will test a large number of trial passwords against a purloined cryptographic hash.
Improvements in computing technology keep increasing the rate at which guessed passwords can be tested. For example, in 2010, theGeorgia Tech Research Institutedeveloped a method of usingGPGPUto crack passwords much faster.[5]Elcomsoftinvented the usage of common graphic cards for quicker password recovery in August 2007 and soon filed a corresponding patent in the US.[6]By 2011, commercial products were available that claimed the ability to test up to 112,000 passwords per second on a standard desktop computer, using a high-end graphics processor for that time.[7]Such a device will crack a six-letter single-case password in one day. The work can be distributed over many computers for an additional speedup proportional to the number of available computers with comparable GPUs. Specialkey stretchinghashes are available that take a relatively long time to compute, reducing the rate at which guessing can take place. Although it is considered best practice to use key stretching, many common systems do not.
Another situation where quick guessing is possible is when the password is used to form acryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data. For example, one commercial product claims to test 103,000WPAPSK passwords per second.[8]
If a password system only stores the hash of the password, an attacker can pre-compute hash values for common password variants and all passwords shorter than a certain length, allowing very rapid recovery of the password once its hash is obtained. Very long lists of pre-computed password hashes can be efficiently stored usingrainbow tables. This method of attack can be foiled by storing a random value, called acryptographic salt, along with the hash. The salt is combined with the password when computing the hash, so an attacker precomputing a rainbow table would have to store for each password its hash with every possible salt value. This becomes infeasible if the salt has a big enough range, say a 32-bit number. Many authentication systems in common use do not employ salts and rainbow tables are available on the Internet for several such systems.
Password strength is specified by the amount ofinformation entropy, which is measured inshannon(Sh) and is a concept frominformation theory. It can be regarded as the minimum number ofbitsnecessary to hold the information in a password of a given type. A related measure is thebase-2 logarithmof the number of guesses needed to find the password with certainty, which is commonly referred to as the "bits of entropy".[9]A password with 42 bits of entropy would be as strong as a string of 42 bits chosen randomly, for example by afair cointoss. Put another way, a password with 42 bits of entropy would require 2^42 (4,398,046,511,104) attempts to exhaust all possibilities during abrute force search. Thus, increasing the entropy of the password by one bit doubles the number of guesses required, making an attacker's task twice as difficult. On average, an attacker will have to try half the possible number of passwords before finding the correct one.[4]
Random passwords consist of a string of symbols of specified length taken from some set of symbols using a random selection process in which each symbol is equally likely to be selected. The symbols can be individual characters from a character set (e.g., theASCIIcharacter set), syllables designed to form pronounceable passwords or even words from a word list (thus forming apassphrase).
The strength of random passwords depends on the actual entropy of the underlying number generator; however, these are often not truly random, but pseudorandom. Many publicly available password generators use random number generators found in programming libraries that offer limited entropy. However, most modern operating systems offer cryptographically strong random number generators that are suitable for password generation. It is also possible to use ordinarydiceto generate random passwords(seeRandom password generator § Stronger methods). Random password programs often can ensure that the resulting password complies with a localpassword policy; for instance, by always producing a mix of letters, numbers, and special characters.
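For illustration, a small sketch using Python's standard `secrets` module, which draws from the operating system's cryptographically strong generator; the length, alphabet, and the simple policy check are arbitrary choices for the example, not a recommended policy.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=16):
    """Draw each character independently from the OS's cryptographically
    strong random number generator."""
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

def policy_password(length=16):
    """Retry until the result contains a letter, a digit and a symbol,
    mimicking a simple local password policy."""
    while True:
        pw = random_password(length)
        if (any(c.isalpha() for c in pw) and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(policy_password())
```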
For passwords generated by a process that randomly selects a string of symbols of length,L, from a set ofNpossible symbols, the number of possible passwords can be found by raising the number of symbols to the powerL, i.e.NL. Increasing eitherLorNwill strengthen the generated password. The strength of a random password as measured by theinformation entropyis just thebase-2 logarithmor log2of the number of possible passwords, assuming each symbol in the password is produced independently. Thus a random password's information entropy,H, is given by the formula:
H=log2NL=Llog2N=LlogNlog2{\displaystyle H=\log _{2}N^{L}=L\log _{2}N=L{\log N \over \log 2}}
whereNis the number of possible symbols andLis the number of symbols in the password.His measured inbits.[4][10]In the last expression,logcan be to anybase.
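The formula can be evaluated directly; the symbol-set sizes below (26 lowercase letters, 94 printable ASCII characters) are just common illustrative cases.

```python
import math

def entropy_bits(length, symbols):
    """H = L * log2(N): entropy of a truly random password of `length`
    symbols drawn uniformly and independently from `symbols` possibilities."""
    return length * math.log2(symbols)

print(entropy_bits(8, 26))   # lowercase letters only: about 37.6 bits
print(entropy_bits(8, 94))   # all printable ASCII:    about 52.4 bits
```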
Abinarybyteis usually expressed using two hexadecimal characters.
To find the length,L,needed to achieve a desired strengthH,with a password drawn randomly from a set ofNsymbols, one computes:
L=⌈Hlog2N⌉{\displaystyle L={\left\lceil {\frac {H}{\log _{2}N}}\right\rceil }}
where⌈⌉{\displaystyle \left\lceil \ \right\rceil }denotes the mathematicalceiling function,i.e.rounding up to the next largestwhole number.
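A direct computation of this length for a couple of illustrative symbol sets and a 128-bit entropy target:

```python
import math

def required_length(target_bits, symbols):
    """L = ceil(H / log2(N)): characters needed to reach `target_bits` of entropy."""
    return math.ceil(target_bits / math.log2(symbols))

print(required_length(128, 26))   # 28 lowercase letters
print(required_length(128, 94))   # 20 printable-ASCII characters
```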
The following table uses this formula to show the required lengths of truly randomly generated passwords to achieve desired password entropies for common symbol sets:
People are notoriously poor at achieving sufficient entropy to produce satisfactory passwords. According to one study involving half a million users, the average password entropy was estimated at 40.54 bits.[11]
For example, in one analysis of over 3 million eight-character passwords, the letter "e" was used over 1.5 million times, while the letter "f" was used only 250,000 times. Auniform distributionwould have had each character being used about 900,000 times. The most common number used is "1", whereas the most common letters are a, e, o, and r.[12]
Users rarely make full use of larger character sets in forming passwords. For example, hacking results obtained from a MySpace phishing scheme in 2006 revealed 34,000 passwords, of which only 8.3% used mixed case, numbers, and symbols.[13]
The full strength associated with using the entire ASCII character set (numerals, mixed case letters, and special characters) is only achieved if each possible password is equally likely. This seems to suggest that all passwords must contain characters from each of several character classes, perhaps upper and lower-case letters, numbers, and non-alphanumeric characters. Such a requirement is a pattern in password choice and can be expected to reduce an attacker's "work factor" (in Claude Shannon's terms). This is a reduction in password "strength". A better requirement would be to require a passwordnotto contain any word in an online dictionary, or list of names, or any license plate pattern from any state (in the US) or country (as in the EU). If patterned choices are required, humans are likely to use them in predictable ways, such as capitalizing a letter, adding one or two numbers, and a special character. This predictability means that the increase in password strength is minor when compared to random passwords.
Password Safety Awareness Projects
Google developed Interland to teach children about internet safety through a game. In the chapter calledTower of Treasure, players are advised to use unusual names paired with characters like (₺&@#%).[14]
NISTSpecial Publication 800-63 of June 2004 (revision two) suggested a scheme to approximate the entropy of human-generated passwords:[4]
Using this scheme, an eight-character human-selected password without uppercase characters and non-alphabetic characters, or with either but not both of the two character sets, is estimated to have eighteen bits of entropy. The NIST publication concedes that at the time of development, little information was available on the real-world selection of passwords. Later research into human-selected password entropy using newly available real-world data has demonstrated that the NIST scheme does not provide a valid metric for entropy estimation of human-selected passwords.[15]The June 2017 revision of SP 800-63 (Revision three) drops this approach.[16]
Because national keyboard implementations vary, not all 94 ASCII printable characters can be used everywhere. This can present a problem to an international traveler who wishes to log into a remote system using a keyboard on a local computer(see article concerned withkeyboard layouts). Many handheld devices, such astablet computersandsmart phones, require complex shift sequences or keyboard app swapping to enter special characters.
Authentication programs can vary as to the list of allowable password characters. Some do not recognize case differences (e.g., the upper-case "E" is considered equivalent to the lower-case "e"), and others prohibit some of the other symbols. In the past few decades, systems have permitted more characters in passwords, but limitations still exist. Systems also vary as to the maximum length of passwords allowed.
As a practical matter, passwords must be both reasonable and functional for the end user as well as strong enough for the intended purpose. Passwords that are too difficult to remember may be forgotten and so are more likely to be written on paper, which some consider a security risk.[17]In contrast, others argue that forcing users to remember passwords without assistance can only accommodate weak passwords, and thus poses a greater security risk. According toBruce Schneier, most people are good at securing their wallets or purses, which is a "great place" to store a written password.[18]
The minimum number of bits of entropy needed for a password depends on thethreat modelfor the given application. Ifkey stretchingis not used, passwords with more entropy are needed. RFC 4086, "Randomness Requirements for Security", published June 2005, presents some example threat models and how to calculate the entropy desired for each one.[19]Their answers vary between 29 bits of entropy needed if only online attacks are expected, and up to 96 bits of entropy needed for important cryptographic keys used in applications like encryption where the password or key needs to be secure for a long period and stretching isn't applicable. A 2010Georgia Tech Research Institutestudy based on unstretched keys recommended a 12-character random password as a minimum length requirement.[5][20]It pays to bear in mind that since computing power continually grows, to prevent offline attacks the required number of bits of entropy should also increase over time.
The upper end is related to the stringent requirements of choosing keys used in encryption. In 1999,an Electronic Frontier Foundation projectbroke 56-bitDESencryption in less than a day using specially designed hardware.[21]In 2002,distributed.netcracked a 64-bit key in 4 years, 9 months, and 23 days.[22]As of October 12, 2011,distributed.netestimates that cracking a 72-bit key using current hardware will take about 45,579 days or 124.8 years.[23]Due to currently understood limitations from fundamental physics, there is no expectation that anydigital computer(or combination) will be capable of breaking 256-bit encryption via a brute-force attack.[24]Whether or notquantum computerswill be able to do so in practice is still unknown, though theoretical analysis suggests such possibilities.[25]
Guidelines for choosing good passwords are typically designed to make passwords harder to discover by intelligent guessing. Common guidelines advocated by proponents of software system security have included:[26][27][28][29][30]
Forcing the inclusion of lowercase letters, uppercase letters, numbers, and symbols in passwords was a common policy but has been found to decrease security, by making it easier to crack. Research has shown how predictable the common use of such symbols is, and the US[34]and UK[35]government cyber security departments advise against forcing their inclusion in password policy. Complex symbols also make remembering passwords much harder, which increases writing down, password resets, and password reuse – all of which lower rather than improve password security. The original author of password complexity rules, Bill Burr, has apologized and admits they decrease security, as research has found; this was widely reported in the media in 2017.[36]Online security researchers[37]and consultants are also supportive of the change[38]in best practice advice on passwords.
Some guidelines advise against writing passwords down, while others, noting the large numbers of password-protected systems users must access, encourage writing down passwords as long as the written password lists are kept in a safe place, not attached to a monitor or in an unlocked desk drawer.[39]Use of apassword manageris recommended by the NCSC.[40]
The possible character set for a password can be constrained by different websites or by the range of keyboards on which the password must be entered.[41]
As with any security measure, passwords vary in strength; some are weaker than others. For example, the difference in strength between a dictionary word and a word with obfuscation (e.g. letters in the password substituted by, say, numbers, a common approach) may cost a password-cracking device a few more seconds; this adds little strength. The examples below illustrate various ways weak passwords might be constructed, all of which are based on simple patterns which result in extremely low entropy, allowing them to be tested automatically at high speeds:[12]
There are many other ways a password can be weak,[44]corresponding to the strengths of various attack schemes; the core principle is that a password should have high entropy (usually taken to be equivalent to randomness) andnotbe readily derivable by any "clever" pattern, nor should passwords be mixed with information identifying the user. Online services often provide a password-recovery function that a hacker may be able to work out and thereby bypass the password entirely.
In the landscape of 2012, as delineated byWilliam Cheswickin an article for ACM magazine, password security predominantly emphasized an alpha-numeric password of eight characters or more. Such a password, it was deduced, could resist ten million attempts per second for a duration of 252 days. However, with the assistance of contemporary GPUs at the time, this period was truncated to just about 9 hours, given a cracking rate of 7 billion attempts per second. A 13-character password was estimated to withstand GPU-computed attempts for over 900,000 years.[45][46]
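These figures can be reproduced with simple arithmetic, assuming the password is drawn from the 62-character alphanumeric set (a–z, A–Z, 0–9):

```python
ALPHANUMERIC = 62

keyspace_8  = ALPHANUMERIC ** 8
keyspace_13 = ALPHANUMERIC ** 13

print(keyspace_8 / 10_000_000 / 86_400)            # ~252 days at 10 million guesses/s
print(keyspace_8 / 7_000_000_000 / 3_600)          # ~8.7 hours at 7 billion guesses/s
print(keyspace_13 / 7_000_000_000 / 86_400 / 365)  # ~907,000 years for 13 characters
```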
In the context of 2023 hardware technology, the 2012 standard of an eight-character alpha-numeric password has become vulnerable, succumbing in a few hours. The time needed to crack a 13-character password is reduced to a few years. The current emphasis, thus, has shifted. Password strength is now gauged not just by its complexity but its length, with recommendations leaning towards passwords comprising at least 13-16 characters. This era has also seen the rise of Multi-Factor Authentication (MFA) as a crucial fortification measure. The advent and widespread adoption of password managers have further aided users in cultivating and maintaining an array of strong, unique passwords.[47]
A password policy is a guide to choosing satisfactory passwords. It is intended to:
Older password policies prescribed the characters which passwords must contain, such as numbers, symbols, or upper/lower case. While this is still in use, it has been debunked as less secure by university research,[48]by the original instigator[49]of this policy, and by the cyber security departments (and other related government security bodies[50]) of the US[51]and UK.[52]Password complexity rules of enforced symbols were previously used by major platforms such as Google[53]and Facebook,[54]but these have removed the requirement following the discovery that they actually reduced security. This is because the human element is a far greater risk than cracking, and enforced complexity leads most users to highly predictable patterns (number at the end, swap 3 for E, etc.), which helps crack passwords. So password simplicity and length (passphrases) are the new best practice and complexity is discouraged. Forced complexity rules also increase support costs and user friction, and discourage user signups.
Password expiration was part of some older password policies, but it has been debunked[36]as best practice and is not supported by the US or UK governments, or by Microsoft, which removed[55]the password-expiry feature. Password expiration previously tried to serve two purposes:[56]
However, password expiration has its drawbacks:[57][58]
The hardest passwords to crack, for a given length and character set, are random character strings; if long enough they resist brute force attacks (because there are many characters) and guessing attacks (due to high entropy). However, such passwords are typically the hardest to remember. The imposition of a requirement for such passwords in a password policy may encourage users to write them down, store them inmobile devices, or share them with others as a safeguard against memory failure. While some people consider each of these user resorts to increase security risks, others suggest the absurdity of expecting users to remember distinct complex passwords for each of the dozens of accounts they access. For example, in 2005, security expertBruce Schneierrecommended writing down one's password:
Simply, people can no longer remember passwords good enough to reliably defend against dictionary attacks, and are much more secure if they choose a password too complicated to remember and then write it down. We're all good at securing small pieces of paper. I recommend that people write their passwords down on a small piece of paper, and keep it with their other valuable small pieces of paper: in their wallet.[39]
The following measures may increase acceptance of strong password requirements if carefully used:
Password policies sometimes suggestmemory techniquesto assist remembering passwords:
A reasonable compromise for using large numbers of passwords is to record them in a password manager program, which may be a stand-alone application, a web browser extension, or a manager built into the operating system. A password manager allows the user to use hundreds of different passwords, and only have to remember a single password, the one which opens the encrypted password database.[65]Needless to say, this single password should be strong and well-protected (not recorded anywhere). Most password managers can automatically create strong passwords using a cryptographically securerandom password generator, as well as calculating the entropy of the generated password. A good password manager will provide resistance against attacks such askey logging, clipboard logging and various other memory spying techniques.
|
https://en.wikipedia.org/wiki/Password_strength#Dictionary_attack
|
Inalgebra, aunitorinvertible element[a]of aringis aninvertible elementfor the multiplication of the ring. That is, an elementuof a ringRis a unit if there existsvinRsuch thatvu=uv=1,{\displaystyle vu=uv=1,}where1is themultiplicative identity; the elementvis unique for this property and is called themultiplicative inverseofu.[1][2]The set of units ofRforms agroupR×under multiplication, called thegroup of unitsorunit groupofR.[b]Other notations for the unit group areR∗,U(R), andE(R)(from the German termEinheit).
Less commonly, the termunitis sometimes used to refer to the element1of the ring, in expressions likering with a unitorunit ring, and alsounit matrix. Because of this ambiguity,1is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of arng.
The multiplicative identity1and its additive inverse−1are always units. More generally, anyroot of unityin a ringRis a unit: ifrn= 1, thenrn−1is a multiplicative inverse ofr.
In anonzero ring, theelement 0is not a unit, soR×is not closed under addition.
A nonzero ringRin which every nonzero element is a unit (that is,R×=R∖ {0}) is called adivision ring(or a skew-field). A commutative division ring is called afield. For example, the unit group of the field ofreal numbersRisR∖ {0}.
In the ring ofintegersZ, the only units are1and−1.
In the ringZ/nZofintegers modulon, the units are the congruence classes(modn)represented by integerscoprimeton. They constitute themultiplicative group of integers modulon.
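A small Python sketch (illustrative only) that lists these units by checking coprimality and computes an inverse with the built-in modular `pow`:

```python
from math import gcd

def unit_group(n):
    """Units of Z/nZ: residue classes of integers coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def inverse_mod(a, n):
    """Multiplicative inverse of a unit a modulo n (Python 3.8+)."""
    return pow(a, -1, n)

print(unit_group(12))       # [1, 5, 7, 11], the multiplicative group mod 12
print(inverse_mod(7, 12))   # 7, since 7 * 7 = 49 = 4 * 12 + 1
```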
In the ringZ[√3]obtained by adjoining thequadratic integer√3toZ, one has(2 +√3)(2 −√3) = 1, so2 +√3is a unit, and so are its powers, soZ[√3]has infinitely many units.
More generally, for thering of integersRin anumber fieldF,Dirichlet's unit theoremstates thatR×is isomorphic to the groupZn×μR{\displaystyle \mathbf {Z} ^{n}\times \mu _{R}}whereμR{\displaystyle \mu _{R}}is the (finite, cyclic) group of roots of unity inRandn, therankof the unit group, isn=r1+r2−1,{\displaystyle n=r_{1}+r_{2}-1,}wherer1,r2{\displaystyle r_{1},r_{2}}are the number of real embeddings and the number of pairs of complex embeddings ofF, respectively.
This recovers theZ[√3]example: The unit group of (the ring of integers of) areal quadratic fieldis infinite of rank 1, sincer1=2,r2=0{\displaystyle r_{1}=2,r_{2}=0}.
For a commutative ringR, the units of thepolynomial ringR[x]are the polynomialsp(x)=a0+a1x+⋯+anxn{\displaystyle p(x)=a_{0}+a_{1}x+\dots +a_{n}x^{n}}such thata0is a unit inRand the remaining coefficientsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}arenilpotent, i.e., satisfyaiN=0{\displaystyle a_{i}^{N}=0}for someN.[4]In particular, ifRis adomain(or more generallyreduced), then the units ofR[x]are the units ofR.
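For a concrete illustration, take R = Z/4Z, where 2 is nilpotent. The polynomial 1 + 2x is then a unit of R[x], and in fact is its own inverse, since $(1+2x)^{2} = 1 + 4x + 4x^{2} = 1$ in $(\mathbf{Z}/4\mathbf{Z})[x]$.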
The units of thepower series ringR[[x]]{\displaystyle R[[x]]}are the power seriesp(x)=∑i=0∞aixi{\displaystyle p(x)=\sum _{i=0}^{\infty }a_{i}x^{i}}such thata0is a unit inR.[5]
The unit group of the ringMn(R)ofn×nmatricesover a ringRis the groupGLn(R)ofinvertible matrices. For a commutative ringR, an elementAofMn(R)is invertible if and only if thedeterminantofAis invertible inR. In that case,A−1can be given explicitly in terms of theadjugate matrix.
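As a small worked illustration over the integers: a matrix $A={\begin{pmatrix}a&b\\c&d\end{pmatrix}}$ is a unit of $M_{2}(\mathbf{Z})$ exactly when $\det A = ad-bc = \pm 1$, and the adjugate formula gives $A^{-1}=(ad-bc)^{-1}{\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}$. For example, ${\begin{pmatrix}2&1\\1&1\end{pmatrix}}$ has determinant 1 and inverse ${\begin{pmatrix}1&-1\\-1&2\end{pmatrix}}$.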
For elementsxandyin a ringR, if1−xy{\displaystyle 1-xy}is invertible, then1−yx{\displaystyle 1-yx}is invertible with inverse1+y(1−xy)−1x{\displaystyle 1+y(1-xy)^{-1}x};[6]this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:(1−yx)−1=∑n≥0(yx)n=1+y(∑n≥0(xy)n)x=1+y(1−xy)−1x.{\displaystyle (1-yx)^{-1}=\sum _{n\geq 0}(yx)^{n}=1+y\left(\sum _{n\geq 0}(xy)^{n}\right)x=1+y(1-xy)^{-1}x.}SeeHua's identityfor similar results.
Acommutative ringis alocal ringifR∖R×is amaximal ideal.
As it turns out, ifR∖R×is an ideal, then it is necessarily amaximal idealandRislocalsince amaximal idealis disjoint fromR×.
IfRis afinite field, thenR×is acyclic groupof order|R| − 1.
Everyring homomorphismf:R→Sinduces agroup homomorphismR×→S×, sincefmaps units to units. In fact, the formation of the unit group defines afunctorfrom thecategory of ringsto thecategory of groups. This functor has aleft adjointwhich is the integralgroup ringconstruction.[7]
Thegroup schemeGL1{\displaystyle \operatorname {GL} _{1}}is isomorphic to themultiplicative group schemeGm{\displaystyle \mathbb {G} _{m}}over any base, so for any commutative ringR, the groupsGL1(R){\displaystyle \operatorname {GL} _{1}(R)}andGm(R){\displaystyle \mathbb {G} _{m}(R)}are canonically isomorphic toU(R). Note that the functorGm{\displaystyle \mathbb {G} _{m}}(that is,R↦U(R)) isrepresentablein the sense:Gm(R)≃Hom(Z[t,t−1],R){\displaystyle \mathbb {G} _{m}(R)\simeq \operatorname {Hom} (\mathbb {Z} [t,t^{-1}],R)}for commutative ringsR(this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of the ring homomorphismsZ[t,t−1]→R{\displaystyle \mathbb {Z} [t,t^{-1}]\to R}and the set of unit elements ofR(in contrast,Z[t]{\displaystyle \mathbb {Z} [t]}represents the additive groupGa{\displaystyle \mathbb {G} _{a}}, theforgetful functorfrom the category of commutative rings to thecategory of abelian groups).
Suppose thatRis commutative. ElementsrandsofRare calledassociateif there exists a unituinRsuch thatr=us; then writer~s. In any ring, pairs ofadditive inverseelements[c]xand−xareassociate, since any ring includes the unit−1. For example, 6 and −6 are associate inZ. In general,~is anequivalence relationonR.
Associatedness can also be described in terms of theactionofR×onRvia multiplication: Two elements ofRare associate if they are in the sameR×-orbit.
In anintegral domain, the set of associates of a given nonzero element has the samecardinalityasR×.
The equivalence relation~can be viewed as any one ofGreen's semigroup relationsspecialized to the multiplicativesemigroupof a commutative ringR.
|
https://en.wikipedia.org/wiki/Unit_(ring_theory)
|
Radio-frequency identification(RFID) useselectromagnetic fieldsto automaticallyidentifyandtracktags attached to objects. An RFID system consists of a tiny radiotranspondercalled a tag, aradio receiver, and atransmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually anidentifying inventory number, back to the reader. This number can be used to trackinventorygoods.[1]
Passive tags are powered by energy from the RFID reader's interrogatingradio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike abarcode, the tag does not need to be within theline of sightof the reader, so it may be embedded in the tracked object. RFID is one method ofautomatic identification and data capture(AIDC).[2]
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through theassembly line,[citation needed]RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed]andimplanting RFID microchipsin livestock and pets enables positive identification of animals.[3]Tags can also be used in shops to expedite checkout, and toprevent theftby customers and employees.[4]
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information withoutconsenthas raised seriousprivacyconcerns.[5]These concerns resulted in standard specifications development addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6]
In 1945,Leon Theremininventedthe "Thing", a listening devicefor theSoviet Unionwhich retransmitted incident radio waves with the added audio information. Sound waves vibrated adiaphragmwhich slightly altered the shape of theresonator, which modulated the reflected radio frequency. Even though this device was acovert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7]
Similar technology, such as theIdentification friend or foetransponder, was routinely used by the Allies and Germany inWorld War IIto identify aircraft as friendly or hostile.Transpondersare still used by most powered aircraft.[8]An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9]who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10]as it was a passive radio transponder with memory.[11]The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to theNew York Port Authorityand other potential users. It consisted of a transponder with 16bitmemory for use as atoll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system,electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10]
In 1973, an early demonstration ofreflected power(modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at theLos Alamos National Laboratory.[12]The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13]
In 1983, the first patent to be associated with the abbreviation RFID was granted toCharles Walton.[14]
In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15]
A radio-frequency identification system usestags, orlabelsattached to the objects to be identified. Two-way radio transmitter-receivers calledinterrogatorsorreaderssend a signal to the tag and read its response.[16]
RFID tags are made out of three pieces:
The tag information is stored in a non-volatile memory.[17]The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data, respectively.[citation needed]
RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17]A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than an active tag for signal transmission.[18]
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19]
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
RFID systems can be classified by the type of tag and reader. There are 3 types:[20]
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In thisnear fieldregion, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach. The tag canbackscattera signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27]
AnElectronic Product Code(EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like aURL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28]
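As a rough sketch of how such a field layout can be unpacked, the Python snippet below splits a 96-bit integer into the four fields described above; the packing order (header in the most significant bits) and the sample value are assumptions for illustration, not the official EPC encoding rules.

```python
def parse_epc96(value):
    """Unpack a 96-bit integer into (header, manager, object class, serial),
    assuming the 8-bit header occupies the most significant bits."""
    serial       = value & (2**36 - 1)
    object_class = (value >> 36) & (2**24 - 1)
    manager      = (value >> 60) & (2**28 - 1)
    header       = (value >> 88) & 0xFF
    return header, manager, object_class, serial

# Arbitrary sample value, packed the same way:
epc = (0x30 << 88) | (1234567 << 60) | (54321 << 36) | 987654321
print(parse_epc96(epc))   # (48, 1234567, 54321, 987654321)
```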
Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to"singulate"a particular tag, allowing its data to be read in the midst of many similar tags. In aslotted Alohasystem, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29]
Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed]
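A toy simulation of the slotted-Aloha idea described above; the slot count and tag IDs are arbitrary, and real readers adapt the number of slots dynamically rather than keeping it fixed.

```python
import random

def slotted_aloha(tag_ids, num_slots=8):
    """Toy model of slotted-Aloha singulation: each round, every unread tag
    picks a random slot; only slots holding exactly one tag are read."""
    unread = set(tag_ids)
    rounds = 0
    while unread:
        rounds += 1
        slots = {}
        for tag in unread:
            slots.setdefault(random.randrange(num_slots), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:        # no collision: reader singulates this tag
                unread.discard(occupants[0])
    return rounds

print(slotted_aloha(range(20)))   # rounds needed to read 20 tags collision-free
```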
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.[30]
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but rather one of visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupledHFRFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31]
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?]suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is afuzzymethod for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32]
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers atBristol Universitysuccessfully glued RFID micro-transponders to liveantsin order to study their behavior.[33]This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34]Manufacture is enabled by using thesilicon-on-insulator(SOI) process. These dust-sized chips can store 38-digit numbers using 128-bitRead Only Memory(ROM).[35]A major challenge is the attachment of antennas, thus limiting read range to only millimeters.
In early 2020, MIT researchers demonstrated aterahertzfrequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially a piece of silicon that are inexpensive, small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37]
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or use ofbarcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38]
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39]Battery-Assisted Passive (BAP) tags were in the US$3–10 range.[citation needed]
RFID can be used in a variety of applications,[40][41]such as:
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards were driven by EPCglobal, a joint venture betweenGS1and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by theAuto-ID Center.[45]
RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improvesupply chain management.[citation needed]Warehouse management systems[clarification needed]incorporate this technology to speed up the receiving and delivery of the products and reduce the cost of labor needed in their warehouses.[46]
RFID is used foritem-level taggingin retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done atLululemon, though physically locating items in stores requires more expensive technology.[47]RFID tags can be used at checkout; for example, at some stores of the French retailerDecathlon, customers performself-checkoutby either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner.[47]Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms atChaneland the "Color Bar" atKendra Scottstores.[47]
Item tagging can also provide protection against theft by customers and employees by usingelectronic article surveillance(EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48]On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item, and identifying what it is.
Casinos can use RFID to authenticatepoker chips, and can selectively invalidate any chips known to be stolen.[49]
RFID tags are widely used inidentification badges, replacing earliermagnetic stripecards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed]
In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.[50]
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?]
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at thePGA Golf Championships,[51]and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed]
To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.[53][when?]
Yard management, shipping and freight and distribution centers use RFID tracking. In therailroadindustry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54]
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56]
Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59]
RFID is used inintelligent transportation systems. InNew York City, RFID readers are deployed at intersections to trackE-ZPasstags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used inadaptive traffic controlof the traffic lights.[60]
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61]
At least one company has introduced RFID to identify and locate underground infrastructure assets such asgaspipelines,sewer lines, electrical cables, communication cables, etc.[62]
The first RFID passports ("E-passport") were issued byMalaysiain 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed]
Other countries that insert RFID in passports include Norway (2005),[63]Japan (March 1, 2006), mostEUcountries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64]and Israel (2017).
Standards for RFID passports are determined by theInternational Civil Aviation Organization(ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to theISO/IEC 14443RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in newUnited States passportsstore the same information that is printed within the passport, and include a digital picture of the owner.[65]The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66]the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers toskiminformation when the passport is closed. The department will also implementBasic Access Control(BAC), which functions as apersonal identification number(PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.[67]
In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways.
Somebike lockersare operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.[citation needed]
TheZipcarcar-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68]
In Singapore, RFID replaces paper Season Parking Ticket (SPT).[69]
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, since the outbreak ofmad-cow disease, RFID has become crucial inanimal identificationmanagement. Animplantable RFID tagortranspondercan also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals.[70]TheCanadian Cattle Identification Agencybegan using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used inWisconsinand by United States farmers on a voluntary basis. TheUSDAis currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.[71]
Biocompatiblemicrochip implantsthat use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artistEduardo Kacin 1997.[72][73]Kac implanted the microchip live on television (and also live on the Internet) in the context of his artworkTime Capsule.[74]A year later, British professor ofcyberneticsKevin Warwickhad an RFID chip implanted in his arm by hisgeneral practitioner, George Boulos.[75][76]In 2004, the 'Baja Beach Club' operated byConrad ChaseinBarcelona[77]andRotterdamoffered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009, British scientistMark Gassonhad an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78]
TheFood and Drug Administrationin the United States approved the use of RFID chips in humans in 2004.[79]
There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80]and to the emergence of an "ultimatepanopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81]
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82]
The UFO religionUniverse Peopleis notorious online for their vocal opposition to human RFID chipping, which they claim is asaurianattempt to enslave the human race; one of their web domains is "dont-get-chipped".[83][84][85]
Adoption of RFID in the medical industry has been widespread and very effective.[86]Hospitals are among the first users to combine both active and passive RFID.[87]Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification.[88]Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89]TheU.S. Department of Veterans Affairs (VA)recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90]
Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93]The use of RFID to prevent mix-ups betweenspermandovainIVFclinics is also being considered.[94]
In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp. can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activistsKatherine AlbrechtandLiz McIntyrediscovered anFDA Warning Letterthat spelled out health risks.[95]According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as asecuritydevice, taking the place of the more traditionalelectromagnetic security strip.[96]
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in theVatican LibraryinRome.[97]
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on aconveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98]However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: 12,500 each, detection porches 10,000 each; tags 0.36 each). RFID taking a large burden off staff could also mean that fewer staff will be needed, resulting in some of them getting laid off,[97]but that has so far not happened in North America where recent surveys have not returned a single library that cut staff because of adding RFID.[citation needed][99]In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[citation needed][99]Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[citation needed][99]A finding in the Netherlands is that borrowers are pleased with the fact that staff are now more available for answering questions.[citation needed][99]
Privacy concerns have been raised surrounding library use of RFID.[100][101]Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information,[102]and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96]Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.[citation needed]
RFID technologies are now[when?]also implemented in end-user applications in museums.[103]An example was the custom-designed temporary research application, "eXspot", at theExploratorium, a science museum inSan Francisco,California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104]
In 2004, school authorities in the Japanese city ofOsakamade a decision to start chipping children's clothing, backpacks, and student IDs in a primary school.[105]Later, in 2007, a school inDoncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106][when?]St Charles Sixth Form Collegein westLondon, England, starting in 2008, uses an RFID card system to check in and out of the main gate, to both track attendance and prevent unauthorized entrance. Similarly,Whitcliffe Mount SchoolinCleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, during 2012, some schools already[when?]use RFID in IDs for borrowing books.[107][unreliable source?]Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99]
RFID for timing racesbegan in the early 1990s with pigeon racing, introduced by the companyDeister Electronicsin Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.[citation needed]
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error,[clarification needed]lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.[clarification needed]
The design of the chip and of the antenna controls the range from which it can be read. Short range compact chips are twist tied to the shoe, or strapped to the ankle withhook-and-loop fasteners. The chips must be about 400 mm from the mat, therefore giving very good temporal resolution. Alternatively, a chip plus a very large (125mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft).[citation needed]
Passive and active RFID systems are used in off-road events such asOrienteering,Enduroand Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver which is connected to a computer and log their lap time.[citation needed]
RFID is being[when?]adapted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where the candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number ofski resortshave adopted RFID tags to provide skiers hands-free access toski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110]
The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111]The chip triangulates the player's position within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public. The data will be available via the NFL 2015 app.[112]The RFID chips are manufactured by Zebra Technologies. Zebra Technologies tested the RFID chip in 18 stadiums last year[when?]to track vector data.[113]
RFID tags are often a complement, but not a substitute, forUniversal Product Code(UPC) orEuropean Article Number(EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient. An example is airlineboarding passes. The newEPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require manyterabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at package level with UPC or EAN from unique barcodes.
The unique identity is a mandatory requirement for RFID tags, despite special choice of the numbering scheme. RFID tag data capacity is large enough that each individual tag will have a unique code, while current barcodes are limited to a single type code for a particular product. The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but also contributes to concern about tracking and profiling of persons after the sale.
Since around 2007, there has been increasing development in the use of RFID[when?]in thewaste managementindustry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114]The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks.[115]RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT)municipal solid wasteusage-pricing models.
Active RFID tags have the potential to function as low-cost remote sensors that broadcasttelemetryback to a base station. Applications of tagometry data could include sensing of road conditions by implantedbeacons, weather reports, and noise level monitoring.[116]
Passive RFID tags can also report sensor data. For example, theWireless Identification and Sensing Platformis a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.[citation needed]
To avoid injuries to humans and animals, RF transmission needs to be controlled.[117]A number of organizations have set standards for RFID, including theInternational Organization for Standardization(ISO), theInternational Electrotechnical Commission(IEC),ASTM International, theDASH7Alliance andEPCglobal.[118]
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT Assets with RFID, the Computer Technology Industry AssociationCompTIAfor certifying RFID engineers, and theInternational Air Transport Association(IATA) for luggage in airports.[citation needed]
Every country can set its own rules forfrequency allocationfor RFID tags, and not all radio bands are available in all countries. These frequencies are known as theISM bands(Industrial Scientific and Medical bands). The return signal of the tag may still causeinterferencefor other radio users.[citation needed]
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power.[citation needed]In Europe, RFID and other low-power radio applications are regulated byETSIrecommendationsEN 300 220andEN 302 208, andEROrecommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz.[citation needed]Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current[when?]research. The North American UHF standard is not accepted in France as it interferes with its military bands.[citation needed]On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.[citation needed]
In some countries, a site license is needed, which needs to be applied for at the local authorities, and can be revoked.[citation needed]
As of 31 October 2014, regulations are in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119]
Standards that have been made regarding RFID include:
In order to ensure global interoperability of products, several organizations have set up additional standards forRFID testing. These standards include conformance, performance and interoperability tests.[citation needed]
EPC Gen2 is short forEPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture betweenGS1and GS1 US, is working on international standards for the use of mostly passive RFID and theElectronic Product Code(EPC) in the identification of many items in thesupply chainfor companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121]
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention fromIntermecthat the standard may infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty free.[122]The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123]
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124]
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125]
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts[example needed]have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data to significant processed data.[citation needed]
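As an illustration of the kind of filtering such middleware performs, the following Python sketch collapses repeated reads of the same tag at the same reader within a time window into a single observation. The class name, field names and the 60-second window are illustrative assumptions, not taken from any RFID middleware standard.

    from collections import defaultdict

    class ReadEventFilter:
        """Toy event filter: collapse repeated reads of the same tag at the same
        reader within a time window into a single observation.
        Names and the window length are illustrative, not from any RFID standard."""

        def __init__(self, window_seconds=60):
            self.window = window_seconds
            self.last_seen = defaultdict(lambda: float("-inf"))

        def process(self, tag_id, reader_id, timestamp):
            key = (tag_id, reader_id)
            if timestamp - self.last_seen[key] >= self.window:
                self.last_seen[key] = timestamp
                return {"tag": tag_id, "reader": reader_id, "time": timestamp}
            return None  # redundant read within the window, dropped

    f = ReadEventFilter(window_seconds=60)
    events = [("TAG1", "dock-door", t) for t in (0, 5, 20, 90)]
    kept = [e for e in events if f.process(*e)]
    assert len(kept) == 2   # reads at t=0 and t=90 survive; t=5 and t=20 are duplicates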
The frequencies used for UHF RFID in the USA are as of 2007 incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as thebarcode.[126]To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to theUnited States Department of Defense's recent[when?]adoption of RFID tags forsupply chain management.[127]More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly as a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128]payment and eID (e-passport) systems operate at a shorter range than EPC RFID systems but are also vulnerable toskimmingand eavesdropping, albeit at shorter distances.[129]
A second method of prevention is by using cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil monitor-repetition of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission.[clarification needed]Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public key cryptography.[130]
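The general challenge–response idea can be sketched as follows. This is an illustrative Python sketch only, using an HMAC over a random nonce; it is not the protocol of any particular tag standard, and the shared secret held by both tag and reader is an assumption made for the example.

    import hmac, hashlib, os

    # Hypothetical shared secret provisioned into both the tag and the reader's backend.
    TAG_KEY = os.urandom(16)

    def reader_challenge():
        """Reader sends a fresh random nonce, so a recorded reply cannot be replayed."""
        return os.urandom(8)

    def tag_response(key, challenge):
        """Tag answers with a MAC over the challenge, computed from its secret key."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def reader_verify(key, challenge, response):
        return hmac.compare_digest(tag_response(key, challenge), response)

    nonce = reader_challenge()
    reply = tag_response(TAG_KEY, nonce)
    assert reader_verify(TAG_KEY, nonce, reply)
    # A recorded reply fails against a fresh challenge (with overwhelming probability):
    assert not reader_verify(TAG_KEY, reader_challenge(), reply)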
While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support very low-power and therefore simple security protocols such as cover-coding.[131]
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132]Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130]Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133]as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002.[134]There are also concerns that the database structure ofObject Naming Servicemay be susceptible to infiltration, similar todenial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135]
Microchip–induced tumours have been noted during animal trials.[136][137]
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S.General Services Administration(GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138]For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139]The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140]Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use ofEMVchips rather than RFID makes this sort of theft rare.[141][142]
There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating aFaraday cage, does work.[143]Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144]
Shielding effectiveness depends on the frequency being used. Low-frequency (LowFID) tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency (HighFID) tags (13.56 MHz, used in smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. UHF (Ultra-HighFID) tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.[145]
The use of RFID has engendered considerable controversy and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns are that tags can be read at a distance without the owner's knowledge, and that tags remain readable after the tagged item has been purchased.[citation needed]
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used forsurveillanceand other purposes unrelated to their supply chain inventory functions.[146]
The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded, and let RF engineers demonstrate how RFID works.[147]They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of theClipped Tag. The Clipped Tag is an RFID tag designed to increase privacy for the purchaser of an item. The Clipped Tag has been suggested byIBMresearchersPaul Moskowitzand Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149]
In January 2004, privacy advocates from CASPIAN and the German privacy groupFoeBuDwere invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customerloyalty cardscontained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.[150]
During the UNWorld Summit on the Information Society(WSIS) in November 2005,Richard Stallman, the founder of thefree software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151]
In 2004–2005, theFederal Trade Commissionstaff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152]
RFID was one of the main topics of the 2006Chaos Communication Congress(organized by theChaos Computer ClubinBerlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The groupmonochromstaged a "Hack RFID" song.[153]
Some individuals have grown to fear the loss of rights due to RFID human implantation.
By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from aUS passport cardby using only $250 worth of equipment. This suggests that with the information captured, it would be possible to clone such cards.[154]
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155]In the bookSpyChips: How Major Corporations and Government Plan to Track Your Every Moveby Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156]
According to an RSA laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157]however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags andEPCtags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However the time required is extremely short (a second or two of radiation) and the method works in many other non-electronic and inanimate items, long before heat or fire become of concern.[158]
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person that wants to "kill" the tag.
UHF RFID tags that comply with the EPC Class 1 Generation 2 standard usually support this mechanism, protecting the kill command with a password.[159]Guessing or cracking the 32-bit password needed to kill a tag would not be difficult for a determined attacker.[160]
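A back-of-the-envelope calculation illustrates how small a 32-bit password space is by cryptographic standards. The guess rates below are assumptions for illustration only; the rate achievable in practice depends on the air interface and on any throttling applied by the tag.

    # Rough size of a 32-bit kill-password space; guess rates are assumptions.
    keyspace = 2**32                      # ~4.29 billion candidates
    expected_tries = keyspace // 2        # on average, half the space must be searched
    for rate in (100, 10_000, 1_000_000): # assumed guesses per second
        days = expected_tries / rate / 86_400
        print(f"at {rate:>9,} guesses/s: ~{days:,.1f} days on average")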
|
https://en.wikipedia.org/wiki/Radio-frequency_identification
|
AES most often refers to:
AES may also refer to:
|
https://en.wikipedia.org/wiki/AES
|
For AES-128, the key can be recovered with a computational complexity of 2^126.1 using the biclique attack. For biclique attacks on AES-192 and AES-256, the computational complexities of 2^189.7 and 2^254.4 respectively apply. Related-key attacks can break AES-256 and AES-192 with complexities 2^99.5 and 2^176 in both time and data, respectively.[2]
TheAdvanced Encryption Standard(AES), also known by its original nameRijndael(Dutch pronunciation:[ˈrɛindaːl]),[5]is a specification for theencryptionof electronic data established by the U.S.National Institute of Standards and Technology(NIST) in 2001.[6]
AES is a variant of the Rijndaelblock cipher[5]developed by twoBelgiancryptographers,Joan DaemenandVincent Rijmen, who submitted a proposal[7]to NIST during theAES selection process.[8]Rijndael is a family of ciphers with differentkeyandblock sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by theU.S. government. It supersedes theData Encryption Standard(DES),[9]which was published in 1977. The algorithm described by AES is asymmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S.FIPSPUB 197 (FIPS 197) on November 26, 2001.[6]This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable.[note 3]
AES is included in theISO/IEC18033-3standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S.Secretary of CommerceDonald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessiblecipherapproved by the U.S.National Security Agency(NSA) fortop secretinformation when used in an NSA approved cryptographic module.[note 4]
The Advanced Encryption Standard (AES) is defined in each of FIPS PUB 197 and ISO/IEC 18033-3.
AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware.[11]Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael per se is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particular finite field.
AES operates on a 4 × 4 column-major order array of 16 bytes b0, b1, ..., b15 termed the state:[note 5]
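A minimal sketch of this column-major arrangement in Python (the helper names are illustrative):

    def bytes_to_state(block):
        """Arrange a 16-byte block b0..b15 into the 4x4 state in column-major order:
        state[row][col] = block[row + 4*col]."""
        assert len(block) == 16
        return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

    def state_to_bytes(state):
        """Inverse mapping back to a flat 16-byte sequence."""
        return bytes(state[r][c] for c in range(4) for r in range(4))

    block = bytes(range(16))            # b0..b15
    state = bytes_to_state(block)
    assert state[1][2] == 9             # byte b9 sits at row 1, column 2
    assert state_to_bytes(state) == block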
The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is 10 for 128-bit keys, 12 for 192-bit keys, and 14 for 256-bit keys.
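Expressed as a small lookup (the values are the standard round counts, also stated later in this article):

    # Number of AES rounds by key length in bits
    AES_ROUNDS = {128: 10, 192: 12, 256: 14}

    def num_rounds(key_bits):
        return AES_ROUNDS[key_bits]

    assert num_rounds(128) == 10 and num_rounds(192) == 12 and num_rounds(256) == 14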
Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key.
In the SubBytes step, each byte ai,j{\displaystyle a_{i,j}} in the state array is replaced with a SubByte S(ai,j){\displaystyle S(a_{i,j})} using an 8-bit substitution box. Before round 0, the state array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(ai,j)≠ai,j{\displaystyle S(a_{i,j})\neq a_{i,j}}, and also any opposite fixed points, i.e., S(ai,j)⊕ai,j≠FF16{\displaystyle S(a_{i,j})\oplus a_{i,j}\neq {\text{FF}}_{16}}.
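A sketch of this construction in Python, computing one S-box entry from the multiplicative inverse in GF(2^8) followed by the affine transformation (the assertions use the well-known values S(0x00) = 0x63 and S(0x53) = 0xED):

    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def gf_inverse(a):
        """Multiplicative inverse in GF(2^8); 0 maps to 0 by convention.
        A brute-force search is fine for a 256-element field."""
        if a == 0:
            return 0
        for x in range(1, 256):
            if gf_mul(a, x) == 1:
                return x

    def aes_sbox(a):
        """S-box value: multiplicative inverse followed by the affine transform
        b_i = x_i ^ x_{i+4} ^ x_{i+5} ^ x_{i+6} ^ x_{i+7} ^ c_i with c = 0x63."""
        x = gf_inverse(a)
        result = 0
        for i in range(8):
            bit = (((x >> i) & 1) ^ ((x >> ((i + 4) % 8)) & 1)
                   ^ ((x >> ((i + 5) % 8)) & 1) ^ ((x >> ((i + 6) % 8)) & 1)
                   ^ ((x >> ((i + 7) % 8)) & 1) ^ ((0x63 >> i) & 1))
            result |= bit << i
        return result

    assert aes_sbox(0x00) == 0x63
    assert aes_sbox(0x53) == 0xED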
While performing the decryption, theInvSubBytesstep (the inverse ofSubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse.
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively.[note 6]In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
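A minimal Python sketch of the row rotation (rows are represented as lists of four bytes; the names are illustrative):

    def shift_rows(state):
        """Cyclically shift row r of the 4x4 state left by r positions (row 0 unchanged)."""
        return [state[r][r:] + state[r][:r] for r in range(4)]

    state = [
        [0x00, 0x01, 0x02, 0x03],   # row 0: unchanged
        [0x10, 0x11, 0x12, 0x13],   # row 1: rotated left by 1
        [0x20, 0x21, 0x22, 0x23],   # row 2: rotated left by 2
        [0x30, 0x31, 0x32, 0x33],   # row 3: rotated left by 3
    ]
    assert shift_rows(state)[1] == [0x11, 0x12, 0x13, 0x10]
    assert shift_rows(state)[3] == [0x33, 0x30, 0x31, 0x32]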
In theMixColumnsstep, the four bytes of each column of the state are combined using an invertiblelinear transformation. TheMixColumnsfunction takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together withShiftRows,MixColumnsprovidesdiffusionin the cipher.
During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state):
Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order x7{\displaystyle x^{7}}. Addition is simply XOR. Multiplication is modulo the irreducible polynomial x8+x4+x3+x+1{\displaystyle x^{8}+x^{4}+x^{3}+x+1}. If processed bit by bit, then, after shifting, a conditional XOR with 1B16 should be performed if the shifted value is larger than FF16 (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in GF(28){\displaystyle \operatorname {GF} (2^{8})}.
In a more general sense, each column is treated as a polynomial over GF(28){\displaystyle \operatorname {GF} (2^{8})} and is then multiplied modulo 0116⋅z4+0116{\displaystyle {01}_{16}\cdot z^{4}+{01}_{16}} with a fixed polynomial c(z)=0316⋅z3+0116⋅z2+0116⋅z+0216{\displaystyle c(z)={03}_{16}\cdot z^{3}+{01}_{16}\cdot z^{2}+{01}_{16}\cdot z+{02}_{16}}. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]{\displaystyle \operatorname {GF} (2)[x]}. The MixColumns step can also be viewed as a multiplication by the shown particular MDS matrix in the finite field GF(28){\displaystyle \operatorname {GF} (2^{8})}. This process is described further in the article Rijndael MixColumns.
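A sketch of the column transformation in Python, using the standard trick that multiplication by 3 is multiplication by 2 XORed with the original byte; the column db 13 53 45 mapping to 8e 4d a1 bc is a commonly cited test vector:

    def xtime(b):
        """Multiply a byte by x (i.e. by 2) in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
        b <<= 1
        if b & 0x100:
            b ^= 0x11B
        return b & 0xFF

    def mix_single_column(col):
        """Apply the MixColumns matrix (rows 2 3 1 1 / 1 2 3 1 / 1 1 2 3 / 3 1 1 2)
        to one 4-byte column."""
        a0, a1, a2, a3 = col
        return [
            xtime(a0) ^ (xtime(a1) ^ a1) ^ a2 ^ a3,   # 2*a0 + 3*a1 + a2 + a3
            a0 ^ xtime(a1) ^ (xtime(a2) ^ a2) ^ a3,   # a0 + 2*a1 + 3*a2 + a3
            a0 ^ a1 ^ xtime(a2) ^ (xtime(a3) ^ a3),   # a0 + a1 + 2*a2 + 3*a3
            (xtime(a0) ^ a0) ^ a1 ^ a2 ^ xtime(a3),   # 3*a0 + a1 + a2 + 2*a3
        ]

    assert mix_single_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]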
In theAddRoundKeystep, the subkey is combined with the state. For each round, a subkey is derived from the mainkeyusingRijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining of the state with the corresponding byte of the subkey using bitwiseXOR.
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining theSubBytesandShiftRowssteps with theMixColumnsstep by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in theAddRoundKeystep.[12]Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations.
Using a byte-oriented approach, it is possible to combine theSubBytes,ShiftRows, andMixColumnssteps into a single round operation.[13]
TheNational Security Agency(NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protectclassified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.[14]
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
For cryptographers, a cryptographic "break" is anything faster than a brute-force attack, i.e., performing one trial decryption for each possible key in sequence (see Cryptanalysis § Computational resources required). A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006.[15]
The key space increases by a factor of 2 for each additional bit of key length; if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable.
AES has a fairly simple algebraic framework.[16]In 2002, a theoretical attack, named the "XSL attack", was announced byNicolas CourtoisandJosef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components.[17]Since then, other papers have shown that the attack, as originally presented, is unworkable; seeXSL attack on block ciphers.
During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications."[18]In October 2000, however, at the end of the AES selection process,Bruce Schneier, a developer of the competing algorithmTwofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic."[19]
By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.[20]
Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5.[2]This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys.[21]However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness.
Another attack was blogged by Bruce Schneier[3]on July 30, 2009, and released as a preprint[22]on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
The practicality of these attacks with stronger related keys has been criticized,[23]for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010.[24]
In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint.[25]This known-key distinguishing attack is an improvement of the rebound, or start-from-the-middle, attack against AES-like permutations, which views two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48 and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128.
The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011.[26]The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192, and 2^254.3 for AES-256 by Biaoshuai Tao and Hongjun Wu in a 2015 paper,[27]which are the current best results in key-recovery attacks against AES.
This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate that the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016.[28]A paper in 2015 later improved the space complexity to 2^56 bits,[27]which is 9007 terabytes (while still keeping a time complexity of approximately 2^126).
According to theSnowden documents, the NSA is doing research on whether a cryptographic attack based ontau statisticmay help to break AES.[29]
At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented.[citation needed]
Side-channel attacksdo not attack the cipher as ablack box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES.
In April 2005,D. J. Bernsteinannounced a cache-timing attack that he used to break a custom server that usedOpenSSL's AES encryption.[30]The attack required over 200 million chosen plaintexts.[31]The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples."[30]
In October 2005, Dag Arne Osvik,Adi ShamirandEran Tromerpresented a paper demonstrating several cache-timing attacks against the implementations in AES found in OpenSSL and Linux'sdm-cryptpartition encryption function.[32]One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that useddifferential fault analysisand allows recovery of a key with a complexity of 232.[33]
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL.[34]Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.[35]
In March 2016, C. Ashokkumar, Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions.[36]The proposed attack requires standard user privilege and key-retrieval algorithms run under a minute.
Many modern CPUs have built-inhardware instructions for AES, which protect against timing-related side-channel attacks.[37][38]
AES-256 is considered to bequantum resistant, as it has similar quantum resistance to AES-128's resistance against traditional, non-quantum, attacks at 128bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes. AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure.[39][40]
TheCryptographic Module Validation Program(CMVP) is operated jointly by the United States Government'sNational Institute of Standards and Technology(NIST) Computer Security Division and theCommunications Security Establishment(CSE) of the Government of Canada. The use of cryptographic modules validated to NISTFIPS 140-2is required by the United States Government for encryption of all data that has a classification ofSensitive but Unclassified(SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2."[41]
The Government of Canada also recommends the use ofFIPS 140validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such asTriple DESorSHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP)[42]allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page.[43]This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.[41]
FIPS 140-2 validation is challenging to achieve both technically and fiscally.[44]There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US)[44]and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors.[note 7]
High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bitsmart cardsto high-performance computers.
On aPentium Pro, AES encryption requires 18 clock cycles per byte (cpb),[45]equivalent to a throughput of about 11 MiB/s for a 200 MHz processor.
OnIntel CoreandAMD RyzenCPUs supportingAES-NI instruction setextensions, throughput can be multiple GiB/s.[46]On an IntelWestmereCPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256.[47]
|
https://en.wikipedia.org/wiki/Rijndael
|
Confidentialityinvolves a set of rules or a promise sometimes executed throughconfidentiality agreementsthat limits the access to or places restrictions on the distribution of certain types ofinformation.
By law, lawyers are often required to keep confidential anything on the representation of a client. The duty of confidentiality is much broader than theattorney–client evidentiary privilege, which only coverscommunicationsbetween the attorney and the client.[1]
Both the privilege and the duty serve the purpose of encouraging clients to speak frankly about their cases. This way, lawyers can carry out their duty to provide clients with zealous representation. Otherwise, the opposing side may be able to surprise the lawyer in court with something he did not know about his client, which may weaken the client's position. Also, a distrustful client might hide a relevant fact he thinks is incriminating, but that a skilled lawyer could turn to the client's advantage (for example, by raisingaffirmative defenseslike self-defense). However, most jurisdictions have exceptions for situations where the lawyer has reason to believe that the client may kill or seriously injure someone, may cause substantial injury to the financial interest or property of another, or is using (or seeking to use) the lawyer's services to perpetrate a crime or fraud. In such situations the lawyer has the discretion, but not the obligation, to disclose information designed to prevent the planned action. Most states have a version of this discretionary disclosure rule under Rules of Professional Conduct, Rule 1.6 (or its equivalent). A few jurisdictions have made this traditionally discretionary duty mandatory. For example, see the New Jersey and Virginia Rules of Professional Conduct, Rule 1.6.
In some jurisdictions, the lawyer must try to convince the client to conform his or her conduct to the boundaries of the law before disclosing any otherwise confidential information. These exceptions generally do not cover crimes that have already occurred, even in extreme cases where murderers have confessed the location of missing bodies to their lawyers but the police are still looking for those bodies. TheU.S. Supreme Courtand manystate supreme courtshave affirmed the right of a lawyer to withhold information in such situations. Otherwise, it would be impossible for any criminal defendant to obtain a zealous defense.
California is famous for having one of the strongest duties of confidentiality in the world; its lawyers must protect client confidences at "every peril to himself [or herself]" under former California Business and Professions Code section 6068(e). Until an amendment in 2004 (which turned subsection (e) into subsection (e)(1) and added subsection (e)(2) to section 6068), California lawyers were not even permitted to disclose that a client was about to commit murder or assault. The Supreme Court of California promptly amended the California Rules of Professional Conduct to conform to the new exception in the revised statute. Recent legislation in the UK curtails the confidentiality professionals like lawyers and accountants can maintain at the expense of the state.[2]Accountants, for example, are required to disclose to the state any suspicions of fraudulent accounting and, even, the legitimate use of tax saving schemes if those schemes are not already known to the tax authorities.
The "three traditional requirements of the cause of action for breach of confidence"[3]: [19]were identified byMegarry JinCoco v A N Clark (Engineers) Ltd(1968) in the following terms:[4]
In my judgment, three elements are normally required if, apart from contract, a case of breach of confidence is to succeed. First, the information itself, in the words of Lord Greene, M.R. in theSaltmancase on page 215, must "have the necessary quality of confidence about it." Secondly, that information must have been imparted in circumstances importing an obligation of confidence. Thirdly, there must be an unauthorised use of that information to the detriment of the party communicating it.
The 1896 case featuring the royalaccoucheurDrWilliam Smoult Playfairshowed the difference between lay and medical views. Playfair was consulted by Linda Kitson; he ascertained that she had been pregnant while separated from her husband. He informed his wife, a relative of Kitson's, in order that she protect herself and their daughters from moral contagion. Kitson sued, and the case gained public notoriety, with huge damages awarded against the doctor.[5]
Confidentiality is commonly applied to conversations between doctors and patients. Legal protections prevent physicians from revealing certain discussions with patients, even under oath in court.[6]Thisphysician-patient privilegeonly applies to secrets shared between physician and patient during the course of providing medical care.[6][7]
The rule dates back to at least theHippocratic Oath, which reads in part:Whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret.
Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice.[8]
Confidentiality is standard in the United States byHIPAAlaws, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many American states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[9]Confidentiality can be protected in medical research viacertificates of confidentiality.
Due to theEUDirective 2001/20/EC, inspectors appointed by the Member States have to maintain confidentiality whenever they gain access to confidential information as a result of thegood clinical practiceinspections in accordance with applicable national and international requirements.[10]
A typical patient declaration might read:
I have been informed of the benefit that I gain from the protection and the rights granted by the European Union Data Protection Directive and other national laws on the protection of my personal data. I agree that the representatives of the sponsor or possibly the health authorities can have access to my medical records. My participation in the study will be treated as confidential. I will not be referred to by my name in any report of the study. My identity will not be disclosed to any person, except for the purposes described above and in the event of a medical emergency or if required by the law. My data will be processed electronically to determine the outcome of this study, and to provide it to the health authorities. My data may be transferred to other countries (such as the USA). For these purposes the sponsor has to protect my personal information even in countries whosedata privacylaws are less strict than those of this country.
In the United Kingdom information about an individual's HIV status is kept confidential within theNational Health Service. This is based in law, in the NHS Constitution, and in key NHS rules and procedures. It is also outlined in every NHS employee's contract of employment and in professional standards set by regulatory bodies.[11]The National AIDS Trust's Confidentiality in the NHS: Your Information, Your Rights[12]outlines these rights. All registered healthcare professionals must abide by these standards and if they are found to have breached confidentiality, they can face disciplinary action.
A healthcare worker shares confidential information with someone else who is, or is about to, provide the patient directly with healthcare to make sure they get the best possible treatment. They only share information that is relevant to their care in that instance, and with consent.
There are two ways to give consent:explicit consentorimplied consent. Explicit consent is when a patient clearly communicates to a healthcare worker, verbally or in writing or in some other way, that relevant confidential information can be shared. Implied consent means that a patient's consent to share personal confidential information is assumed. When personal confidential information is shared between healthcare workers, consent is taken as implied.
If a patient doesn't want a healthcare worker to share confidential health information, they need to make this clear and discuss the matter with healthcare staff. Patients have the right, in most situations, to refuse permission for ahealth careprofessional to share their information with another healthcare professional, even one giving them care—but are advised, where appropriate, about the dangers of this course of action, due to possible drug interactions.
However, in a few limited instances, a healthcare worker can share personal information without consent if it is in the public interest. These instances are set out in guidance from the General Medical Council,[13]which is the regulatory body for doctors. Sometimes the healthcare worker has to provide the information – if required by law or in response to a court order.
TheNational AIDS Trusthas written a guide for people living with HIV to confidentiality in the NHS.[14]
The ethical principle of confidentiality requires that information shared by a client with a therapist is not shared without consent, and that any decision to share information is guided by the ETHIC model: examine professional values, think about the ethical standards of the certifying association, hypothesize about different courses of action and their possible consequences, identify who will benefit and how in line with professional standards, and consult with supervisors and colleagues.[15]The confidentiality principle bolsters the therapeutic alliance, as it promotes an environment of trust. There are important exceptions to confidentiality, namely where it conflicts with the clinician's duty to warn or duty to protect. These include instances of suicidal behavior or homicidal plans, child abuse, elder abuse and dependent adult abuse. Information shared by a client with a therapist is generally considered privileged communication; however, in certain cases, and depending on the province or state, this privilege can be negated, a determination framed in terms of negative and positive freedom.[16]
Some legal jurisdictions recognise a category of commercial confidentiality whereby abusinessmay withhold information on the basis of perceived harm to "commercial interests".[17]For example,Coca-Cola's main syrupformularemains atrade secret.
Banking secrecy,[18][19]alternatively known as financial privacy, banking discretion, or bank safety,[20][21]is aconditional agreementbetween a bank and its clients that all foregoing activities remain secure, confidential, and private.[22]Most often associated withbanking in Switzerland, banking secrecy is prevalent inLuxembourg,Monaco,Hong Kong,Singapore,Ireland, andLebanon, among otheroff-shore banking institutions.
Also known as bank–client confidentiality or banker–client privilege,[23][24]the practice was started byItalian merchantsduring the 1600s nearNorthern Italy(a region that would become theItalian-speaking regionof Switzerland).
Confidentiality agreements that "seal"litigation settlementsare not uncommon, but this can leave regulators and society ignorant of public hazards. In the U.S. state of Washington, for example, journalists discovered that about two dozen medical malpractice cases had been improperly sealed by judges, leading to improperly weak discipline by the state Department of Health.[25]In the 1990s and early 2000s, theCatholic sexual abuse scandalinvolved a number of confidentiality agreements with victims.[26]Some states have passed laws that limit confidentiality. For example, in 1990 Florida passed a 'Sunshine in Litigation' law that limits confidentiality from concealing public hazards.[27]Washington state, Texas, Arkansas, and Louisiana have laws limiting confidentiality as well, although judicial interpretation has weakened the application of these types of laws.[28]In the U.S. Congress, a similar federal Sunshine in Litigation Act has been proposed but not passed in 2009, 2011, 2014, and 2015.[29]
The dictionary definition ofconfidentialityat Wiktionary
Quotations related toConfidentialityat Wikiquote
|
https://en.wikipedia.org/wiki/Confidentiality
|
In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key, which is typically created from the password using a key derivation function; this is known as an exhaustive key search. The approach does not depend on exploiting structural weaknesses; rather, it relies on trying a very large number of candidates.[citation needed]
A brute-force attack is acryptanalytic attackthat can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in aninformation-theoretically securemanner).[1]Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier.
When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because an exhaustive brute-force search takes too long. Longer passwords, passphrases and keys have exponentially more possible values, and are therefore exponentially more difficult to crack than shorter ones.[2]
Brute-force attacks can be made less effective byobfuscatingthe data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.[3]
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack,[4]with 'anti-hammering' for countermeasures.[5]
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.[6]
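For example, the number of candidate passwords grows as the alphabet size raised to the password length; assuming a 95-character printable-ASCII alphabet:

    # Illustrative growth of the search space with password length,
    # assuming a 95-character printable-ASCII alphabet.
    ALPHABET = 95
    for length in (6, 8, 10, 12):
        print(f"{length:>2} chars: {ALPHABET**length:.3e} candidates")
    # 6 chars: 7.351e+11, 8 chars: 6.634e+15, 10 chars: 5.987e+19, 12 chars: 5.404e+23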
The resources required for a brute-force attack growexponentiallywith increasingkey size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bitsymmetric keys(e.g.Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT·ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle.[7] Thus, simply cycling through the possible values of a 128-bit symmetric key (ignoring the actual computation needed to check each one) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ≈10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year (30×10^9 W × 365 × 24 × 3600 s = 9.46×10^17 J, or 262.7 TWh, about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0 (see Bremermann's limit).[citation needed]
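The figures quoted above can be re-derived in a few lines of Python. This is only a back-of-the-envelope check: it assumes a temperature of 300 K and, following the text's simplification, one bit flip of kT·ln 2 per key value.

```python
# Rough re-derivation of the Landauer-limit figures quoted above.
import math

k = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                  # assumed temperature in kelvins
bits_flipped = 2**128 - 1  # cycling a 128-bit counter through every value

energy_joules = bits_flipped * k * T * math.log(2)
print(f"{energy_joules:.3e} J")             # ~9.8e+17 J (the text rounds to ~1e18 J)
print(f"{energy_joules / 3.6e15:.1f} TWh")  # ~271 TWh (1 TWh = 3.6e15 J)
print(f"{energy_joules / (365 * 24 * 3600) / 1e9:.1f} GW for one year")  # ~31 GW
```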
However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generateentropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (seereversible computing), though no such computers are known to have been constructed.[citation needed]
As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers: modern graphics processing unit (GPU) technology[8][page needed] and field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance advantage, FPGAs from their energy efficiency per cryptographic operation. Both technologies bring the benefits of parallel processing to brute-force attacks: GPUs contain some hundreds of processing units, and FPGAs some thousands, making them much better suited to cracking passwords than conventional processors. For instance, in 2022, eight Nvidia RTX 4090 GPUs were linked together to test password strength using the software Hashcat, with results showing that 200 billion eight-character NTLM password combinations could be cycled through in 48 minutes.[9][10]
Various publications in the field of cryptographic analysis have demonstrated the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA cluster computer consumes the same energy as a single PC (600 W) but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions, from a single FPGA PCI Express card up to dedicated FPGA computers.[citation needed] WPA and WPA2 encryption have successfully been attacked by brute force with the workload reduced by a factor of 50 compared to conventional CPUs,[11][12] and by a factor of several hundred in the case of FPGAs.
Advanced Encryption Standard (AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 had a speed of 100 petaFLOPS, which could theoretically check 100 trillion (10^14) AES keys per second (assuming 1000 operations per check), but it would still require 3.67×10^55 years to exhaust the 256-bit key space.[13]
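That 3.67×10^55-year figure follows directly from dividing the size of the 256-bit key space by the assumed checking rate, as this short sketch shows:

```python
# Time to exhaust a 256-bit key space at the rate assumed in the text:
# 1e14 AES keys per second (100 petaFLOPS at ~1000 operations per key check).
keys_per_second = 1e14
key_space = 2**256

seconds = key_space / keys_per_second
years = seconds / (365 * 24 * 3600)
print(f"{years:.2e} years")  # -> ~3.67e+55 years
```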
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effectiverandom number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute-force have nevertheless beencrackedbecause thekey spaceto search through was found to be much smaller than originally thought, because of a lack of entropy in theirpseudorandom number generators. These includeNetscape's implementation ofSecure Sockets Layer(SSL) (cracked byIan GoldbergandDavid Wagnerin 1995) and aDebian/Ubuntuedition ofOpenSSLdiscovered in 2008 to be flawed.[14][15]A similar lack of implemented entropy led to the breaking ofEnigma'scode.[16][17]
Credential recycling is thehackingpractice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling ispass the hash, whereunsaltedhashed credentials are stolen and re-used without first being brute-forced.[18]
Certain types of encryption, by their mathematical properties, cannot be defeated by brute-force. An example of this isone-time padcryptography, where everycleartextbit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by theVenona project, generally relies not on pure cryptography, but upon mistakes in its implementation, such as the key pads not being truly random, intercepted keypads, or operators making mistakes.[19]
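A minimal sketch of the one-time-pad mechanism, using Python's secrets module for the truly random pad; the message here is illustrative, and a real pad must never be reused:

```python
# One-time pad: the key is truly random, as long as the message, and used once.
# XOR of the ciphertext with *any* candidate plaintext of the same length
# yields some plausible-looking key, which is why brute force learns nothing.
import secrets

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # truly random, same length as message

ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
assert recovered == message
```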
In case of anofflineattack where the attacker has gained access to the encrypted material, one can try key combinations without the risk of discovery or interference. In case ofonlineattacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring aCAPTCHAanswer or employingmulti-factor authentication), and/or locking accounts out after unsuccessful login attempts.[20][page needed]Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site.[21]Additionally, the MITRE D3FEND framework provides structured recommendations for defending against brute-force attacks by implementing strategies such as network traffic filtering, deploying decoy credentials, and invalidating authentication caches.[22]
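As a rough sketch of two of these online countermeasures (a per-account attempt counter with lockout, plus a growing delay after each failure); the thresholds and function names are illustrative and not drawn from any particular product:

```python
# Illustrative login throttling: lockout after repeated failures, plus an
# exponential back-off delay that makes each successive guess slower.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
failed_attempts = defaultdict(int)

def check_login(username, password, verify):
    """`verify` stands in for the site's real credential check."""
    if failed_attempts[username] >= MAX_ATTEMPTS:
        raise PermissionError("account locked after too many failures")
    if verify(username, password):
        failed_attempts[username] = 0
        return True
    failed_attempts[username] += 1
    time.sleep(2 ** failed_attempts[username])  # growing delay between failures
    return False
```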
In a reverse brute-force attack (also called password spraying), a single (usually common) password is tested against multiple usernames or encrypted files.[23]The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
|
https://en.wikipedia.org/wiki/Brute-force_attack
|
Incryptography, acipher(orcypher) is analgorithmfor performingencryptionordecryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term isencipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especiallyclassical cryptography.
Codes generally substitute strings of characters of different length in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning onto another: words and phrases can be coded as letters or numbers, with the correspondence fixed directly by the codebook. Codes primarily functioned to save time. Ciphers, by contrast, are algorithmic: the given input must be processed according to the cipher's procedure to be transformed or recovered. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a largecodebookwhich linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates.". When using a cipher the original information is known asplaintext, and the encrypted form asciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
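In effect a codebook is a lookup table. The following toy sketch uses the example code group from the text; a real codebook mapped thousands of words and phrases to arbitrary groups:

```python
# Toy codebook lookup using the example from the text.
codebook = {"UQJHSE": "Proceed to the following coordinates."}
decodebook = {plaintext: code for code, plaintext in codebook.items()}

print(codebook["UQJHSE"])                                    # decode a received group
print(decodebook["Proceed to the following coordinates."])   # -> "UQJHSE"
```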
The operation of a cipher usually depends on a piece of auxiliary information, called akey(or, in traditionalNSAparlance, acryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message, with some exceptions such asROT13andAtbash.
Most modern ciphers can be categorized in several ways: by whether they work on fixed-size blocks of symbols (block ciphers) or on a continuous stream of symbols (stream ciphers), and by whether the same key is used for both encryption and decryption (symmetric-key algorithms) or a different key is used for each (asymmetric-key algorithms).
Originating from the Sanskrit word for zero, शून्य (śūnya), via the Arabic word صفر (ṣifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.[1]
The termcipherwas later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is thecommercial telegraph codewhich was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges oftelegrams.
Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizesKanji(meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬".Stenographerssometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, usingsuperenciphermentto increase the security. In some cases the termscodesandciphersare used synonymously withsubstitutionandtransposition, respectively.
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding,codetext,decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility tocryptanalysisand the difficulty of managing a cumbersomecodebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
There are a variety of different types of encryption. Algorithms used earlier in thehistory of cryptographyare substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
The Caesar Cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifted each letter of the alphabet three places, wrapping the last letters around to the front, to write to Marcus Tullius Cicero in approximately 50 BC.[citation needed]
Historical pen and paper ciphers used in the past are sometimes known asclassical ciphers. They include simplesubstitution ciphers(such asROT13) andtransposition ciphers(such as aRail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.[2][3]
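The substitution in that example can be expressed as a simple mapping; the following sketch applies the text's key (G → P, O → L, D → X) and leaves every other character unchanged:

```python
# The monoalphabetic substitution described above, using the text's example key.
key = {"G": "P", "O": "L", "D": "X"}

def substitute(plaintext, key):
    # Letters not in the key (and spaces) pass through unchanged.
    return "".join(key.get(ch, ch) for ch in plaintext)

print(substitute("GOOD DOG", key))  # -> "PLLX XLP"
```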
In the 1640s, the Parliamentarian commander,Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during theEnglish Civil War.[4]The English theologian John Wilkins published a book in 1641 titled "Mercury, or The Secret and Swift Messenger" and described a musical cipher wherein letters of the alphabet were substituted for music notes.[5][6]This species of melodic cipher was depicted in greater detail by author Abraham Rees in his book Cyclopædia (1778).[7]
Simple ciphers were replaced bypolyalphabetic substitutionciphers (such as theVigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack.[8]It is possible to create a secure pen and paper cipher based on aone-time pad, but these have other disadvantages.
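A minimal Vigenère sketch: each plaintext letter is shifted by the corresponding letter of a repeating key word. The key word "KEY" below is purely illustrative (the article's "PLSX TWF" example does not state its key word, so the output differs):

```python
# Vigenère encryption: shift each letter by the matching key-word letter,
# with the key word repeating over the length of the message.
def vigenere_encrypt(plaintext, keyword):
    out, i = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(keyword[i % len(keyword)].upper()) - ord("A")
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            i += 1
        else:
            out.append(ch)  # spaces and punctuation pass through
    return "".join(out)

print(vigenere_encrypt("GOOD DOG", "KEY"))  # -> "QSMN HMQ" with this illustrative key
```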
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. Inrotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the BritishBombewere invented to crack these encryption methods.
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By the type of key used, ciphers are divided into symmetric-key algorithms (private-key cryptography), in which the same key is used for encryption and decryption, and asymmetric-key algorithms (public-key cryptography), in which two different but related keys are used for encryption and decryption.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (Advanced Encryption Standard) aimed to overcome flaws in the design of DES (Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attack are ineffective against AES due to its design structure.
Ciphers can also be distinguished by the type of input data: block ciphers, which encrypt fixed-size blocks of data, and stream ciphers, which encrypt continuous streams of data.
In a pure mathematical attack (i.e., one lacking any other information to help break a cipher), two factors above all count: the computational power available, that is, the computing power which can be brought to bear on the problem, and the key size, that is, the number of possible keys that must be tried.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and a desired difficulty level, and then choose the key length accordingly.
Claude Shannonproved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once:one-time pad.[9]
|
https://en.wikipedia.org/wiki/Cipher
|
Cryptography, orcryptology(fromAncient Greek:κρυπτός,romanized:kryptós"hidden, secret"; andγράφεινgraphein, "to write", or-λογία-logia, "study", respectively[1]), is the practice and study of techniques forsecure communicationin the presence ofadversarialbehavior.[2]More generally, cryptography is about constructing and analyzingprotocolsthat prevent third parties or the public from reading private messages.[3]Modern cryptography exists at the intersection of the disciplines of mathematics,computer science,information security,electrical engineering,digital signal processing, physics, and others.[4]Core concepts related toinformation security(data confidentiality,data integrity,authentication, andnon-repudiation) are also central to cryptography.[5]Practical applications of cryptography includeelectronic commerce,chip-based payment cards,digital currencies,computer passwords, andmilitary communications.
Cryptography prior to the modern age was effectively synonymous withencryption, converting readable information (plaintext) to unintelligiblenonsensetext (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literatureoften uses the names"Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for theeavesdroppingadversary.[6]Since the development ofrotor cipher machinesinWorld War Iand the advent of computers inWorld War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based onmathematical theoryand computer science practice; cryptographicalgorithmsare designed aroundcomputational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements ininteger factorizationalgorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted.Information-theoretically secureschemes that provably cannot be broken even with unlimited computing power, such as theone-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raiseda number of legal issuesin theInformation Age. Cryptography's potential for use as a tool for espionage andseditionhas led many governments to classify it as a weapon and to limit or even prohibit its use and export.[7]In some jurisdictions where the use of cryptography is legal, laws permit investigators tocompel the disclosureofencryption keysfor documents relevant to an investigation.[8][9]Cryptography also plays a major role indigital rights managementandcopyright infringementdisputes with regard todigital media.[10]
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story byEdgar Allan Poe.[11][12]
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (calledplaintext) into an unintelligible form (calledciphertext).[13]Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. Acipher(or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such asauthenticationor integrity checks.
There are two main types of cryptosystems:symmetricandasymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key.[14]Examples of asymmetric systems includeDiffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), andPost-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard).[15]Insecure symmetric algorithms include children's language tangling schemes such asPig Latinor othercant, and all historical cryptographic schemes, however seriously intended, prior to the invention of theone-time padearly in the 20th century.
Incolloquialuse, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with acode word(for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysisis the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English,[16]while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis.[17][18]English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above.RFC2828advises thatsteganographyis sometimes included in cryptology.[19]
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.[20][21]
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion ofmessagesfrom a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors oreavesdropperswithout secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensuresecrecyin communications, such as those ofspies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication,digital signatures,interactive proofsandsecure computation, among others.
The main classical cipher types aretransposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), andsubstitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in theLatin alphabet).[22]Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was theCaesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet.[23]Suetoniusreports thatJulius Caesarused it with a shift of three to communicate with his generals.Atbashis an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone inEgypt(c.1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
TheGreeks of Classical timesare said to have known of ciphers (e.g., thescytaletransposition cipher claimed to have been used by theSpartanmilitary).[24]Steganography(i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, fromHerodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[13]Other steganography methods involve 'hiding in plain sight,' such as using amusic cipherto disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use ofinvisible ink,microdots, anddigital watermarksto conceal information.
In India, the 2000-year-oldKama SutraofVātsyāyanaspeaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.[13]
InSassanid Persia, there were two secret scripts, according to the Muslim authorIbn al-Nadim: thešāh-dabīrīya(literally "King's script") which was used for official correspondence, and therāz-saharīyawhich was used to communicate secret messages with other countries.[25]
David Kahnnotes inThe Codebreakersthat modern cryptology originated among theArabs, the first people to systematically document cryptanalytic methods.[26]Al-Khalil(717–786) wrote theBook of Cryptographic Messages, which contains the first use ofpermutations and combinationsto list all possible Arabic words with and without vowels.[27]
Ciphertexts produced by aclassical cipher(and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery offrequency analysis, nearly all such ciphers could be broken by an informed attacker.[28]Such classical ciphers still enjoy popularity today, though mostly aspuzzles(seecryptogram). TheArab mathematicianandpolymathAl-Kindi wrote a book on cryptography entitledRisalah fi Istikhraj al-Mu'amma(Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.[29][30]
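Frequency analysis itself needs nothing more than counting. A minimal sketch, with a toy ciphertext standing in for a longer intercepted message:

```python
# Count letter frequencies in a ciphertext from a monoalphabetic substitution;
# comparing the counts with typical English letter frequencies (E, T, A, ...)
# is often enough to recover the substitution key.
from collections import Counter

ciphertext = "PLLX XLP PLLX XLP"  # toy ciphertext; real analysis needs more text
counts = Counter(ch for ch in ciphertext if ch.isalpha())
for letter, n in counts.most_common():
    print(letter, n)
```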
Language letter frequencies may offer little help for some extended historical encryption techniques such ashomophonic cipherthat tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of thepolyalphabetic cipher, most clearly byLeon Battista Albertiaround the year 1467, though there is some indication that it was already known to Al-Kindi.[30]Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automaticcipher device, a wheel that implemented a partial realization of his invention. In theVigenère cipher, apolyalphabetic cipher, encryption uses akey word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th centuryCharles Babbageshowed that the Vigenère cipher was vulnerable toKasiski examination, but this was first published about ten years later byFriedrich Kasiski.[31]
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as thecipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's owncipher disk,Johannes Trithemius'tabula rectascheme, andThomas Jefferson'swheel cypher(not publicly known, and reinvented independently byBazeriesaround 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among themrotor machines—famously including theEnigma machineused by the German government and military from the late 1920s and duringWorld War II.[32]The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[33]
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970sIBMpersonnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States.[34]In 1976Whitfield DiffieandMartin Hellmanpublished the Diffie–Hellman key exchange algorithm.[35]In 1977 theRSA algorithmwas published inMartin Gardner'sScientific Americancolumn.[36]Since then, cryptography has become a widely used tool in communications,computer networks, andcomputer securitygenerally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and some other systems are secure, but even so, a proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible, but they are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.[37]
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so when specifying key lengths, the required key lengths are similarly advancing.[38] The potential effects of quantum computing are already being considered by some cryptographic system designers developing post-quantum cryptography.[when?] The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.[5]
Claude Shannon's two papers, his1948 paperoninformation theory, and especially his1949 paperon cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography.[39][40]His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis",[41]and as having turned cryptography from an "art to a science".[42]As a result of his contributions and work, he has been described as the "founding father of modern cryptography".[43]
Prior to the early 20th century, cryptography was mainly concerned withlinguisticandlexicographicpatterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory,computational complexity, statistics,combinatorics,abstract algebra,number theory, andfinite mathematics.[44]Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems andquantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation onbinarybitsequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[35]
Symmetric key ciphers are implemented as eitherblock ciphersorstream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
TheData Encryption Standard(DES) and theAdvanced Encryption Standard(AES) are block cipher designs that have been designatedcryptography standardsby the US government (though DES's designation was finally withdrawn after the AES was adopted).[45]Despite its deprecation as an official standard, DES (especially its still-approved and much more securetriple-DESvariant) remains quite popular; it is used across a wide range of applications, from ATM encryption[46]toe-mail privacy[47]andsecure remote access.[48]Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such asFEAL.[5][49]
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like theone-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material.RC4is a widely used stream cipher.[5]Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of aPseudorandom number generator) and applying anXORoperation to each bit of the plaintext with each bit of the keystream.[50]
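A toy illustration of the keystream-XOR idea described above. The keystream here comes from Python's non-cryptographic random module, seeded with the key, purely to show the mechanism; a real stream cipher derives its keystream from a secure internal state:

```python
# Toy stream-cipher mechanism: XOR the plaintext with a keystream byte by byte.
# random.Random is NOT cryptographically secure; it is used only to illustrate
# that the same key regenerates the same keystream for decryption.
import random

def xor_keystream(data, key):
    rng = random.Random(key)  # toy keystream generator seeded with the key
    keystream = bytes(rng.randrange(256) for _ in range(len(data)))
    return bytes(d ^ k for d, k in zip(data, keystream))

ciphertext = xor_keystream(b"attack at dawn", "shared secret")
plaintext = xor_keystream(ciphertext, "shared secret")  # XOR is its own inverse
print(plaintext)  # -> b"attack at dawn"
```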
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[5][51] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. Specific hash functions, such as the MD and SHA families, are discussed further below.
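The difference between a bare digest and a keyed MAC can be shown with the standard library; the message and key below are illustrative:

```python
# A plain hash versus a keyed MAC: anyone can recompute the SHA-256 digest,
# but only holders of the secret key can produce or verify the HMAC tag.
import hashlib
import hmac

message = b"wire $100 to account 42"
digest = hashlib.sha256(message).hexdigest()          # unkeyed digest

key = b"shared secret key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # keyed MAC tag

# The receiver recomputes the tag and compares it in constant time.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
```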
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is thekey managementnecessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as thesquareof the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
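A quick way to see that growth: with n parties, each distinct pair needs its own key, giving n(n − 1)/2 keys, on the order of n² (a minimal sketch):

```python
# Number of distinct pairwise symmetric keys needed among n parties.
def pairwise_keys(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_keys(10))    # -> 45
print(pairwise_keys(1000))  # -> 499500
```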
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion ofpublic-key(also, more generally, calledasymmetric key) cryptography in which two different but mathematically related keys are used—apublickey and aprivatekey.[54]A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[55]The historianDavid Kahndescribed public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[56]
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, thepublic keyis used for encryption, while theprivateorsecret keyis used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting theDiffie–Hellman key exchangeprotocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on ashared encryption key.[35]TheX.509standard defines the most commonly used format forpublic key certificates.[57]
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 byRonald Rivest,Adi Shamir, andLen Adleman, whose solution has since become known as theRSA algorithm.[58]
TheDiffie–HellmanandRSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Otherasymmetric-key algorithmsinclude theCramer–Shoup cryptosystem,ElGamal encryption, and variouselliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[59]Reportedly, around 1970,James H. Ellishad conceived the principles of asymmetric key cryptography. In 1973,Clifford Cocksinvented a solution that was very similar in design rationale to RSA.[59][60]In 1974,Malcolm J. Williamsonis claimed to have developed the Diffie–Hellman key exchange.[61]
Public-key cryptography is also used for implementingdigital signatureschemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else toforge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one forsigning, in which a secret key is used to process the message (or a hash of the message, or both), and one forverification, in which the matching public key is used with the message to check the validity of the signature. RSA andDSAare two of the most popular digital signature schemes. Digital signatures are central to the operation ofpublic key infrastructuresand many network security schemes (e.g.,SSL/TLS, manyVPNs, etc.).[49]
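A sign/verify sketch along these lines, assuming the widely used third-party Python cryptography package is installed; this illustrates the general pattern, not a complete or hardened protocol:

```python
# Digital signature sketch: the private key signs (a hash of) the message;
# anyone holding the public key can check the signature.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I, Alice, agree to the contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was altered.
public_key.verify(signature, message, pss, hashes.SHA256())
```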
Public-key algorithms are most often based on thecomputational complexityof "hard" problems, often fromnumber theory. For example, the hardness of RSA is related to theinteger factorizationproblem, while Diffie–Hellman and DSA are related to thediscrete logarithmproblem. The security ofelliptic curve cryptographyis based on number theoretic problems involvingelliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such asmodularmultiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonlyhybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[5]
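A hybrid-encryption sketch of the kind described above, again assuming the third-party cryptography package: the bulk message is encrypted with a fast symmetric scheme (Fernet, which uses AES), and only the short symmetric key is encrypted under the recipient's RSA public key:

```python
# Hybrid encryption: symmetric cipher for the message, public-key wrap for the key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender side: encrypt the bulk data symmetrically, then wrap the session key.
session_key = Fernet.generate_key()
bulk_message = b"a long message ... " * 100
ciphertext = Fernet(session_key).encrypt(bulk_message)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient side: unwrap the session key, then decrypt the bulk data.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == bulk_message
```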
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance).MD4is a long-used hash function that is now broken;MD5, a strengthened variant of MD4, is also widely used but broken in practice. The USNational Security Agencydeveloped the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew;SHA-1is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; theSHA-2family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness ofNIST's overall hash algorithm toolkit."[52]Thus, ahash function design competitionwas meant to select a new U.S. national standard, to be calledSHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced thatKeccakwould be the new SHA-3 hash algorithm.[53]Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work atBell Labs,Claude Shannonproved that theone-time padcipher is unbreakable, provided the key material is trulyrandom, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[62]Mostciphers, apart from the one-time pad, can be broken with enough computational effort bybrute force attack, but the amount of effort needed may beexponentiallydependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts.[5] Finally, in a man-in-the-middle attack Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient.[63] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute-force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations.[64] This is a considerable improvement over brute-force attacks.
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty ofinteger factorizationofsemiprimesand the difficulty of calculatingdiscrete logarithms, both of which are not yet proven to be solvable inpolynomial time(P) using only a classicalTuring-completecomputer. Much public-key cryptanalysis concerns designing algorithms inPthat can solve these problems, or using other technologies, such asquantum computers. For instance, the best-known algorithms for solving theelliptic curve-basedversion of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are calledside-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use atiming attackto break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known astraffic analysis[65]and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues.Social engineeringand other attacks against humans (e.g., bribery,extortion,blackmail, espionage,rubber-hose cryptanalysisor torture) are usually employed due to being more cost-effective and feasible to perform in a reasonable amount of time compared to pure cryptanalysis by a high margin.
Much of the theoretical work in cryptography concerns cryptographic primitives (algorithms with basic cryptographic properties) and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, orcryptosystem. Cryptosystems (e.g.,El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g.,chosen-plaintext attack (CPA)security in therandom oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protectedbackupdata). Such cryptosystems are sometimes calledcryptographic protocols.
Some widely known cryptosystems include RSA,Schnorr signature,ElGamal encryption, andPretty Good Privacy(PGP). More complex cryptosystems includeelectronic cash[66]systems,signcryptionsystems, etc. Some more 'theoretical'[clarification needed]cryptosystems includeinteractive proof systems,[67](likezero-knowledge proofs)[68]and systems forsecret sharing.[69][70]
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into lightweight algorithms that are better suited for such environments, which impose strict constraints on power consumption, processing power, and security.[71] Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to meet the standard set by the National Institute of Standards and Technology.[72]
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private-key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.[73] Some tools, such as BitLocker and VeraCrypt, are generally not based on public–private key cryptography: VeraCrypt, for example, uses a password hash to generate the single key used for encryption, although it can be configured to work with public–private key systems. The open-source C library OpenSSL provides free and open-source encryption software and tools. The most commonly used encryption cipher suite is AES,[74] as it has hardware acceleration on all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher commonly used on mobile devices, which are typically ARM-based and may lack the AES-NI instruction set extension.
Cryptography can be used to secure communications by encrypting them. Websites use encryption viaHTTPS.[75]"End-to-end" encryption, where only sender and receiver can read messages, is implemented for email inPretty Good Privacyand for secure messaging in general inWhatsApp,SignalandTelegram.[75]
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker.[75]Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.[75]
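A sketch of that hash-and-compare login flow using the standard library's PBKDF2; the iteration count and salt handling are illustrative, and real systems use a vetted password-hashing scheme with per-user salts stored alongside the hash:

```python
# Hash-and-compare login flow: the system stores only a salted password hash,
# then recomputes it at login and compares; the plaintext is never stored.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

# At registration: store (salt, stored_hash), never the plaintext password.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)

# At login: recompute from the submitted password and compare in constant time.
attempt = hash_password("correct horse battery staple", salt)
print(hmac.compare_digest(attempt, stored_hash))  # -> True
```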
Encryption is sometimes used to protect one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.[75]
Cryptographic techniques enablecryptocurrencytechnologies, such asdistributed ledger technologies(e.g.,blockchains), which financecryptoeconomicsapplications such asdecentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to:cryptographic keys, cryptographic hash function,asymmetric (public key) encryption,Multi-Factor Authentication (MFA),End-to-End Encryption (E2EE), andZero Knowledge Proofs (ZKP).
Cryptography has long been of interest to intelligence gathering andlaw enforcement agencies.[9]Secret communications may be criminal or eventreasonous.[citation needed]Because of its facilitation ofprivacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. InChinaandIran, a license is still required to use cryptography.[7]Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws inBelarus,Kazakhstan,Mongolia,Pakistan, Singapore,Tunisia, andVietnam.[76]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography.[9]One particularly important issue has been theexport of cryptographyand cryptographic software and hardware. Probably because of the importance of cryptanalysis inWorld War IIand an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on theUnited States Munitions List.[77]Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
In the 1990s, there were several challenges to US export regulation of cryptography. After thesource codeforPhilip Zimmermann'sPretty Good Privacy(PGP) encryption program found its way onto the Internet in June 1991, a complaint byRSA Security(then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and theFBI, though no charges were ever filed.[78][79]Daniel J. Bernstein, then a graduate student atUC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based onfree speechgrounds. The 1995 caseBernstein v. United Statesultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected asfree speechby the United States Constitution.[80]
In 1996, thirty-nine countries signed theWassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[81]Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[82]there are no longer very many restrictions on key sizes in US-exportedmass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourcedweb browserssuch asFirefoxorInternet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., viaTransport Layer Security). TheMozilla ThunderbirdandMicrosoft OutlookE-mail clientprograms similarly can transmit and receive emails via TLS, and can send and receive email encrypted withS/MIME. Many Internet users do not realize that their basic application software contains such extensivecryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.[citation needed]
Another contentious issue connected to cryptography in the United States is the influence of theNational Security Agencyon cipher development and policy.[9]The NSA was involved with the design ofDESduring its development atIBMand its consideration by theNational Bureau of Standardsas a possible Federal Standard for cryptography.[83]DES was designed to be resistant todifferential cryptanalysis,[84]a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s.[85]According toSteven Levy, IBM discovered differential cryptanalysis,[79]but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993Clipper chipaffair, an encryption microchip intended to be part of theCapstonecryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (calledSkipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation ofKerckhoffs's Principle, as the scheme included a specialescrow keyheld by the government for use by law enforcement (i.e.wiretapping).[79]
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use ofcopyrightedmaterial, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. PresidentBill Clintonsigned theDigital Millennium Copyright Act(DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[86]This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in theEU Copyright Directive. Similar restrictions are called for by treaties signed byWorld Intellectual Property Organizationmember-states.
TheUnited States Department of JusticeandFBIhave not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one.Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into anIntelsecurity design for fear of prosecution under the DMCA.[87]CryptologistBruce Schneierhas argued that the DMCA encouragesvendor lock-in, while inhibiting actual measures toward cyber-security.[88]BothAlan Cox(longtimeLinux kerneldeveloper) andEdward Felten(and some of his students at Princeton) have encountered problems related to the Act.Dmitry Sklyarovwas arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible forBlu-rayandHD DVDcontent scrambling werediscovered and released onto the Internet. In both cases, theMotion Picture Association of Americasent out numerous DMCA takedown notices, and there was a massive Internet backlash[10]triggered by the perceived impact of such notices onfair useandfree speech.
In the United Kingdom, theRegulation of Investigatory Powers Actgives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[8]Successful prosecutions have occurred under the Act; the first, in 2009,[89]resulted in a term of 13 months' imprisonment.[90]Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case ofUnited States v. Fricosuaddressed whether a search warrant can compel a person to reveal anencryptionpassphraseor password.[91]TheElectronic Frontier Foundation(EFF) argued that this is a violation of the protection from self-incrimination given by theFifth Amendment.[92]In 2012, the court ruled that under theAll Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[93]
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016FBI–Apple encryption disputeconcerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential countermeasure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (such as that of a drive which has been securely wiped).
|
https://en.wikipedia.org/wiki/Cryptography
|
Incryptography,Galois/Counter Mode(GCM)[1]is amode of operationforsymmetric-keycryptographicblock cipherswhich is widely adopted for its performance. GCM throughput rates for state-of-the-art, high-speed communication channels can be achieved with inexpensive hardware resources.[2]
The GCM algorithm provides both data authenticity (integrity) and confidentiality and belongs to the class ofauthenticated encryption with associated data (AEAD)methods. This means that as input it takes a key K, some plaintext P, and some associated data AD; it then encrypts the plaintext using the key to produce ciphertext C, and computes an authentication tag T from the ciphertext and the associated data (which remains unencrypted). A recipient with knowledge of K, upon reception of AD, C and T, can decrypt the ciphertext to recover the plaintext P and can check the tag T to ensure that neither ciphertext nor associated data were tampered with.
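To illustrate this interface, the following is a minimal Python sketch using the AESGCM class from the third-party cryptography package (an assumed dependency, not something prescribed by GCM itself); the key size, nonce size, and payloads are illustrative choices.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)                    # K
nonce = os.urandom(12)                                       # 96-bit IV, the recommended size
aad = b"packet header (authenticated but not encrypted)"     # AD

aesgcm = AESGCM(key)
ct_and_tag = aesgcm.encrypt(nonce, b"secret payload", aad)   # returns C with T appended
plaintext = aesgcm.decrypt(nonce, ct_and_tag, aad)           # raises InvalidTag on tampering
assert plaintext == b"secret payload"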
GCM uses a block cipher with block size 128 bits (commonly AES-128) operated in counter mode for encryption, and uses arithmetic in the Galois field GF(2^128) to compute the authentication tag; hence the name.
Galois Message Authentication Code(GMAC) is an authentication-only variant of the GCM which can form an incrementalmessage authentication code. Both GCM and GMAC can acceptinitialization vectorsof arbitrary length.
Different block cipher modes of operation can have significantly different performance and efficiency characteristics, even when used with the same block cipher. GCM can take full advantage ofparallel processingand implementing GCM can make efficient use of aninstruction pipelineor a hardware pipeline. By contrast, thecipher block chaining(CBC) mode of operation incurspipeline stallsthat hamper its efficiency and performance.
Like in normalcounter mode, blocks are numbered sequentially, and then this block number is combined with aninitialization vector(IV) and encrypted with a block cipherE, usuallyAES. The result of this encryption is thenXORedwith theplaintextto produce theciphertext. Like all counter modes, this is essentially astream cipher, and so it is essential that a different IV is used for each stream that is encrypted.
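As a rough illustration of this construction, the sketch below derives a single counter-mode keystream block with a raw AES block-cipher call (via the third-party cryptography package, an assumed dependency) and XORs it with one plaintext block; the 96-bit IV plus 32-bit counter layout, and the starting counter value of 2 for the first plaintext block, follow the usual GCM convention, and the helper name is purely illustrative.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_keystream_block(key: bytes, iv: bytes, counter: int) -> bytes:
    # One counter-mode keystream block: E_K(IV || counter), with a 32-bit big-endian counter.
    block = iv + counter.to_bytes(4, "big")
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()   # raw single-block encryption
    return enc.update(block) + enc.finalize()

key, iv = os.urandom(16), os.urandom(12)
plaintext_block = b"exactly 16 bytes"
keystream = ctr_keystream_block(key, iv, counter=2)   # counter value 1 masks the authentication tag
ciphertext_block = bytes(p ^ k for p, k in zip(plaintext_block, keystream))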
The ciphertext blocks are considered coefficients of apolynomialwhich is then evaluated at a key-dependent pointH, usingfinite field arithmetic. The result is then encrypted, producing anauthentication tagthat can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag.
GCM combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than encryption algorithms, like CBC, which use chaining modes. The GF(2^128) field used is defined by the polynomial x^128 + x^7 + x^2 + x + 1.
The authentication tag is constructed by feeding blocks of data into the GHASH function and encrypting the result. This GHASH function is defined by GHASH(H, A, C) = X_{m+n+1},
where H = E_k(0^128) is the hash key, a string of 128 zero bits encrypted using the block cipher, A is data which is only authenticated (not encrypted), C is the ciphertext, m is the number of 128-bit blocks in A (rounded up), n is the number of 128-bit blocks in C (rounded up), and the variable X_i for i = 0, ..., m + n + 1 is defined below.[3]
First, the authenticated text and the cipher text are separately zero-padded to multiples of 128 bits and combined into a single message S_i: S_i = A_i for i = 1, ..., m − 1; S_m = A*_m ∥ 0^(128−v); S_{m+i} = C_i for i = 1, ..., n − 1; S_{m+n} = C*_n ∥ 0^(128−u); and S_{m+n+1} = len(A) ∥ len(C), with A*_m and C*_n denoting the final (possibly partial) blocks of A and C,
where len(A) and len(C) are the 64-bit representations of the bit lengths ofAandC, respectively,v= len(A) mod 128 is the bit length of the final block ofA,u= len(C) mod 128 is the bit length of the final block ofC, and∥{\displaystyle \parallel }denotes concatenation of bit strings.
Then X_i is defined as: X_0 = 0 and X_i = S_1 · H^i ⊕ S_2 · H^(i−1) ⊕ ... ⊕ S_i · H for i = 1, ..., m + n + 1, or equivalently X_i = (X_{i−1} ⊕ S_i) · H.
The second form is an efficient iterative algorithm (each X_i depends on X_{i−1}) produced by applying Horner's method to the first. Only the final X_{m+n+1} remains an output.
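A minimal Python sketch of this Horner-style evaluation is given below; it assumes blocks have already been padded as above and converted to 128-bit integers in big-endian order, uses the standard GCM reduction constant, and the names gf_mult and ghash are purely illustrative.

R = 0xE1 << 120   # GCM reduction constant: the bits 11100001 followed by 120 zero bits

def gf_mult(x: int, y: int) -> int:
    # Multiply two 128-bit field elements in GF(2^128) as used by GCM,
    # scanning the bits of x from the most significant end (GCM's bit 0).
    z, v = 0, y
    for i in range(127, -1, -1):
        if (x >> i) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash(h: int, blocks) -> int:
    # Horner's method: X_i = (X_{i-1} XOR S_i) * H over the padded blocks S_1 ... S_{m+n+1}.
    x = 0
    for s in blocks:
        x = gf_mult(x ^ s, h)
    return x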
If it is necessary to parallelize the hash computation, this can be done by interleaving k times: the blocks are split into k interleaved subsequences, each partial sum is advanced by Horner's method with multiplier H^k instead of H, and the partial results are then combined.
If the length of the IV is not 96 bits, the GHASH function is used to calculate Counter 0: Counter 0 = GHASH(H, {}, IV), where {} denotes an empty (zero-length) authenticated-data input.
GCM was designed byJohn Viegaand David A. McGrew to be an improvement toCarter–Wegman counter mode(CWC mode).[4]
In November 2007,NISTannounced the release of NIST Special Publication 800-38DRecommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMACmaking GCM and GMAC official standards.[5]
GCM mode is used in theIEEE 802.1AE(MACsec) Ethernet security,WPA3-EnterpriseWifi security protocol,IEEE 802.11ad(also dubbedWiGig), ANSI (INCITS)Fibre ChannelSecurity Protocols (FC-SP),IEEE P1619.1 tape storage,IETFIPsecstandards,[6][7]SSH,[8]TLS1.2[9][10]and TLS 1.3.[11]AES-GCM is included in theNSA Suite B Cryptographyand its latest replacement in 2018Commercial National Security Algorithm (CNSA)suite.[12]GCM mode is used in theSoftEther VPNserver and client,[13]as well asOpenVPNsince version 2.4.
GCM requires one block cipher operation and one 128-bit multiplication in theGalois fieldper each block (128 bit) of encrypted and authenticated data. The block cipher operations are easily pipelined or parallelized; the multiplication operations are easily pipelined and can be parallelized with some modest effort (either by parallelizing the actual operation, by adaptingHorner's methodper the original NIST submission, or both).
Intel has added thePCLMULQDQinstruction, highlighting its use for GCM.[14]In 2011, SPARC added the XMULX and XMULXHI instructions, which also perform 64 × 64 bitcarry-less multiplication. In 2015, SPARC added the XMPMUL instruction, which performs XOR multiplication of much larger values, up to 2048 × 2048 bit input values producing a 4096-bit result. These instructions enable fast multiplication over GF(2n), and can be used with any field representation.
Impressive performance results are published for GCM on a number of platforms. Käsper and Schwabe described a "Faster andTiming-AttackResistant AES-GCM"[15]that achieves 10.68 cycles per byte AES-GCM authenticated encryption on 64-bit Intel processors. Dai et al. report 3.5 cycles per byte for the same algorithm when using Intel'sAES-NIand PCLMULQDQ instructions.Shay GueronandVlad Krasnovachieved 2.47 cycles per byte on the 3rd generation Intel processors. Appropriate patches were prepared for theOpenSSLandNSSlibraries.[16]
When both authentication and encryption need to be performed on a message, a software implementation can achieve speed gains by overlapping the execution of those operations. Performance is increased by exploitinginstruction-level parallelismby interleaving operations. This process is called function stitching,[17]and while in principle it can be applied to any combination of cryptographic algorithms, GCM is especially suitable. Manley and Gregg[18]show the ease of optimizing when using function stitching with GCM. They present a program generator that takes an annotated C version of a cryptographic algorithm and generates code that runs well on the target processor.
GCM has been criticized in the embedded world (for example by Silicon Labs) because the parallel processing is not suited for performant use of cryptographic hardware engines. As a result, GCM reduces the performance of encryption for some of the most performance-sensitive devices.[19]Specialized hardware accelerators forChaCha20-Poly1305are less complex compared to AES accelerators.[20]
According to the authors' statement, GCM is unencumbered by patents.[21]
GCM is proven secure in the concrete security model.[22] It is secure when it is used with a block cipher that is indistinguishable from a random permutation; however, security depends on choosing a unique initialization vector for every encryption performed with the same key (see stream cipher attack). For any given key and initialization vector value, GCM is limited to encrypting 2^39 − 256 bits of plain text (64 GiB). NIST Special Publication 800-38D[5] includes guidelines for initialization vector selection and limits the number of possible initialization vector values for a single key. As the security assurance of GCM degrades with more data being processed using the same key, the total number of blocks of plaintext and AD protected during the lifetime of a single key should be limited to 2^64.[5]
The authentication strength depends on the length of the authentication tag, like with all symmetric message authentication codes. The use of shorter authentication tags with GCM is discouraged. The bit-length of the tag, denoted t, is a security parameter. In general, t may be any one of the following five values: 128, 120, 112, 104, or 96. For certain applications, t may be 64 or 32, but the use of these two tag lengths constrains the length of the input data and the lifetime of the key. Appendix C in NIST SP 800-38D provides guidance for these constraints (for example, if t = 32 and the maximal packet size is 2^10 bytes, the authentication decryption function should be invoked no more than 2^11 times; if t = 64 and the maximal packet size is 2^15 bytes, the authentication decryption function should be invoked no more than 2^32 times).
As with any message authentication code, if the adversary chooses a t-bit tag at random, it is expected to be accepted as correct for given data with probability 2^−t. With GCM, however, an adversary can increase their likelihood of success by a factor of n, where n is the total number of blocks in the ciphertext plus any additional authenticated data (AAD), giving a forgery probability of about n·2^−t. Nevertheless, such optimally chosen tags still fail with probability about 1 − n·2^−t. Moreover, GCM is well suited neither for use with very short tag lengths nor for very long messages.
Ferguson and Saarinen independently described how an attacker can perform optimal attacks against GCM authentication, which meet the lower bound on its security. Ferguson showed that, if n denotes the total number of blocks in the encoding (the input to the GHASH function), then there is a method of constructing a targeted ciphertext forgery that is expected to succeed with a probability of approximately n·2^−t. If the tag length t is shorter than 128, then each successful forgery in this attack increases the probability that subsequent targeted forgeries will succeed, and leaks information about the hash subkey, H. Eventually, H may be compromised entirely and the authentication assurance is completely lost.[23]
Independent of this attack, an adversary may attempt to systematically guess many different tags for a given input to authenticated decryption and thereby increase the probability that one (or more) of them, eventually, will be considered valid. For this reason, the system or protocol that implements GCM should monitor and, if necessary, limit the number of unsuccessful verification attempts for each key.
Saarinen described GCM weak keys.[24] This work gives some valuable insights into how polynomial hash-based authentication works. More precisely, this work describes a particular way of forging a GCM message, given a valid GCM message, that works with probability of about n·2^−128 for messages that are n × 128 bits long. However, this work does not show a more effective attack than was previously known; the success probability in observation 1 of this paper matches that of lemma 2 from the INDOCRYPT 2004 analysis (setting w = 128 and l = n × 128). Saarinen also described a GCM variant Sophie Germain Counter Mode (SGCM) based on Sophie Germain primes.
|
https://en.wikipedia.org/wiki/Galois/Counter_Mode
|
Incryptography, anHMAC(sometimes expanded as eitherkeyed-hash message authentication codeorhash-based message authentication code) is a specific type ofmessage authentication code(MAC) involving acryptographic hash functionand a secret cryptographic key. As with any MAC, it may be used to simultaneously verify both thedata integrityand authenticity of a message. An HMAC is a type of keyed hash function that can also be used in a key derivation scheme or a key stretching scheme.
HMAC can provide authentication using ashared secretinstead of usingdigital signatureswithasymmetric cryptography. It trades off the need for a complexpublic key infrastructureby delegating the key exchange to the communicating parties, who are responsible for establishing and using a trusted channel to agree on the key prior to communication.
Any cryptographic hash function, such asSHA-2orSHA-3, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-x, wherexis the hash function used (e.g. HMAC-SHA256 or HMAC-SHA3-512). Thecryptographic strengthof the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output, and the size and quality of the key.[1]
HMAC uses two passes of hash computation. Before either pass, the secret key is used to derive two keys – inner and outer. Next, the first pass of the hash algorithm produces an internal hash derived from the message and the inner key. The second pass produces the final HMAC code derived from the inner hash result and the outer key. Thus the algorithm provides better immunity againstlength extension attacks.
An iterative hash function (one that uses theMerkle–Damgård construction) breaks up a message into blocks of a fixed size and iterates over them with acompression function. For example, SHA-256 operates on 512-bit blocks. The size of the output of HMAC is the same as that of the underlying hash function (e.g., 256 and 512 bits in the case of SHA-256 and SHA3-512, respectively), although it can be truncated if desired.
HMAC does not encrypt the message. Instead, the message (encrypted or not) must be sent alongside the HMAC hash. Parties with the secret key will hash the message again themselves, and if it is authentic, the received and computed hashes will match.
The definition and analysis of the HMAC construction was first published in 1996 in a paper byMihir Bellare,Ran Canetti, andHugo Krawczyk,[1][2]and they also wrote RFC 2104 in 1997.[3]: §2The 1996 paper also defined a nested variant called NMAC (Nested MAC).FIPSPUB 198 generalizes and standardizes the use of HMACs.[4]HMAC is used within theIPsec,[2]SSHandTLSprotocols and forJSON Web Tokens.
This definition is taken from RFC 2104: HMAC(K, m) = H( (K′ ⊕ opad) ∥ H( (K′ ⊕ ipad) ∥ m ) ),
where H is a cryptographic hash function, m is the message to be authenticated, K is the secret key, K′ is a block-sized key derived from K (by padding K to the right with zeros up to the block size, or, if K is longer than the block size, by hashing it first and then padding), ∥ denotes concatenation, ⊕ denotes bitwise exclusive or, opad is the block-sized outer padding consisting of the byte 0x5c repeated, and ipad is the block-sized inner padding consisting of the byte 0x36 repeated.
The followingpseudocodedemonstrates how HMAC may be implemented. The block size is 512 bits (64 bytes) when using one of the following hash functions: SHA-1, MD5, RIPEMD-128.[3]: §2
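A minimal runnable sketch in Python, assuming SHA-256 as the underlying hash (which also has a 64-byte block size) and cross-checked against the standard library's hmac module:

import hashlib
import hmac

BLOCK_SIZE = 64   # block size in bytes for MD5, SHA-1, and SHA-256

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    if len(key) > BLOCK_SIZE:
        key = hashlib.sha256(key).digest()      # keys longer than the block size are hashed
    key = key.ljust(BLOCK_SIZE, b"\x00")        # shorter keys are zero-padded to the right
    o_key_pad = bytes(b ^ 0x5C for b in key)    # outer padded key (opad)
    i_key_pad = bytes(b ^ 0x36 for b in key)    # inner padded key (ipad)
    inner = hashlib.sha256(i_key_pad + message).digest()
    return hashlib.sha256(o_key_pad + inner).digest()

assert hmac_sha256(b"key", b"message") == hmac.new(b"key", b"message", hashlib.sha256).digest()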
The design of the HMAC specification was motivated by the existence of attacks on more trivial mechanisms for combining a key with a hash function. For example, one might assume the same security that HMAC provides could be achieved with MAC =H(key∥message). However, this method suffers from a serious flaw: with most hash functions, it is easy to append data to the message without knowing the key and obtain another valid MAC ("length-extension attack"). The alternative, appending the key using MAC =H(message∥key), suffers from the problem that an attacker who can find a collision in the (unkeyed) hash function has a collision in the MAC (as two messages m1 and m2 yielding the same hash will provide the same start condition to the hash function before the appended key is hashed, hence the final hash will be the same). Using MAC =H(key∥message∥key) is better, but various security papers have suggested vulnerabilities with this approach, even when two different keys are used.[1][7][8]
No known extension attacks have been found against the current HMAC specification which is defined asH(key∥H(key∥message)) because the outer application of the hash function masks the intermediate result of the internal hash. The values ofipadandopadare not critical to the security of the algorithm, but were defined in such a way to have a largeHamming distancefrom each other and so the inner and outer keys will have fewer bits in common. The security reduction of HMAC does require them to be different in at least one bit.[citation needed]
TheKeccakhash function, that was selected byNISTas theSHA-3competition winner, doesn't need this nested approach and can be used to generate a MAC by simply prepending the key to the message, as it is not susceptible to length-extension attacks.[9]
The cryptographic strength of the HMAC depends upon the size of the secret key that is used and the security of the underlying hash function used. It has been proven that the security of an HMAC construction is directly related to security properties of the hash function used. The most common attack against HMACs is brute force to uncover the secret key. HMACs are substantially less affected by collisions than their underlying hashing algorithms alone.[2][10][11]In particular, Mihir Bellare proved that HMAC is apseudo-random function(PRF) under the sole assumption that the compression function is a PRF.[12]Therefore, HMAC-MD5 does not suffer from the same weaknesses that have been found in MD5.[13]
RFC 2104 requires that "keys longer thanBbytes are first hashed usingH" which leads to a confusing pseudo-collision: if the key is longer than the hash block size (e.g. 64 bytes for SHA-1), thenHMAC(k, m)is computed asHMAC(H(k), m). This property is sometimes raised as a possible weakness of HMAC in password-hashing scenarios: it has been demonstrated that it's possible to find a long ASCII string and a random value whose hash will be also an ASCII string, and both values will produce the same HMAC output.[14][15][16]
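This key-preprocessing behaviour can be observed directly with Python's standard hmac module; the key length and message below are arbitrary illustrative choices:

import hashlib
import hmac

# For keys longer than the 64-byte block size of SHA-1, HMAC first replaces
# the key with its hash, so HMAC(k, m) equals HMAC(H(k), m).
long_key = b"k" * 100
msg = b"message"
assert hmac.new(long_key, msg, hashlib.sha1).digest() == \
       hmac.new(hashlib.sha1(long_key).digest(), msg, hashlib.sha1).digest()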
In 2006,Jongsung Kim,Alex Biryukov,Bart Preneel, andSeokhie Hongshowed how to distinguish HMAC with reduced versions of MD5 and SHA-1 or full versions ofHAVAL,MD4, andSHA-0from arandom functionor HMAC with a random function. Differential distinguishers allow an attacker to devise a forgery attack on HMAC. Furthermore, differential and rectangle distinguishers can lead tosecond-preimage attacks. HMAC with the full version of MD4 can beforgedwith this knowledge. These attacks do not contradict the security proof of HMAC, but provide insight into HMAC based on existing cryptographic hash functions.[17]
In 2009, Xiaoyun Wang et al. presented a distinguishing attack on HMAC-MD5 without using related keys. It can distinguish an instantiation of HMAC with MD5 from an instantiation with a random function with 2^97 queries with probability 0.87.[18]
In 2011 an informational RFC 6151 was published to summarize security considerations inMD5and HMAC-MD5. For HMAC-MD5 the RFC summarizes that – although the security of theMD5hash function itself is severely compromised – the currently known"attacks on HMAC-MD5 do not seem to indicate a practical vulnerability when used as a message authentication code", but it also adds that"for a new protocol design, a ciphersuite with HMAC-MD5 should not be included".[13]
In May 2011, RFC 6234 was published detailing the abstract theory and source code for SHA-based HMACs.[19]
Here are some HMAC values, assuming 8-bit ASCII for the input and hexadecimal encoding for the output:
|
https://en.wikipedia.org/wiki/HMAC
|
Incryptography, amessage authentication code(MAC), sometimes known as anauthentication tag, is a short piece of information used forauthenticatingandintegrity-checking a message. In other words, it is used to confirm that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content.
The termmessage integrity code(MIC) is frequently substituted for the termMAC, especially in communications[1]to distinguish it from the use of the latter asmedia access control address(MAC address). However, some authors[2]use MIC to refer to amessage digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the termmessage integrity code(MIC), and instead usingchecksum,error detection code,hash,keyed hash,message authentication code, orprotected checksum.
Informally, a message authentication code system consists of three algorithms: a key generation algorithm that selects a key from the key space uniformly at random; a tag generation (signing) algorithm that efficiently returns a tag given the key and the message; and a verifying algorithm that efficiently checks the authenticity of the message given the key and the tag.
A secure message authentication code must resist attempts by an adversary toforge tags, for arbitrary, select, or all messages, including under conditions ofknown-orchosen-message. It should be computationally infeasible to compute a valid tag of the given message without knowledge of the key, even if for the worst case, we assume the adversary knows the tag of any message but the one in question.[3]
Formally, a message authentication code (MAC) system is a triple of efficient[4] algorithms (G, S, V) satisfying: G (key generator) returns a key k on input 1^n, where n is the security parameter; S (signing) outputs a tag t on the key k and the input string x; and V (verifying) outputs accepted or rejected on inputs the key k, the string x, and the tag t.
S and V must satisfy the following correctness condition: Pr[ k ← G(1^n) : V(k, x, S(k, x)) = accepted ] = 1 for every string x.
A MAC is unforgeable if for every efficient adversary A: Pr[ k ← G(1^n), (x, t) ← A^{S(k, ·)}(1^n) : x ∉ Query(A^{S(k, ·)}, 1^n) and V(k, x, t) = accepted ] < negl(n),
where A^{S(k, ·)} denotes that A has access to the oracle S(k, ·), and Query(A^{S(k, ·)}, 1^n) denotes the set of the queries on S made by A, which knows n. Clearly we require that any adversary cannot directly query the string x on S, since otherwise a valid tag can be easily obtained by that adversary.[6]
While MAC functions are similar tocryptographic hash functions, they possess different security requirements. To be considered secure, a MAC function must resistexistential forgeryunderchosen-message attacks. This means that even if an attacker has access to anoraclewhich possesses the secret key and generates MACs for messages of the attacker's choosing, the attacker cannot guess the MAC for other messages (which were not used to query the oracle) without performing infeasible amounts of computation.
MACs differ fromdigital signaturesas MAC values are both generated and verified using the same secret key. This implies that the sender and receiver of a message must agree on the same key before initiating communications, as is the case withsymmetric encryption. For the same reason, MACs do not provide the property ofnon-repudiationoffered by signatures specifically in the case of a network-wideshared secretkey: any user who can verify a MAC is also capable of generating MACs for other messages. In contrast, a digital signature is generated using the private key of a key pair, which is public-key cryptography.[4]Since this private key is only accessible to its holder, a digital signature proves that a document was signed by none other than that holder. Thus, digital signatures do offer non-repudiation. However, non-repudiation can be provided by systems that securely bind key usage information to the MAC key; the same key is in the possession of two people, but one has a copy of the key that can be used for MAC generation while the other has a copy of the key in ahardware security modulethat only permits MAC verification. This is commonly done in the finance industry.[citation needed]
While the primary goal of a MAC is to prevent forgery by adversaries without knowledge of the secret key, this is insufficient in certain scenarios. When an adversary is able to control the MAC key, stronger guarantees are needed, akin tocollision resistanceorpreimage securityin hash functions. For MACs, these concepts are known ascommitmentandcontext-discoverysecurity.[7]
MAC algorithms can be constructed from other cryptographic primitives, likecryptographic hash functions(as in the case ofHMAC) or fromblock cipheralgorithms (OMAC,CCM,GCM, andPMAC). However many of the fastest MAC algorithms, likeUMAC-VMACandPoly1305-AES, are constructed based onuniversal hashing.[8]
Intrinsically keyed hash algorithms such asSipHashare also by definition MACs; they can be even faster than universal-hashing based MACs.[9]
Additionally, the MAC algorithm can deliberately combine two or more cryptographic primitives, so as to maintain protection even if one of them is later found to be vulnerable. For instance, inTransport Layer Security(TLS) versions before 1.2, theinput datais split in halves that are each processed with a different hashing primitive (SHA-1andSHA-2) thenXORedtogether to output the MAC.
Universal hashingand in particularpairwise independenthash functions provide a secure message authentication code as long as the key is used at most once. This can be seen as theone-time padfor authentication.[10]
The simplest such pairwise independent hash function is defined by the random key,key= (a,b), and the MAC tag for a messagemis computed astag= (am+b) modp, wherepis prime.
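A small Python sketch of this one-time MAC follows, with an illustrative choice of prime and an integer-encoded message; the key must not be reused across messages.

import secrets

p = (1 << 127) - 1                        # a Mersenne prime, large enough for the messages below

def keygen():
    # key = (a, b), chosen uniformly at random; to be used for a single message only
    a = secrets.randbelow(p - 1) + 1
    b = secrets.randbelow(p)
    return a, b

def one_time_tag(key, m: int) -> int:
    a, b = key
    return (a * m + b) % p                # tag = (a*m + b) mod p

key = keygen()
t = one_time_tag(key, 42)
assert t == one_time_tag(key, 42)         # the verifier recomputes the tag with the shared key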
More generally,k-independent hashingfunctions provide a secure message authentication code as long as the key is used less thanktimes fork-ways independent hashing functions.
Message authentication codes and data origin authentication have been also discussed in the framework of quantum cryptography. By contrast to other cryptographic tasks, such as key distribution, for a rather broad class of quantum MACs it has been shown that quantum resources do not offer any advantage over unconditionally secure one-time classical MACs.[11]
Various standards exist that define MAC algorithms. These include:
ISO/IEC 9797-1 and -2 define generic models and algorithms that can be used with any block cipher or hash function, and a variety of different parameters. These models and parameters allow more specific algorithms to be defined by nominating the parameters. For example, the FIPS PUB 113 algorithm is functionally equivalent to ISO/IEC 9797-1 MAC algorithm 1 with padding method 1 and a block cipher algorithm of DES.
In this example,[20] the sender of a message runs it through a MAC algorithm to produce a MAC data tag. The message and the MAC tag are then sent to the receiver. The receiver in turn runs the message portion of the transmission through the same MAC algorithm using the same key, producing a second MAC data tag. The receiver then compares the first MAC tag received in the transmission to the second generated MAC tag. If they are identical, the receiver can safely assume that the message was not altered or tampered with during transmission (data integrity).
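A brief Python sketch of this flow, assuming HMAC-SHA256 as the MAC algorithm and a constant-time comparison on the receiving side (as noted below, replay protection requires additional measures such as sequence numbers):

import hashlib
import hmac

key = b"shared secret key"   # agreed upon beforehand by sender and receiver

def send(message: bytes):
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag                           # both travel over the channel

def receive(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)     # identical tags => message unaltered

msg, tag = send(b"transfer 100 units")
assert receive(msg, tag)
assert not receive(b"transfer 900 units", tag)    # tampered message is rejected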
However, to allow the receiver to be able to detectreplay attacks, the message itself must contain data that assures that this same message can only be sent once (e.g. time stamp,sequence numberor use of aone-time MAC). Otherwise an attacker could – without even understanding its content – record this message and play it back at a later time, producing the same result as the original sender.
|
https://en.wikipedia.org/wiki/Message_authentication_code
|
Moore's lawis the observation that the number oftransistorsin anintegrated circuit(IC) doubles about every two years. Moore's law is anobservationandprojectionof a historical trend. Rather than alaw of physics, it is anempirical relationship. It is anexperience-curve law, a type of law quantifying efficiency gains from experience in production.
The observation is named afterGordon Moore, the co-founder ofFairchild SemiconductorandInteland former CEO of the latter, who in 1965 noted that the number of components per integrated circuit had beendoubling every year,[a]and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, acompound annual growth rate(CAGR) of 41%. Moore's empirical evidence did not directly imply that the historical trend would continue, nevertheless, his prediction has held since 1975 and has since become known as alaw.
Moore's prediction has been used in thesemiconductor industryto guide long-term planning and to set targets forresearch and development(R&D). Advancements indigital electronics, such as the reduction inquality-adjusted pricesofmicroprocessors, the increase inmemory capacity(RAMandflash), the improvement ofsensors, and even the number and size ofpixelsindigital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change,productivity, and economic growth.
Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022,NvidiaCEOJensen Huangconsidered Moore's law dead,[2]while Intel CEOPat Gelsingerwas of the opposite view.[3]
In 1959,Douglas Engelbartstudied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude".[4][5][6]Engelbart presented his findings at the 1960International Solid-State Circuits Conference, where Moore was present in the audience.[7]
In 1965, Gordon Moore, who at the time was working as the director of research and development atFairchild Semiconductor, was asked to contribute to the thirty-fifth-anniversary issue ofElectronicsmagazine with a prediction on the future of the semiconductor components industry over the next ten years.[8]His response was a brief article entitled "Cramming more components onto integrated circuits".[1][9][b]Within his editorial, he speculated that by 1975 it would be possible to contain as many as65000components on a single quarter-square-inch (~1.6 cm2) semiconductor.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.[1]
Moore posited a log–linear relationship between device complexity (higher circuit density at reduced cost) and time.[12][13]In a 2015 interview, Moore noted of the 1965 article: "... I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years."[14]One historian of the law citesStigler's law of eponymy, to introduce the fact that the regular doubling of components was known to many working in the field.[13]
In 1974,Robert H. DennardatIBMrecognized the rapid MOSFET scaling technology and formulated what became known asDennard scaling, which describes that as MOS transistors get smaller, theirpower densitystays constant such that the power use remains in proportion with area.[15][16]Evidence from the semiconductor industry shows that this inverse relationship between power density andareal densitybroke down in the mid-2000s.[17]
At the 1975IEEE International Electron Devices Meeting, Moore revised his forecast rate,[18][19]predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years.[19][20][21]He outlined several contributing factors for this exponential behavior:[12][13]
Shortly after 1975,CaltechprofessorCarver Meadpopularized the termMoore's law.[22][23]Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation ofMurphy's law. Everything gets better and better."[24]The observation was even seen as aself-fulfilling prophecy.[25][26]
The doubling period is often misquoted as 18 months because of a separate prediction by Moore's colleague, Intel executiveDavid House.[27]In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months,[28]with no increase in power consumption.[29]Mathematically, Moore's law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements.[30]As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months. Also due to Dennard scaling, this increased performance would not be accompanied by increased power, i.e., the energy-efficiency ofsilicon-based computer chips roughly doubles every 18 months. Dennard scaling ended in the 2000s.[17]Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, for technologies such as vacuum tubes.
Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law.[17]Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".[31][32][33]The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling.[25][34][17]Nevertheless, as of 2019, leading semiconductor manufacturersTSMCandSamsung Electronicsclaimed to keep pace with Moore's law[35][36][37][38][39][40]with10,7, and5 nmnodes in mass production.[35][36][41][42][43]
As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. The cost of the tools, principallyextreme ultraviolet lithography(EUVL), used to manufacture chips doubles every 4 years.[44]Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[45]This led to the formulation ofMoore's second law, also called Rock's law (named afterArthur Rock), which is that thecapital costof asemiconductor fabrication plantalso increases exponentially over time.[46][47]
Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit andsemiconductor device fabricationtechnology, allowing transistor counts to grow by more than seven orders of magnitude in less than five decades.
Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips.[71]
One of the key technical challenges of engineering futurenanoscaletransistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form ofmulti-gate MOSFETs, with theFinFETbeing the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, thegate-all-aroundMOSFET (GAAFET) structure has even better gate control.
Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.[17]Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two."[103]Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the22 nmfeature width around 2012, and continuing at14 nm.[104]Pat Gelsinger, Intel CEO, stated at the end of 2023 that "we're no longer in the golden era of Moore's Law, it's much, much harder now, so we're probably doubling effectively closer to every three years now, so we've definitely seen a slowing."[105]
The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals and limited options for channel material. Other approaches are being investigated, which do not rely on physical scaling. These include the spin state of electronspintronics,tunnel junctions, and advanced confinement of channel materials via nano-wire geometry.[106]Spin-based logic and memory options are being developed actively in labs.[107][108]
The vast majority of current transistors on ICs are composed principally ofdopedsilicon and its alloys. As silicon is fabricated into single nanometer transistors,short-channel effectsadversely changes desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes in the fabrication of small nanometer transistors.
One proposed material isindium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of intrinsic characteristics ofIII–V compound semiconductors, quantum well andtunneleffect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.
Biological computingresearch shows that biological material has superior information density and energy efficiency compared to silicon-based computing.[116]
Various forms ofgrapheneare being studied forgraphene electronics, e.g.graphene nanoribbontransistorshave shown promise since its appearance in publications in 2008. (Bulk graphene has aband gapof zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width of 10 nm has a desirable bandgap energy of 0.4 eV.[117][118]) More research will need to be performed, however, on sub-50 nm graphene layers, as its resistivity value increases and thus electron mobility decreases.[117]
In April 2005,Gordon Moorestated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors eventually would reach the limits of miniaturization atatomiclevels:
In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.[119]
In 2016 theInternational Technology Roadmap for Semiconductors, after using Moore's Law to drive the industry since 1998, produced its final roadmap. It no longer centered its research and development plan on Moore's law. Instead, it outlined what might be called the More than Moore strategy in which the needs of applications drive chip development, rather than a focus on semiconductor scaling. Application drivers range from smartphones to AI to data centers.[120]
IEEE began a road-mapping initiative in 2016, Rebooting Computing, named theInternational Roadmap for Devices and Systems(IRDS).[121]
Some forecasters, including Gordon Moore,[122]predict that Moore's law will end by around 2025.[123][120][124]Although Moore's Law will reach a physical limit, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[125][126]NvidiaCEOJensen Huangdeclared Moore's law dead in 2022;[2]several days later, Intel CEO Pat Gelsinger countered with the opposite claim.[3]
Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries.[127]The primary driving force of economic growth is the growth ofproductivity,[128]which Moore's law factors into. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities".[129]The reverse could and did occur around the late-1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation."[130]Moore's law describes a driving force of technological and social change, productivity, and economic growth.[131][132][128]
An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[133][134][135]which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013.[136]As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."[137]
The primary negative implication of Moore's law is thatobsolescencepushes society up against theLimits to Growth. As technologies continue to rapidly improve, they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations.[138]
Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor",[129]at minimum cost.
Transistors per integrated circuit– The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The graph at the top of this article shows this trend holds true today. As of 2025[update], the commercially available processor possessing one of the highest numbers of transistors is aGB202 graphics processorwith more than 92.2 billion transistors.[139]
Density at minimum cost per transistor– This is the formulation given in Moore's 1965 paper.[1]It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[140]
As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances inphotolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
Dennard scaling– This posits that power usage would decrease in proportion to area (both voltage and current being proportional to length) of transistors. Combined with Moore's law,performance per wattwould grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling transistor dimensions would be scaled by 30% (0.7×) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7×) and therefore increase operating frequency by about 40% (1.4×). Finally, to keep electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4× frequency) by 50%.[c]Therefore, in every technology generation transistor density would double, circuit becomes 40% faster, while power consumption (with twice the number of transistors) stays the same.[141]Dennard scaling ended in 2005–2010, due to leakage currents.[17]
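These percentages all follow from the single stated dimension factor; a short arithmetic check in Python (assuming, as in classical constant-field scaling, that capacitance scales with linear dimension):

k = 0.7                                  # linear dimension factor per generation (stated above)

area        = k ** 2                     # ~0.49 -> area reduced by ~50%
delay       = k                          # ~0.70 -> delay reduced by ~30%
frequency   = 1 / delay                  # ~1.43 -> operating frequency up ~40%
capacitance = k                          # gate capacitance scales with dimension (assumption)
voltage     = k                          # voltage reduced by ~30% to keep the field constant
energy      = capacitance * voltage**2   # ~0.34 -> switching energy reduced by ~65%
power       = energy * frequency         # ~0.49 -> power per transistor reduced by ~50%

# With twice as many transistors per unit area, total power per unit area stays roughly constant.
print(round(area, 2), round(energy, 2), round(power, 2))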
The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued after that, it has not yielded proportional dividends in improved performance.[15][142]The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges, and also causes the chip to heat up, which creates a threat ofthermal runawayand therefore, further increases energy costs.[15][142][17]
The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[143][144]In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[145]The rate of performance improvement for single-core microprocessors has slowed significantly.[146]Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just seven percent per year in 2011–2018.[146]
Quality adjusted price of IT equipment– Thepriceof information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[147][148]The pace accelerated, however, to 23% per year in 1995–1999 triggered by faster IT innovation,[130]and later, slowed to 2% per year in 2010–2013.[147][149]
Whilequality-adjustedmicroprocessor price improvement continues,[150]the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[151][152]Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[153]
The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[151][154][155]Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count, "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on asemi-log plotapproximates a straight line. I hesitate to review its origins and by doing so restrict its definition."[129]
Hard disk drive areal density– A similar prediction (sometimes calledKryder's law) was made in 2005 forhard disk driveareal density.[156]The prediction was later viewed as over-optimistic. Several decades of rapid progress in areal density slowed around 2010, from 30 to 100% per year to 10–15% per year, because of noise related tosmaller grain sizeof the disk media, thermal stability, and writability using available magnetic fields.[157][158]
Fiber-optic capacity – The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law; this trend is known as Keck's law, in honor of Donald Keck.[159]
Network capacity– According to Gerald Butters,[160][161]the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[162]a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[163]Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability ofwavelength-division multiplexing(sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking anddense wavelength-division multiplexing(DWDM) is rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in thedot-com bubble.Nielsen's Lawsays that the bandwidth available to users increases by 50% annually.[164]
Pixels per dollar– Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price,LCDandLEDscreens, and resolution.[165][166][167][168]
The great Moore's law compensator (TGMLC), also known asWirth's law– generally is referred to assoftware bloatand is the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article inInfoWorld, Randall C. Kennedy,[169]formerly of Intel, introduces this term using successive versions ofMicrosoft Officebetween the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.
Library expansion– was calculated in 1945 byFremont Riderto double in capacity every 16 years, if sufficient space were made available.[170]He advocated replacing bulky, decaying printed works with miniaturizedmicroformanalog photographs, which could be duplicated on-demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called theInformation Age.
Carlson curve– is a term coined byThe Economist[171]to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson.[172]Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[173]Carlson Curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.
Eroom's law– is a pharmaceutical drug development observation that was deliberately written as Moore's Law spelled backward in order to contrast it with the exponential advancements of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.
Experience curve effectssays that each doubling of the cumulative production of virtually any product or service is accompanied by an approximate constant percentage reduction in the unit cost. The acknowledged first documented qualitative description of this dates from 1885.[174][175]A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.[176]
Edholm's law– Phil Edholm observed that thebandwidthoftelecommunication networks(including the Internet) is doubling every 18 months.[177]The bandwidths of onlinecommunication networkshave risen frombits per secondtoterabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enabled Moore's law, as telecommunications networks are built from MOSFETs.[178]
Haitz's lawpredicts that the brightness of LEDs increases as their manufacturing cost goes down.
Swanson's lawis the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
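The experience-curve and Swanson's-law observations above can be expressed as a single power-law relation between cumulative volume and unit cost. The Python sketch below is an illustrative approximation only (the function name and the starting cost of 1.00 arbitrary units are hypothetical, not from the cited sources):

```python
def unit_cost_after_doublings(initial_cost: float, doublings: float,
                              reduction_per_doubling: float = 0.20) -> float:
    """Experience curve: each doubling of cumulative volume cuts unit cost
    by a fixed fraction (Swanson's law uses roughly 20% for PV modules)."""
    return initial_cost * (1.0 - reduction_per_doubling) ** doublings

# After five doublings of cumulative shipped volume, a module that started
# at 1.00 (in arbitrary cost units per watt) would cost about 0.33.
print(unit_cost_after_doublings(1.00, 5))  # 0.32768
```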
|
https://en.wikipedia.org/wiki/Moore%27s_law#Implications
|
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is theAnglicisedversion of the ScandinavianBlåtand/Blåtann(or inOld Norseblátǫnn). It was theepithetof King Harald Bluetooth, who united the disparate
Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logois abind runemerging theYounger Futharkrunes(ᚼ,Hagall) and(ᛒ,Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" atCOMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson modelT39that actually made it to store shelves in June 2001. However, Ericsson released the R520m in the first quarter of 2001,[23]making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
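The channel plan described above can be expressed as a simple formula: BR/EDR channel k (0–78) is centered at 2402 + k MHz, while Bluetooth Low Energy RF channel k (0–39) is centered at 2402 + 2k MHz. The following Python sketch is an illustration of that arithmetic only (the function names are hypothetical):

```python
def br_edr_channel_mhz(k: int) -> int:
    """Center frequency of BR/EDR channel k (0-78) with 1 MHz spacing."""
    assert 0 <= k <= 78
    return 2402 + k

def ble_channel_mhz(k: int) -> int:
    """Center frequency of Bluetooth Low Energy RF channel k (0-39) with 2 MHz spacing."""
    assert 0 <= k <= 39
    return 2402 + 2 * k

print(br_edr_channel_mhz(0), br_edr_channel_mhz(78))  # 2402, 2480
print(ble_channel_mhz(0), ble_channel_mhz(39))        # 2402, 2480
```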
Originally,Gaussian frequency-shift keying(GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK(differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneousbit rateof 1Mbit/sis possible. The termEnhanced Data Rate(EDR) is used to describe π/4-DPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively.
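The basic-rate and EDR figures above follow from the number of bits carried per modulation symbol at the nominal rate of one megasymbol per second: GFSK carries 1 bit per symbol, π/4-DQPSK carries 2, and 8-DPSK carries 3. A minimal Python illustration (the names below are hypothetical):

```python
SYMBOL_RATE_MSPS = 1.0  # BR/EDR symbol rate in megasymbols per second

BITS_PER_SYMBOL = {
    "GFSK (BR)": 1,         # basic rate
    "pi/4-DQPSK (EDR2)": 2,
    "8-DPSK (EDR3)": 3,
}

for scheme, bits in BITS_PER_SYMBOL.items():
    print(f"{scheme}: {SYMBOL_RATE_MSPS * bits:.0f} Mbit/s instantaneous bit rate")
```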
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
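A small Python sketch of the slot arithmetic just described, covering the simple single-slot case in which the master transmits in even slots and the slave in odd slots; it is an illustration only, and the function names are hypothetical:

```python
SLOT_US = 625  # one Bluetooth slot, i.e. two 312.5 microsecond master clock ticks

def transmitter_for_slot(slot_index: int) -> str:
    """Single-slot case: the master transmits in even-numbered slots,
    the slave in odd-numbered slots."""
    return "master" if slot_index % 2 == 0 else "slave"

def packet_airtime_us(slots: int) -> int:
    """Airtime occupied by a 1-, 3-, or 5-slot packet."""
    assert slots in (1, 3, 5)
    return slots * SLOT_US

print([transmitter_for_slot(s) for s in range(4)])  # master, slave, master, slave
print(packet_airtime_us(5))                         # 3125 microseconds
```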
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28]whichuses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class number (and higher output power) giving a larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and on the air and obstacles in between. The primary attributes affecting range are the data rate, the protocol (Bluetooth Classic or Bluetooth Low Energy), the transmission power, the receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones,speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definitionheadsets,modems,hearing aids[53]and even watches.[54]Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55]Bluetooth devices can advertise all of the services they provide.[56]This makes using services easier, because more of the security,network addressand permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While somedesktop computersand most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor,IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
ForMicrosoftplatforms,Windows XP Service Pack 2and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58]Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59]Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58]Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58]The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP,DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58]Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Appleproducts have worked with Bluetooth sinceMac OSX v10.2, which was released in 2002.[60]
Linuxhas two popularBluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed byQualcomm.[61]Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed byBroadcom.[62]There is also the Affix stack, developed byNokia. It was once popular, but has not been updated since 2005.[63]
FreeBSDhas included Bluetooth since its v5.0 release, implemented throughnetgraph.[64][65]
NetBSDhas included Bluetooth since its v4.0 release.[66][67]Its Bluetooth stack was ported toOpenBSDas well, however OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSDhas had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71]Anetgraph-based implementation fromFreeBSDhas also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by theBluetooth Special Interest Group(SIG) and formally announced on 20 May 1998.[74]In 2014 it had a membership of over 30,000 companies worldwide.[75]It was established byEricsson,IBM,Intel,NokiaandToshiba, and later joined by many other companies.
All versions of the Bluetooth standards arebackward-compatiblewith all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for fasterdata transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s.[79]EDR uses a combination ofGFSKandphase-shift keyingmodulation (PSK) with two variants, π/4-DQPSKand 8-DPSK.[81]EDR can provide a lower power consumption through a reducedduty cycle.
The specification is published asBluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 issecure simple pairing(SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, includingextended inquiry response(EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81]was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated802.11link.
The main new feature isAMP(Alternative MAC/PHY), the addition of802.11as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84]or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended forUWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, theWiMedia Allianceannounced it was entering into technology transfer agreements for the WiMediaUltra-wideband(UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG),Wireless USBPromoter Group and theUSB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, theBluetooth Special Interest Groupsuspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of formerWiMediamembers had not and would not sign up to the necessary agreements for theIPtransfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted as of 30 June 2010[update]. It includesClassic Bluetooth,Bluetooth high speedandBluetooth Low Energy(BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95]is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by acoin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier designs.[96]The provisional namesWibreeandBluetooth ULP(Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared toClassic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining asimilar communication range. In terms of lengthening the battery life of Bluetooth devices,BLErepresents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services withAESEncryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108]it introduces features for theInternet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111]Its new features are mainly focused on newInternet of Thingstechnology. Sony was the first to announce Bluetooth 5.0 support with itsXperia XZ Premiumin February 2017 during the Mobile World Congress 2017.[112]The SamsungGalaxy S8launched with Bluetooth 5 support in April 2017. In September 2017, theiPhone 8, 8 Plus andiPhone Xlaunched with Bluetooth 5 support as well.Applealso integrated Bluetooth 5 in its newHomePodoffering released on 9 February 2018.[113]Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114]the change was made for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, forBLE, options that can double the data rate (2Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation[115]of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
Logically, the hardware that makes up a Bluetooth device consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g.SBC (codec)) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions from the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which communicates with other devices through the LMP protocol.
A Bluetooth device is ashort-rangewirelessdevice. Bluetooth devices arefabricatedonRF CMOSintegrated circuit(RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135]Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols:HCIand RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
It provides segmentation and reassembly of on-air packets.
InBasicmode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the defaultMTU, and 48 bytes as the minimum mandatory supported MTU.
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
TheService Discovery Protocol(SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine whichBluetooth profilesthe headset can use (Headset Profile, Hands Free Profile (HFP),Advanced Audio Distribution Profile (A2DP)etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by aUniversally unique identifier(UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
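The short-form UUIDs mentioned above are expanded into full 128-bit UUIDs by combining them with the Bluetooth Base UUID (00000000-0000-1000-8000-00805F9B34FB): full UUID = short UUID × 2^96 + Base UUID. A Python illustration (the function name is hypothetical):

```python
import uuid

# The Bluetooth Base UUID used to expand 16-bit service identifiers.
BLUETOOTH_BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_short_uuid(short: int) -> uuid.UUID:
    """Expand a 16-bit SDP service UUID into its full 128-bit form:
    128-bit value = short * 2**96 + Base UUID."""
    return uuid.UUID(int=BLUETOOTH_BASE_UUID.int + (short << 96))

# 0x110B is the assigned 16-bit UUID for the Audio Sink service (A2DP).
print(expand_short_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```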
Radio Frequency Communications(RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulatesEIA-232(formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
TheBluetooth Network Encapsulation Protocol(BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function toSNAPin Wireless LAN.
TheAudio/Video Control Transport Protocol(AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
TheAudio/Video Distribution Transport Protocol(AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over anL2CAPchannel intended for video distribution profile in the Bluetooth transmission.
TheTelephony Control Protocol– Binary(TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected byerror correction, either 1/3 rateforward error correction(FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged byautomatic repeat request(ARQ).
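The 1/3-rate FEC mentioned above is, in essence, a simple repetition code: each bit is transmitted three times and decoded by majority vote, so any single bit error within a triple is corrected. A minimal Python sketch, illustrative only (the function names are hypothetical):

```python
from collections import Counter

def fec13_encode(bits: list[int]) -> list[int]:
    """1/3-rate FEC as a repetition code: each bit is sent three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec13_decode(coded: list[int]) -> list[int]:
    """Majority-vote decoding: any single bit error per triple is corrected."""
    return [Counter(coded[i:i + 3]).most_common(1)[0][0]
            for i in range(0, len(coded), 3)]

payload = [1, 0, 1, 1]
coded = fec13_encode(payload)
coded[1] ^= 1                      # flip one transmitted bit to simulate noise
assert fec13_decode(coded) == payload
```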
Any Bluetooth device indiscoverable modetransmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has aunique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range namedT610(seeBluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating ashared secretknown as alink key. If both devices store the same link key, they are said to bepairedorbonded. A device that wants to communicate only with a bonded device cancryptographicallyauthenticatethe identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticatedACLlink between the devices may beencryptedto protect exchanged data againsteavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simpleXOR attacksto retrieve the encryption key.
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implementsconfidentiality,authenticationandkeyderivation with custom algorithms based on theSAFER+block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136]TheE0stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.[137]
In September 2008, theNational Institute of Standards and Technology(NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See thepairing mechanismssection for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form ofDoSis also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel fromBell Laboratoriesdiscovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141]
In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142]In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at theCeBITfairgrounds, showing the importance of the problem to the world. A new attack calledBlueBugwas used for this experiment.[143]
In 2004 the first purportedvirususing Bluetooth to spread itself among mobile phones appeared on theSymbian OS.[144]The virus was first described byKaspersky Laband requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology orSymbian OSsince the virus has never spread outside of this system.
In August 2004, a world-record-setting experiment (see alsoBluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145]This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobilemalwareworm known as Lasco surfaced. The worm began targeting mobile phones usingSymbian OS(Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other.SISfiles on the device, allowing replication to another device through the use of removable media (Secure Digital,CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005,University of Cambridgesecurity researchers published results of their actual implementation of passive attacks against thePIN-basedpairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police inCambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers fromSecure NetworkandF-Securepublished a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, includingMicrosoft Windows,Linux, AppleiOS, and GoogleAndroid. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann andEli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at theSingapore University of Technology and Design, Helmholtz Center for Information Security, andUniversity of Oxforddiscovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160]Google released anAndroidsecurity patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers fromEurecomrevealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses theradio frequencyspectrum in the 2.402GHz to 2.480GHz range,[164]which isnon-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included byIARCin the possiblecarcinogenlist. Maximum power output from a Bluetooth radio is 100mWfor Class1, 2.5mW for Class2, and 1mW for Class3 devices. Even the maximum power output of Class1 is a lower level than the lowest-powered mobile phones.[165]UMTSandW-CDMAoutput 250mW,GSM1800/1900outputs 1000mW, andGSM850/900outputs 2000mW.
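Transmit powers such as these are often quoted in dBm, where dBm = 10·log10(P / 1 mW); 100 mW, 2.5 mW and 1 mW correspond to 20 dBm, about 4 dBm and 0 dBm respectively. A Python illustration (the function name is hypothetical):

```python
import math

def mw_to_dbm(power_mw: float) -> float:
    """Convert a transmit power in milliwatts to dBm: 10 * log10(P / 1 mW)."""
    return 10.0 * math.log10(power_mw)

# Maximum output power of the three Bluetooth radio classes mentioned above.
for cls, mw in {"Class 1": 100.0, "Class 2": 2.5, "Class 3": 1.0}.items():
    print(f"{cls}: {mw} mW = {mw_to_dbm(mw):.0f} dBm")
```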
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Overview
|
Data communication, includingdata transmissionanddata reception, is the transfer ofdata,transmittedand received over apoint-to-pointorpoint-to-multipointcommunication channel. Examples of such channels arecopper wires,optical fibers,wirelesscommunication usingradio spectrum,storage mediaandcomputer buses. The data are represented as anelectromagnetic signal, such as anelectrical voltage,radiowave,microwave, orinfraredsignal.
Analog transmissionis a method of conveying voice, data, image, signal or video information using a continuous signal that varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of aline code(basebandtransmission), or by a limited set of continuously varying waveforms (passbandtransmission), using a digitalmodulationmethod. The passband modulation and correspondingdemodulationis carried out bymodemequipment.
Digital communications, includingdigital transmissionanddigital reception, is the transfer of either adigitizedanalog signal or aborn-digitalbitstream.[1]According to the most common definition, both baseband and passband bit-stream components are considered part of adigital signal; an alternative definition considers only the baseband signal as digital, and passband transmission of digital data as a form ofdigital-to-analog conversion.
Courses and textbooks in the field ofdata transmission[1]as well asdigital transmission[2][3]anddigital communications[4][5]have similar content.
Digital transmission or data transmission traditionally belongs totelecommunicationsandelectrical engineering. Basic principles of data transmission may also be covered within thecomputer scienceorcomputer engineeringtopic of data communications, which also includescomputer networkingapplications andcommunication protocols, for example routing, switching andinter-process communication. Although theTransmission Control Protocol(TCP) involves transmission, TCP and other transport layer protocols are covered in computer networking butnotdiscussed in a textbook or course about data transmission.
In most textbooks, the termanalog transmissiononly refers to the transmission of an analog message signal (without digitization) by means of an analog signal, either as a non-modulated baseband signal or as a passband signal using ananalog modulation methodsuch asAMorFM. It may also include analog-over-analogpulse modulatedbaseband signals such as pulse-width modulation. In a few books within the computer networking tradition,analog transmissionalso refers to passband transmission of bit-streams usingdigital modulationmethods such asFSK,PSKandASK.[1]
The theoretical aspects of data transmission are covered byinformation theoryandcoding theory.
Courses and textbooks in the field of data transmission typically deal with the followingOSI modelprotocol layers and topics:
It is also common to deal with the cross-layer design of those three layers.[7]
Data (mainly but not exclusivelyinformational) has been sent via non-electronic (e.g.optical,acoustic,mechanical) means since the advent ofcommunication.Analog signaldata has been sent electronically since theadvent of the telephone. However, the first electromagnetic data transmission applications in modern times wereelectrical telegraphy(1809) andteletypewriters(1906), both of which usedigital signals. The fundamental theoretical work in data transmission and information theory byHarry Nyquist,Ralph Hartley,Claude Shannonand others during the early 20th century was done with these applications in mind.
In the early 1960s,Paul Baraninventeddistributed adaptive message block switchingfor digital communication of voice messages using switches that were low-cost electronics.[8][9]Donald Daviesinvented and implemented modern data communication during 1965-7, includingpacket switching, high-speedrouters,communication protocols, hierarchicalcomputer networksand the essence of theend-to-end principle.[10][11][12][13]Baran's work did not include routers with software switches and communication protocols, nor the idea that users, rather than the network itself, would provide thereliability.[14][15][16]Both were seminal contributions that influenced the development ofcomputer networks.[17][18]
Data transmission is utilized incomputersincomputer busesand for communication withperipheral equipmentviaparallel portsandserial portssuch asRS-232(1969),FireWire(1995) andUSB(1996). The principles of data transmission are also utilized in storage media forerror detection and correctionsince 1951. The first practical method to overcome the problem of receiving data accurately by the receiver using digital code was theBarker codeinvented byRonald Hugh Barkerin 1952 and published in 1953.[19]Data transmission is utilized incomputer networkingequipment such asmodems(1940),local area network(LAN) adapters (1964),repeaters,repeater hubs,microwave links,wireless network access points(1997), etc.
In telephone networks, digital communication is utilized for transferring many phone calls over the same copper cable or fiber cable by means ofpulse-code modulation(PCM) in combination withtime-division multiplexing(TDM) (1962).Telephone exchangeshave become digital and software controlled, facilitating many value-added services. For example, the firstAXE telephone exchangewas presented in 1976. Digital communication to the end user usingIntegrated Services Digital Network(ISDN) services became available in the late 1980s. Since the end of the 1990s, broadband access techniques such asADSL,Cable modems,fiber-to-the-building(FTTB) andfiber-to-the-home(FTTH) have become widespread to small offices and homes. The current tendency is to replace traditional telecommunication services withpacket mode communicationsuch asIP telephonyandIPTV.
Transmitting analog signals digitally allows for greatersignal processingcapability. The ability to process a communications signal means that errors caused by random processes can be detected and corrected. Digital signals can also besampledinstead of continuously monitored. Themultiplexingof multiple digital signals is much simpler compared to the multiplexing of analog signals. Because of all these advantages, because of the vast demand to transmit computer data and the ability of digital communications to do so, and because recent advances inwidebandcommunication channelsandsolid-state electronicshave allowed engineers to realize these advantages fully, digital communications have grown quickly.
The digital revolution has also resulted in many digital telecommunication applications where the principles of data transmission are applied. Examples includesecond-generation(1991) and latercellular telephony,video conferencing,digital TV(1998),digital radio(1999), andtelemetry.
Data transmission, digital transmission or digital communications is the transfer of data over a point-to-point or point-to-multipoint communication channel. Examples of such channels include copper wires, optical fibers, wireless communication channels, storage media and computer buses. The data are represented as anelectromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared light.
While analog transmission is the transfer of a continuously varying analog signal over an analog channel, digital communication is the transfer of discrete messages over a digital or an analog channel. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying wave forms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation (also known as detection) is carried out by modem equipment. According to the most common definition of a digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition only considers the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion.[citation needed]
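As an aside on baseband transmission, a line code can be made concrete with a small sketch. The fragment below, a minimal illustration rather than a reference implementation, encodes a bit-stream with Manchester coding (the scheme used by classic 10 Mbit/s Ethernet); the 1 = low-to-high convention follows the IEEE 802.3 variant, and the list-of-levels signal model is purely an assumption for illustration.

```python
# Toy Manchester line coder: each data bit becomes two half-bit levels with a
# mid-bit transition (1 = low-to-high, 0 = high-to-low, IEEE 802.3 style).
# The list-of-levels signal model is purely illustrative.

def manchester_encode(bits):
    signal = []
    for bit in bits:
        signal += [0, 1] if bit else [1, 0]   # two half-bit levels per data bit
    return signal

def manchester_decode(signal):
    halves = zip(signal[0::2], signal[1::2])  # regroup into half-bit pairs
    return [1 if first == 0 else 0 for first, _ in halves]

bits = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(bits)) == bits
```

Because every bit produces a mid-bit transition, the receiver can recover the clock from the signal itself, which is one reason such codes are used for baseband transmission.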
Data transmitted may be digital messages originating from a data source, for example a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-digital conversion and data compression) schemes. This source coding and decoding is carried out by codec equipment.
In telecommunications,serial transmissionis the sequential transmission ofsignal elementsof a group representing acharacteror other entity ofdata. Digital serial transmissions are bits sent over a single wire, frequency or optical path sequentially. Because it requires lesssignal processingand presents fewer chances for error than parallel transmission, thetransfer rateof each individual path may be faster. It can also be used over longer distances, and a check digit orparity bitcan easily be sent along with the data.
Parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used that can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is typically used internally within the computer, for example, the internal buses, and sometimes externally for such things as printers.Timing skewcan be a significant issue in these systems because the wires in parallel data transmission unavoidably have slightly different properties, so some bits may arrive before others, which may corrupt the message. This issue tends to worsen with distance, making parallel data transmission less reliable over long distances.
Some communications channel types include:
Asynchronous serial communicationuses start and stop bits to signify the beginning and end of transmission.[20]This method of transmission is used when data are sent intermittently as opposed to in a solid stream.
Synchronous transmissionsynchronizes transmission speeds at both the receiving and sending end of the transmission usingclock signals. The clock may be a separate signal orembedded in the data. A continual stream of data is then sent between the two nodes. Because no start and stop bits are needed, the data transfer rate can be higher.
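The contrast between the two schemes can be sketched in code. The following fragment, a toy model rather than a real UART, frames one byte for asynchronous transmission with a start bit, eight data bits, an even parity bit and a stop bit; the exact format and helper names are assumptions for illustration.

```python
# Toy sketch of asynchronous serial framing: start bit, 8 data bits (LSB first),
# even parity, stop bit.

def frame_byte(value):
    """Return the line-level bit sequence for one 8-bit value."""
    data_bits = [(value >> i) & 1 for i in range(8)]   # least significant bit sent first
    parity = sum(data_bits) % 2                        # even parity: total number of ones is even
    return [0] + data_bits + [parity] + [1]            # start bit (0), data, parity, stop bit (1)

def check_frame(frame):
    """Recover the byte if framing and parity are consistent, otherwise return None."""
    start, data_bits, parity, stop = frame[0], frame[1:9], frame[9], frame[10]
    if start != 0 or stop != 1 or sum(data_bits) % 2 != parity:
        return None                                    # framing or parity error detected
    return sum(bit << i for i, bit in enumerate(data_bits))

frame = frame_byte(0x41)                               # frame the byte for 'A'
assert check_frame(frame) == 0x41
```

A synchronous link would omit the start and stop bits and instead rely on a shared or recovered clock, which is why its data transfer rate can be higher.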
|
https://en.wikipedia.org/wiki/Data_transmission
|
TheEnigma machineis acipherdevice developed and used in the early- to mid-20th century to protectcommercial, diplomatic, and military communication. It was employed extensively byNazi GermanyduringWorld War II, in all branches of theGerman military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.[1]
The Enigma has an electromechanicalrotor mechanismthat scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. Ifplaintextis entered, the illuminated letters are theciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.
The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message.
Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts,cryptanalysis of the EnigmaenabledPolandto first crack the machine as early as December 1932 and to read messages prior to and into the war. Poland's sharing of their achievements enabled theAlliesto exploit Enigma-enciphered messages as a major source of intelligence.[2]Many commentators say the flow ofUltracommunications intelligencefrom the decrypting of Enigma,Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.[3]
The Enigma machine was invented by German engineerArthur Scherbiusat the end ofWorld War I.[4]The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand nameEnigmain 1923, initially targeted at commercial markets.[5]Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notablyNazi Germanybefore and duringWorld War II.[6]
Several Enigma models were produced,[7]but theGerman militarymodels, having aplugboard, were the most complex. Japanese and Italian models were also in use.[8]With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the nameEnigmabecame widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known asblitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.
Hans-Thilo Schmidtwas aGermanwho spied for theFrench, obtaining access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material toPoland. Around December 1932,Marian Rejewski, a Polish mathematician andcryptologistat thePolish Cipher Bureau, used the theory of permutations,[9]and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine.[10]Rejewski used the French supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologistsJerzy RóżyckiandHenryk Zygalski, both of whom had been recruited with Rejewski fromPoznań University, which had been selected for its students' knowledge of the German language, since that area was held byGermanyprior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.[11]
Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built acyclometer(invented by Rejewski) to help make a catalogue with 100,000 entries, invented and producedZygalski sheets, and built the electromechanical cryptologicbomba(invented by Rejewski) to search for rotor settings. In 1938 the Poles had sixbomby(plural ofbomba), but when that year the Germans added two more rotors, ten times as manybombywould have been needed to read the traffic.[12]
On 26 and 27 July 1939,[13]inPyry, just south ofWarsaw, the Poles initiated French and Britishmilitary intelligencerepresentatives into the PolishEnigma-decryption techniquesand equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).[14]
In September 1939, British Military Mission 4, which includedColin GubbinsandVera Atkins, went to Poland, intending to evacuate cipher-breakersMarian Rejewski,Jerzy Różycki, andHenryk Zygalskifrom the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with theBritish, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.[15]
Gordon Welchman, who became head ofHut 6at Bletchley Park, wrote: "Hut 6Ultrawould never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort atBletchley Park, where Welchman worked.[16]
During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to theAlliedwar effort.[a]
Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.[17][18]
TheAbwehrused different versions of Enigma machines. In November 1942, duringOperation Torch, a machine was captured which had no plugboard and the three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, plus a plate on the left acted as a fourth rotor.[19]
The Abwehr code had been broken on 8 December 1941 byDilly Knox. Agents sent messages to the Abwehr in a simple code which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled theDouble-Cross Systemto operate.[19]From October 1944, the German Abwehr used theSchlüsselgerät 41in limited quantities.[20]
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of akeyboard; a set of rotating disks calledrotorsarranged adjacently along aspindle; one of various stepping components to turn at least one rotor with each key press, and a series of lamps, one for each letter. These design features are the reason that the Enigma machine was originally referred to as the rotor-based cipher machine during its intellectual inception in 1915.[21]
An electrical pathway is a route for current to travel. By manipulating this phenomenon the Enigma machine was able to scramble messages.[21]The mechanical parts act by forming a varyingelectrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one displaylamp, which shows the output letter. For example, when encrypting a message startingANX..., the operator would first press theAkey, and theZlamp might light, soZwould be the first letter of theciphertext. The operator would next pressN, and thenXin the same fashion, and so on.
Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (KriegsmarineM4 andAbwehrvariants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.[22]
The repeated changes of electrical path through an Enigma scrambler implement apolyalphabetic substitution cipherthat provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letterAencrypts differently with consecutive key presses, first toG, and then toC. This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.
The rotors (alternativelywheelsordrums,Walzenin German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made fromEboniteorBakelitewith 26brass, spring-loaded,electrical contactpins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent thealphabet— typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in theM4naval variant.
By itself, a rotor performs only a very simple type ofencryption, a simplesubstitution cipher. For example, the pin corresponding to the letterEmight be wired to the contact for letterTon the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.
Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has analphabet tyre(or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as theRingstellung("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of theinitialization vector.
Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.
The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked withRoman numeralsto distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the PolishClock Methodand BritishBanburismusattacks.
The Naval version of theWehrmachtEnigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types,BetaorGamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.
The advancement of a rotor other than the left-hand one was called aturnoverby the British. This was achieved by aratchet and pawlmechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression.[23]For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.
The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were as follows.[24]
The design also included a feature known asdouble-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping.[23]This double-stepping caused the rotors to deviate fromodometer-style regular motion.
With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping).[23]Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
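The 16,900 figure can be checked with a short simulation of the stepping rules described above. The sketch below assumes single-notch right and middle rotors; the notch positions chosen are arbitrary illustrative values, since only the stepping logic affects the period.

```python
# Sketch of Wehrmacht-style stepping with single-notch right and middle rotors.

NOTCH_RIGHT, NOTCH_MIDDLE = 16, 4

def step(left, middle, right):
    at_notch_r = right == NOTCH_RIGHT        # right rotor's notch engages the middle pawl
    at_notch_m = middle == NOTCH_MIDDLE      # middle rotor's notch engages the left pawl
    right = (right + 1) % 26                 # right rotor steps on every key press
    if at_notch_r or at_notch_m:             # middle rotor steps, including the double step
        middle = (middle + 1) % 26
    if at_notch_m:                           # left rotor steps only at the middle rotor's notch
        left = (left + 1) % 26
    return left, middle, right

state, seen = (0, 0, 0), set()
while state not in seen:                     # walk the cycle of visible rotor positions
    seen.add(state)
    state = step(*state)
print(len(seen))                             # 16900 = 26 * 25 * 26, as stated above
```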
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.
A device that was designed, but not implemented before the war's end, was theLückenfüllerwalze(gap-fill wheel) that implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches was different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D it also allowed the internal wiring to be reconfigured.[25]
The current entry wheel (Eintrittswalzein German), or entrystator, connects theplugboardto the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on aQWERTZkeyboard:Q→A,W→B,E→Cand so on. The military Enigma connects them in straight alphabetical order:A→A,B→B,C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.
With the exception of modelsAandB, the last rotor came before a 'reflector' (German:Umkehrwalze, meaning 'reversal rotor'), a patented feature[26]unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would beself-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.
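In other words, the reflector is a fixed-point-free involution on the alphabet: a pairing of the 26 letters in which no letter is paired with itself. The sketch below uses the letter pairing commonly published for reflector B, though any such pairing would illustrate the same two properties.

```python
# The pairing below matches the commonly published wiring of reflector B (UKW-B);
# any fixed-point-free pairing of the 26 letters would show the same properties.

import string

PAIRS = ["AY", "BR", "CU", "DH", "EQ", "FS", "GL",
         "IP", "JX", "KN", "MO", "TZ", "VW"]

reflector = {}
for a, b in PAIRS:
    reflector[a], reflector[b] = b, a

# Self-reciprocal: applying the reflector twice returns the original letter.
assert all(reflector[reflector[c]] == c for c in string.ascii_uppercase)
# No letter ever maps to itself: the flaw later exploited by codebreakers.
assert all(reflector[c] != c for c in string.ascii_uppercase)
```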
In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In theAbwehrEnigma, the reflector stepped during encryption in a manner similar to the other wheels.
In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A',[27]and was replaced byUmkehrwalze Bon 1 November 1937. A third version,Umkehrwalze Cwas used briefly in 1940, possibly by mistake, and was solved byHut 6.[28]The fourth version, first observed on 2 January 1944, had a rewireable reflector, calledUmkehrwalze D, nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.
The plugboard (Steckerbrettin German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928,[29]and was soon adopted by theReichsmarine(German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below).[30]Enigma without a plugboard (known asunsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.
A cable placed onto the plugboard connected letters in pairs; for example,EandQmight be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressedE, the signal was diverted toQbefore entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor orEintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.
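Functionally, the plugboard is a self-inverse substitution applied both before the signal enters the rotors and after it returns from them. A minimal sketch, with arbitrary example pairs rather than a historical key:

```python
# Sketch of the plugboard as a self-inverse letter swap; the pairs are
# arbitrary examples, not a historical key sheet.

import string

def make_plugboard(pairs):
    board = {c: c for c in string.ascii_uppercase}   # unplugged letters pass through unchanged
    for a, b in pairs:
        board[a], board[b] = b, a                    # each cable swaps one pair of letters
    return board

plugboard = make_plugboard(["EQ", "AN", "KT"])       # three steckered pairs (up to ten were typical)
assert plugboard["E"] == "Q" and plugboard["Q"] == "E"
assert plugboard["Z"] == "Z"                         # letters without a plug are unaffected
```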
Other features made various Enigma machines more secure or more convenient.[31]
Some M4 Enigmas used theSchreibmax, a smallprinterthat could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. TheSchreibmaxwas placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decryptedplaintext.
Another accessory was the remote lamp panelFernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with theSchreibmax, that the lamp panel and light bulbs be removed.[22]The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.
In 1944, theLuftwaffeintroduced a plugboard switch, called theUhr(clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise.[22]In one switch position, theUhrdid not swap letters, but simply emulated the 13 stecker wires with plugs.
The Enigma transformation for each letter can be specified mathematically as a product ofpermutations.[9]Assuming a three-rotor German Army/Air Force Enigma, letPdenote the plugboard transformation,Udenote that of the reflector (U=U−1{\displaystyle U=U^{-1}}), andL,M,Rdenote those of the left, middle and right rotors respectively. Then the encryptionEcan be expressed asE=PRMLUL−1M−1R−1P−1.{\displaystyle E=PRMLUL^{-1}M^{-1}R^{-1}P^{-1}.}
After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotorRis rotatednpositions, the transformation becomesE=P(ρnRρ−n)MLUL−1M−1(ρnR−1ρ−n)P−1,{\displaystyle E=P(\rho ^{n}R\rho ^{-n})MLUL^{-1}M^{-1}(\rho ^{n}R^{-1}\rho ^{-n})P^{-1},}
whereρis thecyclic permutationmapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented asjandkrotations ofMandL. The encryption transformation can then be described asE=P(ρnRρ−n)(ρjMρ−j)(ρkLρ−k)U(ρkL−1ρ−k)(ρjM−1ρ−j)(ρnR−1ρ−n)P−1.{\displaystyle E=P(\rho ^{n}R\rho ^{-n})(\rho ^{j}M\rho ^{-j})(\rho ^{k}L\rho ^{-k})U(\rho ^{k}L^{-1}\rho ^{-k})(\rho ^{j}M^{-1}\rho ^{-j})(\rho ^{n}R^{-1}\rho ^{-n})P^{-1}.}
Combining three rotors from a set of five, with each of the three rotors set to one of 26 positions, and with the plugboard connecting ten pairs of letters, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159quintillionor about 67bits).[30]
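The quoted figure can be reproduced from the three factors just listed. The sketch below assumes three rotors chosen and ordered from a set of five, each set to one of 26 positions, and exactly ten plugboard pairs:

```python
# Reproducing the figure quoted above, assuming 3 rotors chosen and ordered
# from 5, each in one of 26 positions, and exactly 10 plugboard pairs.

from math import factorial, perm

rotor_orders    = perm(5, 3)                 # 5 * 4 * 3 = 60 ordered rotor choices
rotor_positions = 26 ** 3                    # 17,576 combined starting positions
plugboard_ways  = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)
                                             # 150,738,274,937,250 ways to wire 10 pairs

print(rotor_orders * rotor_positions * plugboard_ways)   # 158962555217826360000 (~67 bits)
```

The plugboard term dominates the count, which is consistent with the earlier observation that the plugboard contributed more cryptographic strength than an extra rotor.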
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to apseudo-randomsubstitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as thecyphertextletter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a differentsubstitution alphabetbeing used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio inMorse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.
In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termedkeysatBletchley Park, and were assignedcode names, such asRed,Chaffinch, andShark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliarycodebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.
An Enigma machine's setting (itscryptographic keyin modern terms;Schlüsselin German) specified each operator-adjustable aspect of the machine:
For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:
Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3×10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits).[32]Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try abrute-force attack.
Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to aninitialisation vectorin modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as beingindepth), would enable an attack using a statistical procedure such asFriedman'sIndex of coincidence.[33]The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed theindicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.
One of the earliestindicator proceduresfor the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (theGrundstellung), say,AOH. The operator turned his rotors untilAOHwas visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might selectEIN, and that became themessage settingfor that encryption session. The operator then typedEINinto the machine twice, producing the encrypted indicator, for exampleXHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message settings,EINin this example, and then type the plaintext of the message.
At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example,EINEINemerged on the lamps, so the operator would learn themessage settingthat the sender used to encrypt this message. The receiving operator would set his rotors toEIN, type in the rest of the ciphertext, and get the deciphered message.
This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between the first and fourth, second and fifth, and third and sixth characters. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".[34]
During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's sayWZA, and a random message key, perhapsSXT. He moved the rotors to theWZAstart position and encoded the message keySXT. Assume the result wasUHL. He then set up the message key,SXT, as the start position and encrypted the message. Next, he transmitted the start position,WZA, the encoded message key,UHL, and then the ciphertext. The receiver set up the start position according to the first trigram,WZA, and decoded the second trigram,UHL, to obtain theSXTmessage setting. Next, he used thisSXTmessage setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings.[35]
This procedure was used byHeerandLuftwaffeonly. TheKriegsmarineprocedures for sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using theKurzsignalheftcode book. TheKurzsignalheftcontained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained theKenngruppenandSpruchschlüssel: the key identification and message key.[36]
The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a full stop.
Some punctuation marks were different in other parts of the armed forces. TheWehrmachtreplaced a comma with ZZ and the question mark with FRAGE or FRAQ.
TheKriegsmarinereplaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.
TheWehrmachtand theLuftwaffetransmitted messages in groups of five characters and counted the letters.
TheKriegsmarineused four-character groups and counted those groups.
Frequently used names or words were varied as much as possible. Words likeMinensuchboot(minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.[37][38]
The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as
and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in
Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message"[39]to
can be represented as
where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.
The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:
Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.
This model has 4 rotors (lines 1 through 4) and the reflector (line R) also permutes (garbles) letters.
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.
An estimated 40,000 Enigma machines were constructed.[40][41]After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.[42]
On 23 February 1918,[43]Arthur Scherbiusapplied for apatentfor a ciphering machine that usedrotors.[44]Scherbius andE. Richard Ritterfounded the firm of Scherbius & Ritter. They approached theGerman Navyand Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded theChiffriermaschinen Aktien-Gesellschaft(Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.
Chiffriermaschinen AG began advertising a rotor machine,Enigma Handelsmaschine, which was exhibited at the Congress of theInternational Postal Unionin 1924. The machine was heavy and bulky, incorporating atypewriter. It measured 65×45×38 cm and weighed about 50 kilograms (110 lb).
This was also a model with a typewriter. There were a number of problems associated with the printer and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.
The reflector, suggested by Scherbius' colleague Willi Korn,[26]was introduced with the glow lamp version.
The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models. Before the next key press, the operator had to press a button to advance the right rotor one step.
Enigmamodel Bwas introduced late in 1924, and was of a similar construction.[45]While bearing the Enigma name, both modelsAandBwere quite unlike later versions: They differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma orGlühlampenmaschinesince it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective. Hence this machine was 1/8th the price of its predecessor.[21]
Model Cwas the third model of the so-called ″glowlamp Enigmas″ (after A and B) and it again lacked a typewriter.[21]
TheEnigma Cquickly gave way toEnigma D(1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927Hugh Fossat the BritishGovernment Code and Cypher Schoolwas able to show that commercial Enigma machines could be broken, provided suitable cribs were available.[46]Soon, the Enigma D would pioneer the use of a standard keyboard layout to be used in German computing. This "QWERTZ" layout is very similar to the AmericanQWERTYkeyboard format used in many languages.
Other countries used Enigma machines. TheItalian Navyadopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during theirCivil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard.[47]Enigma machines were also used by diplomatic services.
There was also a large, eight-rotor printing model, theEnigma H, calledEnigma IIby theReichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.[48]
The Swiss used a version of Enigma calledModel KorSwiss Kfor military and diplomatic use, which was very similar to commercialEnigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. AnEnigma Tmodel, code-namedTirpitz, was used by Japan.
The various services of theWehrmachtused various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including theGeheimschreiber.
The Reichsmarine was the first military branch to adopt Enigma. This version, namedFunkschlüssel C("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.[49]
The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering.[50]The rotors had 28 contacts, with the letterXwired to bypass the rotors unencrypted.[18]Three rotors were chosen from a set of five[51]and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ.[52]The machine was revised slightly in July 1933.[53]
By 15 July 1928,[54]the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, theEnigma G.
TheAbwehrused theEnigma G. This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or theZählwerkEnigma.
Enigma machine G was modified to theEnigma Iby June 1930.[55]Enigma I is also known as theWehrmacht, or "Services" Enigma, and was used extensively by German military services and other government organisations (such as therailways[56]) before and duringWorld War II.
The major difference betweenEnigma I(German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.
Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28 cm × 34 cm × 15 cm (11.0 in × 13.4 in × 5.9 in) and weighed around 12 kg (26 lb).[57]
In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.[55]
By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications.[58]The Reichsmarine eventually agreed and in 1934[59]brought into service the Navy version of the Army Enigma, designatedFunkschlüssel MorM3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.[60]
In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five.[55]In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.[60]
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, calledM4(the network was known asTriton, orSharkto the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.
The effort to break the Enigma was not disclosed until 1973. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.[61]
TheDeutsches MuseuminMunichhas both the three- and four-rotor German military variants, as well as several civilian versions. TheDeutsches SpionagemuseuminBerlinalso showcases two military variants.[62]Enigma machines are also exhibited at theNational Codes CentreinBletchley Park, theGovernment Communications Headquarters, theScience MuseuminLondon,Discovery Park of Americain Tennessee, thePolish Army Museumin Warsaw, theSwedish Army Museum(Armémuseum) inStockholm, the Military Museum ofA Coruñain Spain, the Nordland Red Cross War Memorial Museum inNarvik,[63]Norway,The Artillery, Engineers and Signals MuseuminHämeenlinna, Finland[64]theTechnical University of Denmarkin Lyngby, Denmark, inSkanderborg Bunkerneat Skanderborg, Denmark, and at theAustralian War Memorialand in the foyer of theAustralian Signals Directorate, both inCanberra, Australia. The Jozef Pilsudski Institute in London exhibited a rarePolish Enigma doubleassembled in France in 1940.[65][66]In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.[67]
In the United States, Enigma machines can be seen at theComputer History MuseuminMountain View, California, and at theNational Security Agency'sNational Cryptologic MuseuminFort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture ofU-505during World War II are on display alongside the submarine at theMuseum of Science and IndustryinChicago, Illinois. A three-rotor Enigma is on display atDiscovery Park of AmericainUnion City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of thePentagonon the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located atThe National WWII Museumin New Orleans.The International Museum of World War IInear Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages.Mimms Museum of Technology and ArtinRoswell, Georgiahas a three-rotor model with two additional rotors. The machine is fully restored, and the museum has the original paperwork for the purchase on 7 March 1936 by the German Army. TheNational Museum of Computingalso contains surviving Enigma machines in Bletchley, England.[68]
In Canada, a Swiss Army issue Enigma-K, is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at theMilitary Communications and Electronics MuseumatCanadian Forces Base (CFB) KingstoninKingston, Ontario.
Occasionally, Enigma machines are sold at auction; prices have in recent years ranged from US$40,000[69][70]to US$547,500[71]in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.
A rareAbwehrEnigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalistJeremy Paxman, missing three rotors.
In November 2000, an antiques dealer named Dennis Yates was arrested after telephoningThe Sunday Timesto arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.[72]
In October 2008, the Spanish daily newspaperEl Paísreported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win theSpanish Civil War, because, though the British cryptologistAlfred Dilwyn Knoxin 1937 broke the cipher generated by Franco's Enigma machines, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums,[73][74]including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at theSpanish Army Museum. Two have been given to Britain's GCHQ.[75]
TheBulgarianmilitary used Enigma machines with aCyrillickeyboard; one is on display in theNational Museum of Military HistoryinSofia.[76]
On 3 December 2020, German divers working on behalf of theWorld Wide Fund for Naturediscovered a destroyed Enigma machine inFlensburg Firth(part of theBaltic Sea) which is believed to be from a scuttled U-boat.[77]This Enigma machine will be restored by and be the property of the Archaeology Museum ofSchleswig Holstein.[78]
An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off theIstriancoast in 1945. The machine was put on display in thePivka Park of Military HistoryinSloveniaon 13 April 2023.[79]
The Enigma was influential in the field of cipher machine design, spinning off otherrotor machines. Once the British discovered Enigma's principle of operation, they created theTypexrotor cipher, which the Germans believed to be unsolvable.[80]Typex was originally derived from the Enigma patents;[81]Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents.[81]In the United States, cryptologistWilliam Friedmandesigned theM-325machine,[82]starting in 1936,[83]that is logically similar.[84]
Machines like theSIGABA,NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.
A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.[85]
|
https://en.wikipedia.org/wiki/Enigma_machine
|
Apseudorandom number generator(PRNG), also known as adeterministic random bit generator(DRBG),[1]is analgorithmfor generating a sequence of numbers whose properties approximate the properties of sequences ofrandom numbers. The PRNG-generated sequence is not trulyrandom, because it is completely determined by an initial value, called the PRNG'sseed(which may include truly random values). Although sequences that are closer to truly random can be generated usinghardware random number generators,pseudorandom number generatorsare important in practice for their speed in number generation and their reproducibility.[2]
PRNGs are central in applications such assimulations(e.g. for theMonte Carlo method),electronic games(e.g. forprocedural generation), andcryptography. Cryptographic applications require the output not to be predictable from earlier outputs, and moreelaborate algorithms, which do not inherit the linearity of simpler PRNGs, are needed.
Good statistical properties are a central requirement for the output of a PRNG. In general, careful mathematical analysis is required to have any confidence that a PRNG generates numbers that are sufficiently close to random to suit the intended use.John von Neumanncautioned about the misinterpretation of a PRNG as a truly random generator, joking that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."[3]
In practice, the output from many common PRNGs exhibitartifactsthat cause them to fail statistical pattern-detection tests. These include:
Defects exhibited by flawed PRNGs range from unnoticeable (and unknown) to very obvious. An example was theRANDUrandom number algorithm used for decades onmainframe computers. It was seriously flawed, but its inadequacy went undetected for a very long time.
In many fields, research work prior to the 21st century that relied on random selection or onMonte Carlosimulations, or in other ways relied on PRNGs, was much less reliable than ideal as a result of using poor-quality PRNGs.[4]Even today, caution is sometimes required, as illustrated by the following warning in theInternational Encyclopedia of Statistical Science(2010).[5]
The list of widely used generators that should be discarded is much longer [than the list of good generators]. Do not trust blindly the software vendors. Check the default RNG of your favorite software and be ready to replace it if needed. This last recommendation has been made over and over again over the past 40 years. Perhaps amazingly, it remains as relevant today as it was 40 years ago.
As an illustration, consider the widely used programming languageJava. Up until 2020, Java still relied on alinear congruential generator(LCG) for its PRNG,[6][7]which is of low quality (see further below). Java's PRNG support was upgraded withJava 17.
One well-known PRNG that avoids major problems and still runs fairly quickly is theMersenne Twister(discussed below), which was published in 1998. Other higher-quality PRNGs, both in terms of computational and statistical performance, were developed before and after this date; these can be identified in theList of pseudorandom number generators.
In the second half of the 20th century, the standard class of algorithms used for PRNGs comprisedlinear congruential generators. The quality of LCGs was known to be inadequate, but better methods were unavailable. Press et al. (2007) described the result thus: "If all scientific papers whose results are in doubt because of [LCGs and related] were to disappear from library shelves, there would be a gap on each shelf about as big as your fist."[8]
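To make the structure of an LCG concrete, here is a minimal sketch in Python. The recurrence and the specific constants (the widely cited "Numerical Recipes" parameters) are assumptions chosen only for illustration, not a recommendation; the low-order bits of such generators are known to be weak.

```python
# Minimal linear congruential generator (LCG) sketch.
# Parameters are the widely cited "Numerical Recipes" constants;
# other choices (e.g. those in RANDU or java.util.Random) behave differently.
class LCG:
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next_u32(self):
        # state_{k+1} = (a * state_k + c) mod 2^32
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

    def random(self):
        # Map to a float in [0, 1); the low-order bits of LCGs are notoriously weak.
        return self.next_u32() / 2**32

gen = LCG(seed=42)
print([round(gen.random(), 6) for _ in range(5)])
```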
A major advance in the construction of pseudorandom generators was the introduction of techniques based on linear recurrences on the two-element field; such generators are related tolinear-feedback shift registers.
The 1997 invention of theMersenne Twister,[9]in particular, avoided many of the problems with earlier generators. The Mersenne Twister has a period of 2^19937 − 1 iterations (≈ 4.3×10^6001), is proven to beequidistributedin (up to) 623 dimensions (for 32-bit values), and at the time of its introduction was running faster than other statistically reasonable generators.
In 2003,George Marsagliaintroduced the family ofxorshiftgenerators,[10]again based on a linear recurrence. Such generators are extremely fast and, combined with a nonlinear operation, they pass strong statistical tests.[11][12][13]
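A hedged sketch of one of Marsaglia's 32-bit xorshift generators follows; the shift triple (13, 17, 5) and the seed value are taken from his paper, while the masking to 32 bits is a Python-specific detail. Raw xorshift output is linear over the two-element field, which is why practical derivatives add a nonlinear output scramble.

```python
# One of Marsaglia's 32-bit xorshift generators (shift triple 13, 17, 5).
# Pure xorshift output is linear over GF(2); practical derivatives such as
# xorshift* or xoshiro add a nonlinear scrambling step to the output.
def xorshift32(state):
    assert state != 0, "xorshift must be seeded with a nonzero value"
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

gen = xorshift32(2463534242)          # seed value used in Marsaglia's paper
print([next(gen) for _ in range(5)])
```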
In 2006, theWELLfamily of generators was developed.[14]The WELL generators in some ways improve on the quality of the Mersenne Twister, which has an overly large state space and recovers very slowly from states containing a large number of zeros.
A counter-based random number generation (CBRNG, also known as a counter-based pseudo-random number generator, or CBPRNG) is a kind of PRNG that uses only an integer counter as its internal state:
output=f(n,key){\displaystyle {\text{ output }}=f(n,{\text{ key }})}
They are generally used for generating pseudorandom numbers for large parallel computations, such as over GPU or CPU clusters.[15]They have certain advantages:
Examples include:[15]
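Well-known counter-based designs include Philox and Threefry. The following toy sketch is not one of those algorithms; it only illustrates the output = f(n, key) structure by using a keyed hash as f, an assumption made purely for brevity.

```python
import hashlib

# Toy counter-based generator: output_n = f(n, key), where f is a keyed hash.
# This illustrates the structure only; production CBRNGs such as Philox or
# Threefry use purpose-built keyed permutations that are much faster.
def cbrng(key: bytes, n: int) -> int:
    digest = hashlib.blake2b(n.to_bytes(16, "little"), key=key, digest_size=8).digest()
    return int.from_bytes(digest, "little")

key = b"example-key"
# Each index can be computed independently, which is what makes the scheme
# convenient for parallel work: worker i simply evaluates cbrng(key, i).
print([cbrng(key, n) for n in range(4)])
```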
A PRNG suitable forcryptographicapplications is called acryptographically-secure PRNG(CSPRNG). A requirement for a CSPRNG is that an adversary not knowing the seed has onlynegligibleadvantagein distinguishing the generator's output sequence from a random sequence. In other words, while a PRNG is only required to pass certain statistical tests, a CSPRNG must pass all statistical tests that are restricted topolynomial timein the size of the seed. Though a proof of this property is beyond the current state of the art ofcomputational complexity theory, strong evidence may be provided byreducingto the CSPRNG from aproblemthat is assumed to behard, such asinteger factorization.[16]In general, years of review may be required before an algorithm can be certified as a CSPRNG.
Some classes of CSPRNGs include the following:
It has been shown to be likely that theNSAhas inserted an asymmetricbackdoorinto theNIST-certified pseudorandom number generatorDual_EC_DRBG.[20]
Most PRNG algorithms produce sequences that areuniformly distributedby any of several tests. It is an open question, and one central to the theory and practice ofcryptography, whether there is any way to distinguish the output of a high-quality PRNG from a truly random sequence. In this setting, the distinguisher knows that either the known PRNG algorithm was used (but not the state with which it was initialized) or a truly random algorithm was used, and has to distinguish between the two.[21]The security of most cryptographic algorithms and protocols using PRNGs is based on the assumption that it is infeasible to distinguish use of a suitable PRNG from use of a truly random sequence. The simplest examples of this dependency arestream ciphers, which (most often) work byexclusive or-ing theplaintextof a message with the output of a PRNG, producingciphertext. The design of cryptographically adequate PRNGs is extremely difficult because they must meet additional criteria. The size of its period is an important factor in the cryptographic suitability of a PRNG, but not the only one.
The GermanFederal Office for Information Security(German:Bundesamt für Sicherheit in der Informationstechnik, BSI) has established four criteria for quality of deterministic random number generators.[22]They are summarized here:
For cryptographic applications, only generators meeting the K3 or K4 standards are acceptable.
Given:
We call a functionf:N1→R{\displaystyle f:\mathbb {N} _{1}\rightarrow \mathbb {R} }(whereN1={1,2,3,…}{\displaystyle \mathbb {N} _{1}=\left\{1,2,3,\dots \right\}}is the set of positive integers) apseudo-random number generator forP{\displaystyle P}givenF{\displaystyle {\mathfrak {F}}}taking values inA{\displaystyle A}if and only if:
(#S{\displaystyle \#S}denotes the number of elements in the finite setS{\displaystyle S}.)
It can be shown that iff{\displaystyle f}is a pseudo-random number generator for the uniform distribution on(0,1){\displaystyle \left(0,1\right)}and ifF{\displaystyle F}is theCDFof some given probability distributionP{\displaystyle P}, thenF∗∘f{\displaystyle F^{*}\circ f}is a pseudo-random number generator forP{\displaystyle P}, whereF∗:(0,1)→R{\displaystyle F^{*}:\left(0,1\right)\rightarrow \mathbb {R} }is the percentile ofP{\displaystyle P}, i.e.F∗(x):=inf{t∈R:x≤F(t)}{\displaystyle F^{*}(x):=\inf \left\{t\in \mathbb {R} :x\leq F(t)\right\}}. Intuitively, an arbitrary distribution can be simulated from a simulation of the standard uniform distribution.
An early computer-based PRNG, suggested byJohn von Neumannin 1946, is known as themiddle-square method. The algorithm is as follows: take any number, square it, remove the middle digits of the resulting number as the "random number", then use that number as the seed for the next iteration. For example, squaring the number "1111" yields "1234321", which can be written as "01234321", an 8-digit number being the square of a 4-digit number. This gives "2343" as the "random" number. Repeating this procedure gives "4896" as the next result, and so on. Von Neumann used 10-digit numbers, but the process was the same.
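The following short Python sketch reproduces the 4-digit variant described above; the seed 1111 and the digit positions are taken directly from the example in the text.

```python
# Von Neumann's middle-square method with 4-digit numbers, as in the text:
# square the current value, zero-pad to 8 digits, and keep the middle 4 digits.
def middle_square(seed, count):
    value = seed
    for _ in range(count):
        squared = str(value * value).zfill(8)   # e.g. 1111*1111 -> "01234321"
        value = int(squared[2:6])               # middle four digits -> 2343
        yield value

print(list(middle_square(1111, 5)))   # 2343, 4896, ... then the sequence degrades
```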
A problem with the "middle square" method is that all sequences eventually repeat themselves, some very quickly, such as "0000". Von Neumann was aware of this, but he found the approach sufficient for his purposes and was worried that mathematical "fixes" would simply hide errors rather than remove them.
Von Neumann judged hardware random number generators unsuitable, for, if they did not record the output generated, they could not later be tested for errors. If they did record their output, they would exhaust the limited computer memories then available, and so limit the computer's ability to read and write numbers. If the numbers were written to cards, they would take very much longer to write and read. On theENIACcomputer he was using, the "middle square" method generated numbers at a rate some hundred times faster than reading numbers in frompunched cards.
The middle-square method has since been supplanted by more elaborate generators.
A recent innovation is to combine the middle square with aWeyl sequence. This method produces high-quality output through a long period (seemiddle-square method).
Numbers selected from a non-uniform probability distribution can be generated using auniform distributionPRNG and a function that relates the two distributions.
First, one needs thecumulative distribution functionF(b){\displaystyle F(b)}of the target distributionf(b){\displaystyle f(b)}:
Note that0=F(−∞)≤F(b)≤F(∞)=1{\displaystyle 0=F(-\infty )\leq F(b)\leq F(\infty )=1}. Using a random numbercfrom a uniform distribution as the probability to "pass by", we getF(b)=c{\displaystyle F(b)=c}, so thatb=F−1(c){\displaystyle b=F^{-1}(c)}is a number randomly selected from distributionf(b){\displaystyle f(b)}. This is based on theinverse transform sampling.
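As a hedged illustration, the sketch below applies inverse transform sampling to an exponential distribution, chosen only because its inverse CDF has a simple closed form; the rate parameter and the use of Python's built-in uniform generator are assumptions made for the example.

```python
import math, random

# Inverse transform sampling sketch: push uniform variates through the inverse
# CDF (percentile function) of the target distribution.  Here the target is an
# exponential distribution with rate lam, whose inverse CDF is
# F^{-1}(c) = -ln(1 - c) / lam.
def exponential_variate(lam, uniform=random.random):
    c = uniform()                 # c ~ Uniform(0, 1) from any uniform PRNG
    return -math.log(1.0 - c) / lam

samples = [exponential_variate(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```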
For example, the inverse of cumulativeGaussian distributionerf−1(x){\displaystyle \operatorname {erf} ^{-1}(x)}with an ideal uniform PRNG with range (0, 1) as inputx{\displaystyle x}would produce a sequence of (positive only) values with a Gaussian distribution; however
Similar considerations apply to generating other non-uniform distributions such asRayleighandPoisson.
|
https://en.wikipedia.org/wiki/Pseudorandom_number_generator
|
TheElGamal signature schemeis adigital signaturescheme which is based on the difficulty of computingdiscrete logarithms. It was described byTaher Elgamalin 1985.[1]
The ElGamal signature algorithm is rarely used in practice. A variant developed at theNSAand known as theDigital Signature Algorithmis much more widely used. There are several other variants.[2]The ElGamal signature scheme must not be confused withElGamal encryptionwhich was also invented by Taher Elgamal.
The ElGamal signature scheme is a digital signature scheme based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem. The algorithm uses akey pairconsisting of apublic keyand aprivate key. The private key is used to generate adigital signaturefor a message, and such a signature can beverifiedby using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message).
The ElGamal signature scheme was described byTaher Elgamalin 1985.[1]It is based on theDiffie–Hellman problem.
The scheme involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification.
Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system, while the second phase computes a single key pair for one user.
The algorithm parameters are(p,g){\displaystyle (p,g)}. These parameters may be shared between users of the system.
Given a set of parameters, the second phase computes the key pair for a single user:
x{\displaystyle x}is the private key andy{\displaystyle y}is the public key.
The signer should send the public keyy{\displaystyle y}to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private keyx{\displaystyle x}secret.
A messagem{\displaystyle m}is signed as follows:
The signature is(r,s){\displaystyle (r,s)}.
One can verify that a signature(r,s){\displaystyle (r,s)}is a valid signature for a messagem{\displaystyle m}as follows:
The algorithm is correct in the sense that a signature generated with the signing algorithm will always be accepted by the verifier.
The computation ofs{\displaystyle s}during signature generation implies
Sinceg{\displaystyle g}is relatively prime top{\displaystyle p},
A third party can forge signatures either by finding the signer's secret keyxor by finding collisions in the hash functionH(m)≡H(M)(modp−1){\displaystyle H(m)\equiv H(M){\pmod {p-1}}}. Both problems are believed to be difficult. However, as of 2011 no tight reduction to acomputational hardness assumptionis known.
The signer must be careful to choose a differentkuniformly at random for each signature and to be certain thatk, or even partial information aboutk, is not leaked. Otherwise, an attacker may be able to deduce the secret keyxwith reduced difficulty, perhaps enough to allow a practical attack. In particular, if two messages are sent using the same value ofkand the same key, then an attacker can computexdirectly.[1]
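A toy sketch of the whole scheme (key generation, signing with a fresh k per message, and verification) is given below. The group parameters p = 23 and g = 5 and the use of SHA-256 reduced mod p − 1 are assumptions chosen only to keep the example readable; real deployments use primes of thousands of bits.

```python
import hashlib, math, random

# Hedged toy sketch of ElGamal signing/verification over a tiny group
# (p = 23, g = 5; 5 generates the multiplicative group mod 23).
p, g = 23, 5

def H(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def keygen():
    x = random.randrange(2, p - 1)        # private key
    return x, pow(g, x, p)                # (private x, public y = g^x mod p)

def sign(x, message):
    while True:
        k = random.randrange(2, p - 1)    # a fresh k is drawn for every signature
        if math.gcd(k, p - 1) != 1:       # k must be invertible mod p-1
            continue
        r = pow(g, k, p)
        s = (H(message) - x * r) * pow(k, -1, p - 1) % (p - 1)
        if s != 0:
            return r, s

def verify(y, message, r, s):
    if not (0 < r < p and 0 < s < p - 1):
        return False
    return pow(g, H(message), p) == (pow(y, r, p) * pow(r, s, p)) % p

x, y = keygen()
r, s = sign(x, b"attack at dawn")
print(verify(y, b"attack at dawn", r, s))   # True
print(verify(y, b"attack at dusk", r, s))   # usually False; with so tiny a modulus
                                            # the toy hash can occasionally collide
```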
The original paper[1]did not include ahash functionas a system parameter. The messagemwas used directly in the algorithm instead ofH(m). This enables an attack calledexistential forgery, as described in section IV of the paper. Pointcheval and Stern generalized that case and described two levels of forgeries:[3]
|
https://en.wikipedia.org/wiki/ElGamal_signature_scheme
|
Incryptography,collision resistanceis a property ofcryptographic hash functions: a hash functionHis collision-resistant if it is hard to find two inputs that hash to the same output; that is, two inputsaandbwherea≠bbutH(a) =H(b).[1]: 136Thepigeonhole principlemeans that any hash function with more inputs than outputs will necessarily have such collisions;[1]: 136the harder they are to find, the more cryptographically secure the hash function is.
The "birthday paradox" places an upper bound on collision resistance: if a hash function producesNbits of output, an attacker who computes only 2N/2(or2N{\displaystyle \scriptstyle {\sqrt {2^{N}}}}) hash operations on random input is likely to find two matching outputs. If there is an easier method to do this thanbrute-force attack, it is typically considered a flaw in the hash function.[2]
Cryptographic hash functionsare usually designed to be collision resistant. However, many hash functions that were once thought to be collision resistant were later broken.MD5andSHA-1in particular both have published techniques more efficient than brute force for finding collisions.[3][4]However, some hash functions have a proof that finding collisions is at least as difficult as some hard mathematical problem (such asinteger factorizationordiscrete logarithm). Those functions are calledprovably secure.[2]
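The birthday bound can be demonstrated directly on a deliberately truncated hash. In the sketch below, SHA-256 is truncated to 32 bits (an assumption made only so the search finishes in seconds); a collision is expected after roughly 2^16 ≈ 65,000 evaluations, whereas nothing of the sort is feasible for the full 256-bit output.

```python
import hashlib
from itertools import count

# Birthday-bound illustration: find a collision on a hash truncated to N bits.
# Roughly 2^(N/2) evaluations are expected; here N = 32.
def truncated_hash(data: bytes, bits: int = 32) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

seen = {}
for i in count():
    msg = f"message-{i}".encode()
    h = truncated_hash(msg)
    if h in seen and seen[h] != msg:
        print(f"collision after {i + 1} hashes: {seen[h]!r} and {msg!r}")
        break
    seen[h] = msg
```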
A family of functions {hk: {0, 1}m(k)→ {0, 1}l(k)} generated by some algorithmGis a family of collision-resistant hash functions, if |m(k)| > |l(k)| for anyk, i.e.,hkcompresses the input string, and everyhkcan be computed withinpolynomial timegivenk, but for any probabilistic polynomial algorithmA, we have
where negl(·) denotes somenegligible function, andnis thesecurity parameter.[5]
There are two different types of collision resistance.
A hash function has weak collision resistance when, given a hashing function H and an x, no other x' can be found such that H(x)=H(x'). In words, when given an x, it is not possible to find another x' such that the hashing function would create a collision.
A hash function has strong collision resistance when, given a hashing function H, no arbitrary x and x' can be found where H(x)=H(x'). In words, no two x's can be found where the hashing function would create a collision.
Collision resistance is desirable for several reasons.
|
https://en.wikipedia.org/wiki/Collision_resistance
|
Incryptography, apreimage attackoncryptographic hash functionstries to find amessagethat has a specific hash value. A cryptographic hash function should resist attacks on itspreimage(set of possible inputs).
In the context of attack, there are two types of preimage resistance:
These can be compared with acollision resistance, in which it is computationally infeasible to find any two distinct inputsx,x′that hash to the same output; i.e., such thath(x) =h(x′).[1]
Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs.[1]Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition tox′,xis already known right from the start).
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through abrute-force attack. For ann-bit hash, this attack has atime complexity2n, which is considered too high for a typical output size ofn= 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result thatquantum computersperform a structured preimage attack in2n=2n2{\displaystyle {\sqrt {2^{n}}}=2^{\frac {n}{2}}}, which also implies second preimage[2]and thus a collision attack.
Faster preimage attacks can be found bycryptanalysingcertain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimage attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical.
All currently known practical or almost-practical attacks[3][4]onMD5andSHA-1arecollision attacks.[5]In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only2n2{\displaystyle 2^{\frac {n}{2}}}.
The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function.
A common example is the use of hashes to storepasswordvalidation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password. When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However most users choose passwords in predictable ways and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks.[6]Special hashes calledkey derivation functionshave been created to slow searches.SeePassword cracking. For a method to prevent the testing of short passwords seesalt (cryptography).
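The point about small or predictable input sets can be illustrated with a toy dictionary search against an unsalted fast hash; the wordlist, the "leaked" value, and the use of SHA-256 are all assumptions made for the example, and real systems should use a salted key derivation function instead.

```python
import hashlib

# Sketch of why unsalted fast hashes are a poor way to store passwords:
# if the input space is small (a wordlist), a first-preimage search is easy.
# Real systems should use a salted KDF such as scrypt or Argon2.
wordlist = ["letmein", "123456", "password", "hunter2", "correcthorsebatterystaple"]

stolen_hash = hashlib.sha256(b"hunter2").hexdigest()   # value found in a leaked database

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == stolen_hash:
        print("recovered password:", candidate)
        break
```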
|
https://en.wikipedia.org/wiki/Second_preimage_attack
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
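A minimal sketch of prime-field arithmetic follows: elements are the residues 0, …, p − 1, the field operations are the integer operations reduced mod p, and the inverse is computed with the extended Euclidean algorithm as described above. The choice p = 13 is only for illustration.

```python
# Arithmetic in the prime field GF(p): elements are 0..p-1, operations are the
# integer operations reduced mod p, and inverses come from the extended
# Euclidean algorithm.
def extended_gcd(a, b):
    # Returns (g, u, v) with g = gcd(a, b) and u*a + v*b = g.
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def inverse_mod(a, p):
    g, u, _ = extended_gcd(a % p, p)
    if g != 1:
        raise ZeroDivisionError("element has no inverse")
    return u % p

p = 13
a, b = 7, 11
print((a + b) % p, (a - b) % p, (a * b) % p)          # field operations in GF(13)
print(inverse_mod(a, p), a * inverse_mod(a, p) % p)   # inverse of 7, check -> 1
```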
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
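As a concrete instance of this reduction step, the sketch below multiplies elements of GF(2^8) represented as bit masks, reducing with X^8 + X^4 + X^3 + X + 1 (the irreducible polynomial used by AES; any other irreducible polynomial of degree 8 would give an isomorphic field).

```python
# Multiplication in GF(2^8) with elements stored as bit masks of polynomials
# over GF(2).  The reduction polynomial X^8 + X^4 + X^3 + X + 1 (0x11B) is the
# one used by AES; any irreducible degree-8 polynomial gives an isomorphic field.
def gf256_mul(a: int, b: int, modulus: int = 0x11B) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a            # addition in characteristic 2 is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:              # degree reached 8: reduce by the modulus
            a ^= modulus
    return result

print(hex(gf256_mul(0x57, 0x83)))  # 0xc1, a standard worked example from the AES spec
```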
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
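A hedged sketch of these formulas follows, with p = 7 and r = p − 1 (since 7 ≡ 3 mod 4, −1 is a quadratic non-residue); elements are stored as coefficient pairs (a, b) representing a + bα.

```python
# Sketch of GF(p^2) arithmetic with elements a + b*alpha, alpha^2 = r, for a
# quadratic non-residue r mod p.  Following the text, p = 7 (p ≡ 3 mod 4) and
# r = p - 1 ≡ -1, so this mimics complex arithmetic with coefficients mod 7.
p, r = 7, 6

def add(u, v):
    (a, b), (c, d) = u, v
    return ((a + c) % p, (b + d) % p)

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def inv(u):
    a, b = u
    n = pow(a * a - r * b * b, -1, p)   # the norm a^2 - r*b^2 is invertible mod p
    return (a * n % p, -b * n % p)

x = (3, 5)                   # the element 3 + 5*alpha in GF(49)
print(mul(x, inv(x)))        # (1, 0), i.e. the multiplicative identity
```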
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}is cyclic of orderq−1{\displaystyle q-1}; that is, there is an elementa{\displaystyle a}such that every non-zero element ofGF(q){\displaystyle \mathrm {GF} (q)}is a power ofa{\displaystyle a}.
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inF{\displaystyle F}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
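The sketch below builds the discrete-logarithm, antilogarithm, and Zech-logarithm tables for the small prime field GF(17) with primitive element 3 (choices made only to keep the tables tiny), and then performs multiplication and addition entirely through the tables.

```python
# Discrete-log and Zech-logarithm tables for GF(17) with primitive element
# g = 3.  With the tables in hand, multiplication is addition of logs mod q-1,
# and addition uses  g^m + g^n = g^n * (g^(m-n) + 1), i.e. log = n + Z(m-n).
q, g = 17, 3

antilog = [pow(g, n, q) for n in range(q - 1)]                    # antilog[n] = g^n
log = {antilog[n]: n for n in range(q - 1)}                       # discrete logarithms
zech = {n: log.get((antilog[n] + 1) % q) for n in range(q - 1)}   # None encodes -infinity

def mul(x, y):
    return antilog[(log[x] + log[y]) % (q - 1)]

def add(x, y):
    m, n = log[x], log[y]
    z = zech[(m - n) % (q - 1)]
    return 0 if z is None else antilog[(n + z) % (q - 1)]

print(mul(5, 7), (5 * 7) % q)    # both 1
print(add(5, 7), (5 + 7) % q)    # both 12
```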
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being26, and the divisors of6being1, 2, 3, 6, the subfields ofGF(64)areGF(2),GF(22) = GF(4),GF(23) = GF(8), andGF(64)itself. As2and3arecoprime, the intersection ofGF(4)andGF(8)inGF(64)is the prime fieldGF(2).
The union ofGF(4)andGF(8)has thus10elements. The remaining54elements ofGF(64)generateGF(64)in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree6overGF(2). This implies that, overGF(2), there are exactly9 =54/6irreduciblemonic polynomialsof degree6. This may be verified by factoringX64−XoverGF(2).
The elements ofGF(64)are primitiven{\displaystyle n}th roots of unity for somen{\displaystyle n}dividing63{\displaystyle 63}. As the 3rd and the 7th roots of unity belong toGF(4)andGF(8), respectively, the54generators are primitiventh roots of unity for somenin{9, 21, 63}.Euler's totient functionshows that there are6primitive9th roots of unity,12{\displaystyle 12}primitive21{\displaystyle 21}st roots of unity, and36{\displaystyle 36}primitive63rd roots of unity. Summing these numbers, one finds again54{\displaystyle 54}elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that:
This shows that the best choice to constructGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X6+X+ 1). In fact, this generator is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
InGF(q){\displaystyle \mathrm {GF} (q)}, the identity(x+y)p=xp+ypimplies that the mapφ:x↦xp{\displaystyle \varphi :x\mapsto x^{p}}is aGF(p){\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismofGF(q){\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfieldGF(p){\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting byφkthecompositionofφwith itselfktimes, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section thatφnis the identity. For0 <k<n, the automorphismφkis not the identity, as, otherwise, the polynomialXpk−X{\displaystyle X^{p^{k}}-X}would have more thanpkroots.
There are no otherGF(p)-automorphisms ofGF(q). In other words,GF(pn)has exactlynGF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means thatGF(pn)is aGalois extensionofGF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, ifq=pnthenXq−Xis the product of all monic irreducible polynomials overGF(p), whose degree dividesn. In fact, ifPis an irreducible factor overGF(p)ofXq−X, its degree dividesn, as itssplitting fieldis contained inGF(pn). Conversely, ifPis an irreducible monic polynomial overGF(p)of degreeddividingn, it defines a field extension of degreed, which is contained inGF(pn), and all roots ofPbelong toGF(pn), and are roots ofXq−X; thusPdividesXq−X. AsXq−Xdoes not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
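A short sketch of the counting formula is given below; evaluating it reproduces, for instance, the nine monic irreducible polynomials of degree 6 over GF(2) mentioned in the GF(64) discussion.

```python
# Count of monic irreducible polynomials of degree n over GF(q), via the
# Moebius-inversion formula N(q, n) = (1/n) * sum_{d | n} mu(d) * q^(n/d).
def mobius(d):
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:        # squared prime factor -> mu = 0
                return 0
            result = -result
        k += 1
    if d > 1:
        result = -result
    return result

def count_irreducible(q, n):
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

print(count_irreducible(2, 6))   # 9, matching the GF(64) discussion above
print(count_irreducible(2, 3))   # 2: X^3 + X + 1 and X^3 + X^2 + 1
```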
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic of2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element ofGF(28). One exception isPDF417bar code, which isGF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic2, generally variations ofcarry-less product.
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of a field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime numberp, letF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}be an algebraic closure ofFp.{\displaystyle \mathbb {F} _{p}.}It is not only uniqueup toan isomorphism, as are all algebraic closures, but, contrary to the general case, all of its subfields are fixed by all of its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristicp.
This property results mainly from the fact that the elements ofFpn{\displaystyle \mathbb {F} _{p^{n}}}are exactly the roots ofxpn−x,{\displaystyle x^{p^{n}}-x,}and this defines an inclusionFpn⊂Fpnm{\displaystyle \mathbb {\mathbb {F} } _{p^{n}}\subset \mathbb {F} _{p^{nm}}}form>1.{\displaystyle m>1.}These inclusions allow writing informallyF¯p=⋃n≥1Fpn.{\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.}The formal validation of this notation results from the fact that the above field inclusions form adirected setof fields; itsdirect limitisF¯p,{\displaystyle {\overline {\mathbb {F} }}_{p},}which may thus be considered as a "directed union".
Given aprimitive elementgmn{\displaystyle g_{mn}}ofFqmn,{\displaystyle \mathbb {F} _{q^{mn}},}thengmnm{\displaystyle g_{mn}^{m}}is a primitive element ofFqn.{\displaystyle \mathbb {F} _{q^{n}}.}
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive elementgn{\displaystyle g_{n}}ofFqn{\displaystyle \mathbb {F} _{q^{n}}}in order that, whenevern=mh,{\displaystyle n=mh,}one hasgm=gnh,{\displaystyle g_{m}=g_{n}^{h},}wheregm{\displaystyle g_{m}}is the primitive element already chosen forFqm.{\displaystyle \mathbb {F} _{q^{m}}.}
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
|
https://en.wikipedia.org/wiki/Finite_field#Construction_of_finite_fields
|
Kerckhoffs's principle(also calledKerckhoffs's desideratum,assumption,axiom,doctrineorlaw) ofcryptographywas stated byDutch-borncryptographerAuguste Kerckhoffsin the 19th century. The principle holds that acryptosystemshould be secure, even if everything about the system, except thekey, is public knowledge. This concept is widely embraced by cryptographers, in contrast tosecurity through obscurity, which is not.
Kerckhoffs's principle was phrased by American mathematicianClaude Shannonas "theenemyknows the system",[1]i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is calledShannon's maxim.
Another formulation by American researcher and professorSteven M. Bellovinis:
In other words—design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.)[2]
The invention oftelegraphyradically changedmilitary communicationsand dramatically increased the number of messages that needed to be protected from the enemy, leading to the development of field ciphers which had to be easy to use without large confidentialcodebooksprone to capture on the battlefield.[3]It was this environment which led to the development of Kerckhoffs's requirements.
Auguste Kerckhoffs was a professor of German language atEcole des Hautes Etudes Commerciales(HEC) in Paris.[4]In early 1883, Kerckhoffs's article,La Cryptographie Militaire,[5]was published in two parts in theJournal of Military Science, in which he stated six design rules for militaryciphers.[6]Translated from French, they are:[7][8]
Some are no longer relevant given the ability of computers to perform complex encryption. The second rule, now known asKerckhoffs's principle, is still critically important.[9]
Kerckhoffs viewed cryptography as a rival to, and a better alternative than,steganographicencoding, which was common in the nineteenth century for hiding the meaning of military messages. One problem with encoding schemes is that they rely on humanly held secrets such as "dictionaries" which disclose, for example, the secret meaning of words. Steganographic-like dictionaries, once revealed, permanently compromise a corresponding encoding system. Another problem is that the risk of exposure increases as the number of users holding the secrets increases.
Nineteenth century cryptography, in contrast, used simple tables which provided for the transposition of alphanumeric characters, generally given row-column intersections which could be modified by keys which were generally short, numeric, and could be committed to human memory. The system was considered "indecipherable" because tables and keys do not convey meaning by themselves. Secret messages can be compromised only if a matching set of table, key, and message falls into enemy hands in a relevant time frame. Kerckhoffs viewed tactical messages as only having a few hours of relevance. Systems are not necessarily compromised, because their components (i.e. alphanumeric character tables and keys) can be easily changed.
Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one, keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal. It only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to manage keeping the keys secret.[10]
There are a large number of ways the internal details of a widely used system could be discovered. The most obvious is that someone could bribe, blackmail, or otherwise threaten staff or customers into explaining the system. In war, for example, one side will probably capture some equipment and people from the other side. Each side will also use spies to gather information.
If a method involves software, someone could domemory dumpsor run the software under the control of a debugger in order to understand the method. If hardware is being used, someone could buy or steal some of the hardware and build whatever programs or gadgets needed to test it. Hardware can also be dismantled so that the chip details can be examined under the microscope.
A generalization some make from Kerckhoffs's principle is: "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security."Bruce Schneierties it in with a belief that all security systems must be designed tofail as gracefullyas possible:
Kerckhoffs's principle applies beyond codes and ciphers to security systems in general: every secret creates a potentialfailure point. Secrecy, in other words, is a prime cause of brittleness—and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.[11]
Any security system depends crucially on keeping some things secret. However, Kerckhoffs's principle points out that the things kept secret ought to be those least costly to change if inadvertently disclosed.[9]
For example, a cryptographic algorithm may be implemented by hardware and software that is widely distributed among users. If security depends on keeping that secret, then disclosure leads to major logistic difficulties in developing, testing, and distributing implementations of a new algorithm – it is "brittle". On the other hand, if keeping the algorithm secret is not important, but only thekeysused with the algorithm must be secret, then disclosure of the keys simply requires the simpler, less costly process of generating and distributing new keys.[12]
In accordance with Kerckhoffs's principle, the majority of civilian cryptography makes use of publicly known algorithms. By contrast, ciphers used to protect classified government or military information are often kept secret (seeType 1 encryption). However, it should not be assumed that government/military ciphers must be kept secret to maintain security. It is possible that they are intended to be as cryptographically sound as public algorithms, and the decision to keep them secret is in keeping with a layered security posture.
It is moderately common for companies, and sometimes even standards bodies as in the case of theCSS encryption on DVDs, to keep the inner workings of a system secret. Some[who?]argue this "security by obscurity" makes the product safer and less vulnerable to attack. A counter-argument is that keeping the innards secret may improve security in the short term, but in the long run, only systems that have been published and analyzed should be trusted.
Steven BellovinandRandy Bushcommented:[13]
Security Through Obscurity Considered Dangerous
Hiding security vulnerabilities in algorithms, software, and/or hardware decreases the likelihood they will be repaired and increases the likelihood that they can and will be exploited. Discouraging or outlawing discussion of weaknesses and vulnerabilities is extremely dangerous and deleterious to the security of computer systems, the network, and its citizens.
Open Discussion Encourages Better Security
The long history of cryptography and cryptoanalysis has shown time and time again that open discussion and analysis of algorithms exposes weaknesses not thought of by the original authors, and thereby leads to better and more secure algorithms. As Kerckhoffs noted about cipher systems in 1883 [Kerc83], "Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi." (Roughly, "the system must not require secrecy and must be able to be stolen by the enemy without causing trouble.")
|
https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
|
For AES-128, the key can be recovered with acomputational complexityof 2^126.1 using thebiclique attack. For biclique attacks on AES-192 and AES-256, the computational complexities of 2^189.7 and 2^254.4 respectively apply.Related-key attackscan break AES-256 and AES-192 with complexities 2^99.5 and 2^176 in both time and data, respectively.[2]
TheAdvanced Encryption Standard(AES), also known by its original nameRijndael(Dutch pronunciation:[ˈrɛindaːl]),[5]is a specification for theencryptionof electronic data established by the U.S.National Institute of Standards and Technology(NIST) in 2001.[6]
AES is a variant of the Rijndaelblock cipher[5]developed by twoBelgiancryptographers,Joan DaemenandVincent Rijmen, who submitted a proposal[7]to NIST during theAES selection process.[8]Rijndael is a family of ciphers with differentkeyandblock sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by theU.S. government. It supersedes theData Encryption Standard(DES),[9]which was published in 1977. The algorithm described by AES is asymmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S.FIPSPUB 197 (FIPS 197) on November 26, 2001.[6]This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable.[note 3]
AES is included in theISO/IEC18033-3standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S.Secretary of CommerceDonald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessiblecipherapproved by the U.S.National Security Agency(NSA) fortop secretinformation when used in an NSA approved cryptographic module.[note 4]
The Advanced Encryption Standard (AES) is defined in each of: FIPS PUB 197, Advanced Encryption Standard (AES); and ISO/IEC 18033-3, Block ciphers.
AES is based on a design principle known as asubstitution–permutation network, and is efficient in both software and hardware.[11]Unlike its predecessor DES, AES does not use aFeistel network. AES is a variant of Rijndael, with a fixedblock sizeof 128bits, and akey sizeof 128, 192, or 256 bits. By contrast, Rijndaelper seis specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particularfinite field.
AES operates on a 4 × 4column-major orderarray of 16 bytesb0,b1,...,b15termed thestate:[note 5]{\displaystyle {\begin{bmatrix}b_{0}&b_{4}&b_{8}&b_{12}\\b_{1}&b_{5}&b_{9}&b_{13}\\b_{2}&b_{6}&b_{10}&b_{14}\\b_{3}&b_{7}&b_{11}&b_{15}\end{bmatrix}}}
The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called theplaintext, into the final output, called theciphertext. The number of rounds is as follows: 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds is applied to transform ciphertext back into the original plaintext using the same encryption key. At a high level, round keys are first derived from the cipher key usingRijndael's key schedule(KeyExpansion); an initialAddRoundKeystep combines the first round key with the state; each of the main rounds (9, 11 or 13 of them) then appliesSubBytes,ShiftRows,MixColumnsandAddRoundKey; and a final round appliesSubBytes,ShiftRowsandAddRoundKey, omittingMixColumns.
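Before the step-by-step description of these operations, here is a minimal usage sketch, assuming the PyCryptodome package is available; the all-zero keys and single-block ECB encryption are for illustration only, not a recommended way to protect data.

```python
# Encrypt one 128-bit block with each of the three AES key lengths.
# Assumes PyCryptodome (pip install pycryptodome); ECB on a lone block is
# used only to show the fixed 16-byte block size and the three key sizes.
from Crypto.Cipher import AES

plaintext = bytes(range(16))                  # exactly one 128-bit block
for key_len in (16, 24, 32):                  # 128-, 192- and 256-bit keys
    key = bytes(key_len)                      # all-zero key, illustration only
    ct = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
    assert AES.new(key, AES.MODE_ECB).decrypt(ct) == plaintext
    print(f"AES-{key_len * 8}: {ct.hex()}")
```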
In theSubBytesstep, each byteai,j{\displaystyle a_{i,j}}in thestatearray is replaced with aSubByteS(ai,j){\displaystyle S(a_{i,j})}using an 8-bitsubstitution box. Before round 0, thestatearray is simply the plaintext/input. This operation provides the non-linearity in thecipher. The S-box used is derived from themultiplicative inverseoverGF(28), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertibleaffine transformation. The S-box is also chosen to avoid any fixed points (and so is aderangement), i.e.,S(ai,j)≠ai,j{\displaystyle S(a_{i,j})\neq a_{i,j}}, and also any opposite fixed points, i.e.,S(ai,j)⊕ai,j≠FF16{\displaystyle S(a_{i,j})\oplus a_{i,j}\neq {\text{FF}}_{16}}.
While performing the decryption, theInvSubBytesstep (the inverse ofSubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse.
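As an illustration of this construction (a hedged sketch, not the FIPS 197 reference code; the helper names are invented), each S-box entry can be derived by taking the multiplicative inverse in GF(2^8) and then applying the affine transformation:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing by the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse in GF(2^8); by convention the 'inverse' of 0 is 0."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def rotl8(x: int, n: int) -> int:
    return ((x << n) | (x >> (8 - n))) & 0xFF

def sbox_entry(a: int) -> int:
    """Inverse in GF(2^8) followed by the invertible affine transformation."""
    b = gf_inv(a)
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63

SBOX = [sbox_entry(a) for a in range(256)]
assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED   # known values from FIPS 197
```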
TheShiftRowsstep operates on the rows of the state; it cyclically shifts the bytes in each row by a certainoffset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively.[note 6]In this way, each column of the output state of theShiftRowsstep is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
In theMixColumnsstep, the four bytes of each column of the state are combined using an invertiblelinear transformation. TheMixColumnsfunction takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together withShiftRows,MixColumnsprovidesdiffusionin the cipher.
During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state):{\displaystyle {\begin{bmatrix}a_{0,j}'\\a_{1,j}'\\a_{2,j}'\\a_{3,j}'\end{bmatrix}}={\begin{bmatrix}2&3&1&1\\1&2&3&1\\1&1&2&3\\3&1&1&2\end{bmatrix}}{\begin{bmatrix}a_{0,j}\\a_{1,j}\\a_{2,j}\\a_{3,j}\end{bmatrix}}}for 0 ≤ j ≤ 3.
Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of polynomial of orderx7{\displaystyle x^{7}}. Addition is simply XOR. Multiplication is modulo irreducible polynomialx8+x4+x3+x+1{\displaystyle x^{8}+x^{4}+x^{3}+x+1}. If processed bit by bit, then, after shifting, a conditionalXORwith 1B16should be performed if the shifted value is larger than FF16(overflow must be corrected by subtraction of generating polynomial). These are special cases of the usual multiplication inGF(28){\displaystyle \operatorname {GF} (2^{8})}.
In more general sense, each column is treated as a polynomial overGF(28){\displaystyle \operatorname {GF} (2^{8})}and is then multiplied modulo0116⋅z4+0116{\displaystyle {01}_{16}\cdot z^{4}+{01}_{16}}with a fixed polynomialc(z)=0316⋅z3+0116⋅z2+0116⋅z+0216{\displaystyle c(z)={03}_{16}\cdot z^{3}+{01}_{16}\cdot z^{2}+{01}_{16}\cdot z+{02}_{16}}. The coefficients are displayed in theirhexadecimalequivalent of the binary representation of bit polynomials fromGF(2)[x]{\displaystyle \operatorname {GF} (2)[x]}. TheMixColumnsstep can also be viewed as a multiplication by the shown particularMDS matrixin thefinite fieldGF(28){\displaystyle \operatorname {GF} (2^{8})}. This process is described further in the articleRijndael MixColumns.
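A minimal sketch of this arithmetic (the function names are invented): multiplication by 02 is the shift with a conditional XOR of 1B described above, 03 is 02 plus 01, and one column of the state is combined with the fixed matrix; the final assert uses a commonly cited MixColumns test vector.

```python
def xtime(a: int) -> int:
    """Multiply a byte by x (i.e. by 0x02) in GF(2^8), reducing by 0x1B on overflow."""
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def mul(a: int, factor: int) -> int:
    """Multiply a byte by 0x01, 0x02 or 0x03 -- the only factors MixColumns needs."""
    return {1: a, 2: xtime(a), 3: xtime(a) ^ a}[factor]

MIX_MATRIX = [
    [2, 3, 1, 1],
    [1, 2, 3, 1],
    [1, 1, 2, 3],
    [3, 1, 1, 2],
]

def mix_column(col):
    """Apply MixColumns to one 4-byte column of the state."""
    return [mul(col[0], row[0]) ^ mul(col[1], row[1]) ^
            mul(col[2], row[2]) ^ mul(col[3], row[3]) for row in MIX_MATRIX]

assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]
```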
In theAddRoundKeystep, the subkey is combined with the state. For each round, a subkey is derived from the mainkeyusingRijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwiseXOR.
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining theSubBytesandShiftRowssteps with theMixColumnsstep by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in theAddRoundKeystep.[12]Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations.
Using a byte-oriented approach, it is possible to combine theSubBytes,ShiftRows, andMixColumnssteps into a single round operation.[13]
TheNational Security Agency(NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protectclassified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.[14]
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
For cryptographers, acryptographic"break" is anything faster than abrute-force attack—i.e., performing one trial decryption for each possible key in sequence(seeCryptanalysis § Computational resources required). A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bitRC5key bydistributed.netin 2006.[15]
The key space increases by a factor of 2 for each additional bit of key length, and, if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable.
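As a rough, back-of-the-envelope illustration of this exponential growth (the trial rate below is an arbitrary assumption, not a figure from the text):

```python
# Average exhaustive-search time for several key lengths, assuming a
# hypothetical rate of 10^12 key trials per second.
RATE = 1e12                         # keys tested per second (assumed)
SECONDS_PER_YEAR = 3.156e7

for bits in (56, 64, 128, 256):
    avg_trials = 2 ** (bits - 1)    # on average half the key space is searched
    years = avg_trials / RATE / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: about {years:.3g} years on average")
```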
AES has a fairly simple algebraic framework.[16]In 2002, a theoretical attack, named the "XSL attack", was announced byNicolas CourtoisandJosef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components.[17]Since then, other papers have shown that the attack, as originally presented, is unworkable; seeXSL attack on block ciphers.
During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications."[18]In October 2000, however, at the end of the AES selection process,Bruce Schneier, a developer of the competing algorithmTwofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic."[19]
By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.[20]
Until May 2009, the only successful published attacks against the full AES wereside-channel attackson some specific implementations. In 2009, a newrelated-key attackwas discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5.[2]This is a follow-up to an attack discovered earlier in 2009 byAlex Biryukov,Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys.[21]However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially byconstrainingan attacker's means of selecting keys for relatedness.
Another attack was blogged by Bruce Schneier[3]on July 30, 2009, and released as apreprint[22]on August 3, 2009. This new attack, by Alex Biryukov,Orr Dunkelman,Nathan Keller, Dmitry Khovratovich, andAdi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
The practicality of these attacks with stronger related keys has been criticized,[23]for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010.[24]
In November 2009, the firstknown-key distinguishing attackagainst a reduced 8-round version of AES-128 was released as a preprint.[25]This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle attack, against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128.
The firstkey-recovery attackson full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011.[26]The attack is abiclique attackand is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192, and 2^254.3 for AES-256 by Biaoshuai Tao and Hongjun Wu in a 2015 paper,[27]which are the current best results in key recovery attack against AES.
This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016.[28]A paper in 2015 later improved the space complexity to 2^56 bits,[27]which is 9007 terabytes (while still keeping a time complexity of approximately 2^126).
According to theSnowden documents, the NSA is doing research on whether a cryptographic attack based ontau statisticmay help to break AES.[29]
At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented.[citation needed]
Side-channel attacksdo not attack the cipher as ablack box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES.
In April 2005,D. J. Bernsteinannounced a cache-timing attack that he used to break a custom server that usedOpenSSL's AES encryption.[30]The attack required over 200 million chosen plaintexts.[31]The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples."[30]
In October 2005, Dag Arne Osvik,Adi ShamirandEran Tromerpresented a paper demonstrating several cache-timing attacks against the AES implementations found in OpenSSL and in Linux'sdm-cryptpartition encryption function.[32]One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that useddifferential fault analysisand allows recovery of a key with a complexity of 2^32.[33]
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL.[34]Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.[35]
In March 2016, C. Ashokkumar, Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions.[36]The proposed attack requires standard user privileges, and the key-retrieval algorithms run in under a minute.
Many modern CPUs have built-inhardware instructions for AES, which protect against timing-related side-channel attacks.[37][38]
AES-256 is considered to bequantum resistant, as its resistance against quantum attacks is comparable to AES-128's resistance against traditional, non-quantum attacks, at 128bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes: AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure.[39][40]
TheCryptographic Module Validation Program(CMVP) is operated jointly by the United States Government'sNational Institute of Standards and Technology(NIST) Computer Security Division and theCommunications Security Establishment(CSE) of the Government of Canada. The use of cryptographic modules validated to NISTFIPS 140-2is required by the United States Government for encryption of all data that has a classification ofSensitive but Unclassified(SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2."[41]
The Government of Canada also recommends the use ofFIPS 140validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such asTriple DESorSHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP)[42]allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page.[43]This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.[41]
FIPS 140-2 validation is challenging to achieve both technically and fiscally.[44]There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US)[44]and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors are a set of known ciphertexts for a given input and key. NIST distributes the reference AES test vectors as AES Known Answer Test (KAT) Vectors.[note 7]
High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bitsmart cardsto high-performance computers.
On aPentium Pro, AES encryption requires 18 clock cycles per byte (cpb),[45]equivalent to a throughput of about 11 MiB/s for a 200 MHz processor.
OnIntel CoreandAMD RyzenCPUs supportingAES-NI instruction setextensions, throughput can be multiple GiB/s.[46]On an IntelWestmereCPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256.[47]
|
https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
Themeet-in-the-middle attack(MITM), a known-plaintext attack,[1]is a genericspace–time tradeoffcryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason whyDouble DESis not used and why aTriple DESkey (168-bit) can be brute-forced[clarification needed]by an attacker with 2^56 space and 2^112 operations.[2]
When trying to improve the security of a block cipher, a tempting idea is to encrypt the data several times using multiple keys. One might think this doubles or evenn-tuples the security of the multiple-encryption scheme, depending on the number of times the data is encrypted, because an exhaustive search on all possible combinations of keys (simple brute force) would take 2^(n·k) attempts if the data is encrypted withk-bit keysntimes.
The MITM attack is a generic attack which weakens the security benefits of using multiple encryptions by storing intermediate values from the encryptions or decryptions and using those to improve the time required to brute force[clarification needed]the decryption keys. This makes a Meet-in-the-Middle attack (MITM) a generic space–time tradeoffcryptographic[3]attack.
The MITM attack attempts to find the keys by using both the range (ciphertext) and domain (plaintext) of the composition of several functions (or block ciphers) such that the forward mapping through the first functions is the same as the backward mapping (inverse image) through the last functions, quite literallymeetingin the middle of the composed function. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with 2^57 encryption and decryption operations.
The multidimensional MITM (MD-MITM) uses a combination of several simultaneous MITM attacks as described above, where the meeting happens in multiple positions in the composed function.
DiffieandHellmanfirst proposed the meet-in-the-middle attack on a hypothetical expansion of ablock cipherin 1977.[4]Their attack used aspace–time tradeoffto break the double-encryption scheme in only twice the time needed to break the single-encryption scheme.
In 2011, Bo Zhu andGuang Gonginvestigated themultidimensional meet-in-the-middle attackand presented new attacks on the block ciphersGOST,KTANTANandHummingbird-2.[5]
Assume someone wants to attack an encryption scheme with the following characteristics for a given plaintextPand ciphertextC: C = ENCk2(ENCk1(P)) and, equivalently, P = DECk1(DECk2(C)), whereENCis the encryption function,DECthe decryption function defined asENC^−1 (inverse mapping) andk1andk2are two keys.
The naive approach at brute-forcing this encryption scheme is to decrypt the ciphertext with every possiblek2, and decrypt each of the intermediate outputs with every possiblek1, for a total of 2^|k1| × 2^|k2| (or 2^(|k1|+|k2|)) operations.
The meet-in-the-middle attack uses a more efficient approach. By decryptingCwithk2, one obtains the following equivalence: ENCk1(P) = DECk2(C).
The attacker can computeENCk1(P) for all values ofk1andDECk2(C) for all possible values ofk2, for a total of 2^|k1| + 2^|k2| (or 2^(|k1|+1), ifk1andk2have the same size) operations. If the result from any of theENCk1(P) operations matches a result from theDECk2(C) operations, the pair ofk1andk2is possibly the correct key. This potentially-correct key is called acandidate key. The attacker can determine which candidate key is correct by testing it with a second test-set of plaintext and ciphertext.
The MITM attack is one of the reasons whyData Encryption Standard(DES) was replaced withTriple DESand not Double DES. An attacker can use a MITM attack to brute-force Double DES with 2^57 operations and 2^56 space, making it only a small improvement over DES.[5]Triple DES uses a "triple length" (168-bit) key and is also vulnerable to a meet-in-the-middle attack in 2^56 space and 2^112 operations, but is considered secure due to the size of its keyspace.[2][6]
Compute the following: ENCkf1(P) for every possible value of the forward keykf1, saving each result together with the correspondingkf1, and DECkb1(C) for every possible value of the backward keykb1, comparing each result against the saved forward values.
When a match is found, keepkf1,kb1{\displaystyle k_{f_{1}},k_{b_{1}}}as candidate key-pair in a tableT. Test pairs inTon a new pair of(P,C){\displaystyle (P,C)}to confirm validity. If the key-pair does not work on this new pair, do MITM again on a new pair of(P,C){\displaystyle (P,C)}.
If the keysize isk, this attack uses only 2^(k+1) encryptions (and decryptions) andO(2^k) memory to store the results of the forward computations in alookup table, in contrast to the naive attack, which needs 2^(2·k) encryptions butO(1) space.
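As a concrete illustration of the procedure above, here is a small sketch of the attack against double encryption with a deliberately weak, invented toy cipher (16-bit blocks, 8-bit keys); it is not an attack on any real cipher.

```python
def toy_enc(p: int, k: int) -> int:
    """A deliberately weak toy cipher: 16-bit block, 8-bit key (illustration only)."""
    return ((p ^ (k * 0x0101)) + k) & 0xFFFF

def toy_dec(c: int, k: int) -> int:
    return ((c - k) & 0xFFFF) ^ (k * 0x0101)

# Double encryption under two independent 8-bit keys.
k1, k2 = 0x3A, 0xC5
known = [(p, toy_enc(toy_enc(p, k1), k2)) for p in (0x0000, 0x1234)]

# Meet in the middle: 2^8 forward encryptions plus 2^8 backward decryptions,
# instead of the 2^16 trials a naive search over all (k1, k2) pairs would need.
P, C = known[0]
forward = {}
for a in range(256):                      # ENC_a(P) -> every key a producing it
    forward.setdefault(toy_enc(P, a), []).append(a)

candidates = [(a, b) for b in range(256)  # DEC_b(C) matched against the table
              for a in forward.get(toy_dec(C, b), [])]

# Verify candidate key pairs on a second plaintext/ciphertext pair.
P2, C2 = known[1]
recovered = [(a, b) for a, b in candidates if toy_enc(toy_enc(P2, a), b) == C2]
print(recovered)                          # the true pair (0x3A, 0xC5) survives
```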
While 1D-MITM can be efficient, a more sophisticated attack has been developed:multidimensional meet-in-the-middle attack, also abbreviatedMD-MITM.
This is preferred when the data has been encrypted using more than 2 encryptions with different keys.
Instead of meeting in the middle (one place in the sequence), the MD-MITM attack attempts to reach several specific intermediate states using the forward and backward computations at several positions in the cipher.[5]
Assume that the attack has to be mounted on a block cipher, where the encryption and decryption are defined as before: C = ENCkn(ENCkn−1(⋯ENCk1(P)⋯)); that is, a plaintextPis encrypted multiple times using a repetition of the same block cipher.
The MD-MITM has been used for cryptanalysis of, among many, theGOST block cipher, where it has been shown that a 3D-MITM has significantly reduced the time complexity for an attack on it.[5]
Compute the following: ENCkf1(P) for every possible value of the first subkeykf1, saving each result together with the correspondingkf1in a tableT1, and DECkbn+1(C) for every possible value of the last subkeykbn+1, saving each result together with the correspondingkbn+1in a tableTn+1.
For each possible guess on the intermediate states1{\displaystyle s_{1}}compute the following: DECkb1(s1) for everykb1, comparing each result against the tableT1; when a match is found, compute ENCkf2(s1) for everykf2and save the surviving sub-key combinations, together with the guesseds1, in a tableT2. Repeat this procedure for each subsequent intermediate states2, ...,sn, finally comparing the forward computations fromsnagainst the tableTn+1.
Use the found combination of sub-keys(kf1,kb1,kf2,kb2,...,kfn+1,kbn+1){\displaystyle (k_{f_{1}},k_{b_{1}},k_{f_{2}},k_{b_{2}},...,k_{f_{n+1}},k_{b_{n+1}})}on another pair of plaintext/ciphertext to verify the correctness of the key.
Note the nested element in the algorithm: the guess on every possible value ofsjis done for each guess on the previoussj-1. This makes up an element of exponential complexity in the overall time complexity of this MD-MITM attack.
Time complexity of this attack without brute force, is2|kf1|+2|kbn+1|+2|s1|{\displaystyle 2^{|k_{f_{1}}|}+2^{|k_{b_{n+1}}|}+2^{|s_{1}|}}⋅(2|kb1|+2|kf2|+2|s2|{\displaystyle (2^{|k_{b_{1}}|}+2^{|k_{f_{2}}|}+2^{|s_{2}|}}⋅(2|kb2|+2|kf3|+⋯)){\displaystyle (2^{|k_{b_{2}}|}+2^{|k_{f_{3}}|}+\cdots ))}
Regarding the memory complexity, it is easy to see thatT2,T3,...,Tn{\displaystyle T_{2},T_{3},...,T_{n}}are much smaller than the first built table of candidate valuesT1{\displaystyle T_{1}}: asiincreases, the candidate values contained inTi{\displaystyle T_{i}}must satisfy more conditions, and thereby fewer candidates will pass on to the end destinationTn{\displaystyle T_{n}}.
An upper bound of the memory complexity of MD-MITM is then
wherekdenotes the length of the whole key (combined).
The data complexity depends on the probability that a wrong key may pass (obtain a false positive), which is1/2|l|{\displaystyle 1/2^{|l|}}, wherelis the intermediate state in the first MITM phase. The size of the intermediate state is often the same as the block size.
Considering also how many keys are left for testing after the first MITM phase, it is2|k|/2|l|{\displaystyle 2^{|k|}/2^{|l|}}.
Therefore, if the intermediate state has the same size as the block, there are 2^(|k|−b) candidate keys after the first MITM phase, and after testing them on one further pair there remain2|k|−b⋅2−b=2|k|−2b{\displaystyle 2^{|k|-b}\cdot 2^{-b}=2^{|k|-2b}}, where|b|{\displaystyle |b|}is the block size.
For each time the final candidate value of the keys are tested on a new plaintext/ciphertext-pair, the number of keys that will pass will be multiplied by the probability that a key may pass which is1/2|b|{\displaystyle 1/2^{|b|}}.
The brute-force testing part (testing the candidate keys on new(P,C){\displaystyle (P,C)}-pairs) has time complexity2|k|−b+2|k|−2b+2|k|−3b+2|k|−4b⋯{\displaystyle 2^{|k|-b}+2^{|k|-2b}+2^{|k|-3b}+2^{|k|-4b}\cdots }; clearly, for increasing multiples ofbin the exponent, these terms tend toward zero.
By similar reasoning, the data complexity is restricted to around⌈|k|/n⌉{\displaystyle \lceil |k|/n\rceil }(P,C){\displaystyle (P,C)}-pairs.
Below is a more specific description of how 2D-MITM is mounted on a block cipher encryption. In two-dimensional MITM (2D-MITM) the method is to reach 2 intermediate states inside the multiple encryption of the plaintext.
Compute the following: ENCkf1(P) for every possible value ofkf1, saving each result together with the correspondingkf1in a setA, and DECkb2(C) for every possible value ofkb2, saving each result together with the correspondingkb2in a setB.
For each possible guess on an intermediate statesbetweenSubCipher1{\displaystyle {\mathit {SubCipher}}_{1}}andSubCipher2{\displaystyle {\mathit {SubCipher}}_{2}}compute the following: DECkb1(s) for everykb1, comparing each result against the setA, and ENCkf2(s) for everykf2, comparing each result against the setB; save every combination of sub-keys that produces a match in both sets in a tableT.
Use the found combination of sub-keys(kf1,kb1,kf2,kb2){\displaystyle (k_{f_{1}},k_{b_{1}},k_{f_{2}},k_{b_{2}})}on another pair of plaintext/ciphertext to verify the correctness of the key.
The time complexity of this attack, without the final brute-force verification, is approximately 2^|kf1| + 2^|kb2| + 2^|s| · (2^|kb1| + 2^|kf2|),
where |⋅| denotes the length.
Main memory consumption is restricted by the construction of the setsAandBwhereTis much smaller than the others.
For data complexity seesubsection on complexity for MD-MITM.
|
https://en.wikipedia.org/wiki/Meet-in-the-middle_attack
|
TheData Encryption Standard(DES/ˌdiːˌiːˈɛs,dɛz/) is asymmetric-key algorithmfor theencryptionof digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement ofcryptography.
Developed in the early 1970s atIBMand based on an earlier design byHorst Feistel, the algorithm was submitted to theNational Bureau of Standards(NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with theNational Security Agency(NSA), the NBS selected a slightly modified version (strengthened againstdifferential cryptanalysis, but weakened againstbrute-force attacks), which was published as an officialFederal Information Processing Standard(FIPS) for the United States in 1977.[2]
The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose fromclassifieddesign elements, a relatively shortkey lengthof thesymmetric-keyblock cipherdesign, and the involvement of the NSA, raising suspicions about abackdoor. TheS-boxesthat had prompted those suspicions were designed by the NSA to address a vulnerability they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack.[2][failed verification]The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and theircryptanalysis.
DES is insecure due to the relatively short56-bit key size. In January 1999,distributed.netand theElectronic Frontier Foundationcollaborated to publicly break a DES key in 22 hours and 15 minutes (see§ Chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice[citation needed]. DES has been withdrawn as a standard by theNIST.[3]Later, the variantTriple DESwas developed to increase the security level, but it is considered insecure today as well. DES has been superseded by theAdvanced Encryption Standard(AES).
Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as theDEA(Data Encryption Algorithm).
The origins of DES date to 1972, when aNational Bureau of Standardsstudy of US governmentcomputer securityidentified a need for a government-wide standard for encrypting unclassified, sensitive information.[4]
Around the same time, engineerMohamed Atallain 1972 foundedAtalla Corporationand developed the firsthardware security module(HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a securePINgenerating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard.[3]Atalla was an early competitor toIBMin the banking market, and was cited as an influence by IBM employees who worked on the DES standard.[5]TheIBM 3624later adopted a similar PIN verification system to the earlier Atalla system.[6]
On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. This time,IBMsubmitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm,Horst Feistel'sLucifercipher. The team at IBM involved in cipher design and analysis included Feistel,Walter Tuchman,Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas,Roy Adler,Edna Grossman, Bill Notz, Lynn Smith, andBryant Tuckerman.
On 17 March 1975, the proposed DES was published in theFederal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was criticism received frompublic-key cryptographypioneersMartin HellmanandWhitfield Diffie,[1]citing a shortenedkey lengthand the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages.[7]Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."[8]TheUnited States Senate Select Committee on Intelligencereviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote:
In the development of DES, NSA convincedIBMthat a reduced key size was sufficient; indirectly assisted in the development of the S-box structures; and certified that the final DES algorithm was, to the best of their knowledge, free from any statistical or mathematical weakness.[9]
However, it also found that
NSA did not tamper with the design of the algorithm in any way. IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed upon key size was more than adequate for all commercial applications for which the DES was intended.[10]
Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"[11]In contrast, a declassified NSA book on cryptologic history states:
In 1973 NBS solicited private industry for a data encryption standard (DES). The first offerings were disappointing, so NSA began working on its own algorithm. Then Howard Rosenblum, deputy director for research and engineering, discovered that Walter Tuchman of IBM was working on a modification to Lucifer for general use. NSA gave Tuchman a clearance and brought him in to work jointly with the Agency on his Lucifer modification."[12]
and
NSA worked closely with IBM to strengthen the algorithm against all except brute-force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately they compromised on a 56-bit key.[13][14]
Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication byEli BihamandAdi Shamirofdifferential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes.[15]According toSteven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret.[16]Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it".[16]Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."[17]
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 asFIPSPUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), followinga public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, butNISThas approvedTriple DESthrough the year 2030 for sensitive government information.[18]
The algorithm is also specified inANSIX3.92 (Today X3 is known asINCITSand ANSI X3.92 as ANSIINCITS92),[19]NIST SP 800-67[18]and ISO/IEC 18033-3[20](as a component ofTDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was theElectronic Frontier Foundation'sDES crackerin 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods ofcryptanalysisare discussed in more detail later in this article.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers; a NIST retrospective about DES credits it with jump-starting the nonmilitary study and development of encryption algorithms.
DES is the archetypalblock cipher—analgorithmthat takes a fixed-length string ofplaintextbits and transforms it through a series of complicated operations into anotherciphertextbitstring of the same length. In the case of DES, theblock sizeis 64 bits. DES also uses akeyto customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checkingparity, and are thereafter discarded. Hence the effectivekey lengthis 56 bits.
The key is nominally stored or transmitted as 8bytes, each with odd parity. According to ANSI X3.92-1981 (Now, known as ANSIINCITS92–1981), section 3.5:
One bit in each 8-bit byte of theKEYmay be utilized for error detection in key generation, distribution, and storage. Bits 8, 16,..., 64 are for use in ensuring that each byte is of odd parity.
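A small sketch of how this parity convention can be checked and enforced in software (the function names are invented for the example; the least significant bit of each byte is treated as the parity bit):

```python
def has_odd_parity(key: bytes) -> bool:
    """True if every byte of the 8-byte DES key has an odd number of 1 bits."""
    return all(bin(b).count("1") % 2 == 1 for b in key)

def set_odd_parity(key: bytes) -> bytes:
    """Force the least significant bit of each byte so that its parity is odd,
    leaving the 56 actual key bits untouched."""
    fixed = bytearray()
    for b in key:
        key_bits = b & 0xFE                      # the 7 bits that matter
        parity_bit = 0 if bin(key_bits).count("1") % 2 else 1
        fixed.append(key_bits | parity_bit)
    return bytes(fixed)

key = bytes.fromhex("0123456789ABCDEE")
print(has_odd_parity(set_odd_parity(key)))       # True
```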
Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in amode of operation. FIPS-81 specifies several modes for use with DES.[27]Further comments on the usage of DES are contained in FIPS-74.[28]
Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)
The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termedrounds. There is also an initial and finalpermutation, termedIPandFP, which areinverses(IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware.[29]
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as theFeistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms.
The ⊕ symbol denotes theexclusive-OR(XOR) operation. TheF-functionscrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.
The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages:
1. Expansion: the 32-bit half-block is expanded to 48 bits using the expansion permutation (E), duplicating half of the bits.
2. Key mixing: the result is combined with a 48-bit subkey using an XOR operation; sixteen subkeys, one for each round, are derived from the main key using the key schedule (described below).
3. Substitution: after mixing in the subkey, the block is divided into eight 6-bit pieces before processing by the S-boxes; each of the eight S-boxes replaces its six input bits with four output bits according to a non-linear transformation, provided in the form of a lookup table.
4. Permutation: finally, the 32 outputs from the S-boxes are rearranged according to a fixed permutation, the P-box.
The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified byClaude Shannonin the 1940s as a necessary condition for a secure yet practical cipher.
Figure 3 illustrates thekey schedulefor encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 byPermuted Choice 1(PC-1)—the remaining eight bits are either discarded or used asparitycheck bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected byPermuted Choice 2(PC-2)—24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.
The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes.
Pseudocodefor the DES algorithm follows.
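The full pseudocode, with its permutation tables, S-boxes and key schedule, is not reproduced here. As a stand-in, the following is a minimal, runnable sketch of a generic 16-round Feistel cipher with the same high-level shape as DES: the initial and final permutations are omitted, and an invented round function and key schedule take the place of the real E/S-box/P tables and PC-1/PC-2. It illustrates only the structure (including decryption by reversing the subkey order) and is not DES itself.

```python
import hashlib

def toy_f(half: int, subkey: int) -> int:
    """Toy round function standing in for the DES F-function (not DES)."""
    data = (half ^ subkey).to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def toy_key_schedule(key: int, rounds: int = 16):
    """Toy key schedule: one 32-bit subkey per round (not PC-1/PC-2)."""
    return [int.from_bytes(hashlib.sha256(key.to_bytes(8, "big") + bytes([r])).digest()[:4], "big")
            for r in range(rounds)]

def feistel_encrypt(block: int, subkeys) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:                       # 16 rounds: L, R = R, L xor F(R, k)
        left, right = right, left ^ toy_f(right, k)
    return (right << 32) | left             # halves swapped after the last round

def feistel_decrypt(block: int, subkeys) -> int:
    return feistel_encrypt(block, list(reversed(subkeys)))  # same code, reversed keys

key = 0x0123456789ABCDEF
subkeys = toy_key_schedule(key)
pt = 0x0011223344556677
ct = feistel_encrypt(pt, subkeys)
assert feistel_decrypt(ct, subkeys) == pt
```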
Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number ofknownorchosen plaintextsto carry out, and are not a concern in practice.
For anycipher, the most basic method of attack isbrute force—trying every possible key in turn. Thelength of the keydetermines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacementalgorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip.[30]
In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day.[1][31]By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s.[32]In 1997,RSA Securitysponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by theDESCHALL Project, led by Rocke Verser,Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by theElectronic Frontier Foundation(EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (seeEFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching.
The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of theUniversities of BochumandKiel, both inGermany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of thesefield-programmable gate arrays(FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well.[33]One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000.[34]The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement ofdigital hardware—seeMoore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007,SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs.[35]Their 256 Spartan-6 LX150 model has further lowered this time.
In 2012, David Hulton andMoxie Marlinspikeannounced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours and this service is offered for a fee online.[36][37]However, the service has been offline since the year 2024, supposedly for maintenance but probably permanently switched off.[38]
There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search:differential cryptanalysis(DC),[39]linear cryptanalysis(LC),[40]andDavies' attack.[41]However, the attacks are theoretical and are generally considered infeasible to mount in practice;[42]these types of attack are sometimes termed certificational weaknesses.
There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains.
Differential-linear cryptanalysiswas proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack.[47]An enhanced version of the attack can break 9-round DES with 2^15.8 chosen plaintexts and has a 2^29.2 time complexity (Biham and others, 2002).[48]
DES exhibits the complementation property, namely that ifC=EK(P){\displaystyle C=E_{K}(P)}, thenC¯=EK¯(P¯){\displaystyle {\overline {C}}=E_{\overline {K}}({\overline {P}})},
wherex¯{\displaystyle {\overline {x}}}is thebitwise complementofx.{\displaystyle x.}EK{\displaystyle E_{K}}denotes encryption with keyK.{\displaystyle K.}P{\displaystyle P}andC{\displaystyle C}denote plaintext and ciphertext blocks respectively. The complementation property means that the work for abrute-force attackcould be reduced by a factor of 2 (or a single bit) under achosen-plaintextassumption. By definition, this property also applies to TDES cipher.[49]
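A quick way to observe the property, assuming the PyCryptodome package is available (single-block ECB encryption is used purely for illustration; the key and plaintext values are arbitrary):

```python
# Check the DES complementation property with PyCryptodome (pip install pycryptodome).
from Crypto.Cipher import DES

def complement(b: bytes) -> bytes:
    return bytes(x ^ 0xFF for x in b)

key = bytes.fromhex("133457799BBCDFF1")
pt  = bytes.fromhex("0123456789ABCDEF")

ct = DES.new(key, DES.MODE_ECB).encrypt(pt)
ct_comp = DES.new(complement(key), DES.MODE_ECB).encrypt(complement(pt))

assert ct_comp == complement(ct)   # E_{~K}(~P) == ~E_K(P)
```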
DES also has four so-calledweak keys. Encryption (E) and decryption (D) under a weak key have the same effect (seeinvolution):EK(EK(P))=P{\displaystyle E_{K}(E_{K}(P))=P}or, equivalently,EK=DK{\displaystyle E_{K}=D_{K}}.
There are also six pairs ofsemi-weak keys. Encryption with one of the pair of semiweak keys,K1{\displaystyle K_{1}}, operates identically to decryption with the other,K2{\displaystyle K_{2}}:EK1=DK2{\displaystyle E_{K_{1}}=D_{K_{2}}}or, equivalently,EK2(EK1(P))=P{\displaystyle E_{K_{2}}(E_{K_{1}}(P))=P}.
It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage.
DES has also been proved not to be agroup, or more precisely, the set{EK}{\displaystyle \{E_{K}\}}(for all possible keysK{\displaystyle K}) underfunctional compositionis not a group, nor "close" to being a group.[50]This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such asTriple DESwould not increase the security, because repeated encryption (and decryptions) under different keys would be equivalent to encryption under another, single key.[51]
Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques.
SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper.
Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them.[52][53][54][55][56][57][58][59][60]
Concerns about security and the relatively slow operation of DES insoftwaremotivated researchers to propose a variety of alternativeblock cipherdesigns, which started to appear in the late 1980s and early 1990s: examples includeRC5,Blowfish,IDEA,NewDES,SAFER,CAST5andFEAL. Most of these designs kept the 64-bitblock sizeof DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In theSoviet UniontheGOST 28147-89algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used inRussialater.
Another approach to strengthening DES was the development ofTriple DES (3DES), which applies the DES algorithm three times to each data block to increase security. However, 3DES was later deprecated by NIST due to its inefficiencies and susceptibility to certain cryptographic attacks.
To address these security concerns, modern cryptographic systems rely on more advanced encryption techniques such as RSA, ECC, and post-quantum cryptography. These replacements aim to provide stronger resistance against both classical and quantum computing attacks.
A crucial aspect of DES involves itspermutations and key scheduling, which play a significant role in its encryption process. Analyzing these permutations helps in understanding DES's security limitations and the need for replacement algorithms.[61]
DES itself can be adapted and reused in a more secure scheme. Many former DES users now useTriple DES(TDES) which was described and analysed by one of DES's patentees (seeFIPSPub 46–3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative isDES-X, which increases the key size by XORing extra key material before and after DES.GDESwas a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.
On January 2, 1997, NIST announced that they wished to choose a successor to DES.[62]In 2001, after an international competition, NIST selected a new cipher, theAdvanced Encryption Standard(AES), as a replacement.[63]The algorithm which was selected as the AES was submitted by its designers under the nameRijndael. Other finalists in the NISTAES competitionincludedRC6,Serpent,MARS, andTwofish.
https://en.wikipedia.org/wiki/Data_Encryption_Standard#Round_function
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots. The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).
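As a small illustration of prime-field arithmetic, the following sketch implements multiplication and inversion in GF(p) for a prime p, with the inverse computed by the extended Euclidean algorithm as just described; the function names are chosen purely for illustration.

```python
def gf_p_mul(a: int, b: int, p: int) -> int:
    """Multiply two elements of GF(p): take the remainder modulo p."""
    return (a * b) % p

def gf_p_inv(a: int, p: int) -> int:
    """Multiplicative inverse of a modulo the prime p,
    computed with the extended Euclidean algorithm."""
    if a % p == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % p            # old_s * a + (something) * p == 1

# Example in GF(7): the inverse of 3 is 5, since 3 * 5 = 15 = 1 (mod 7).
assert gf_p_inv(3, 7) == 5
assert gf_p_mul(3, gf_p_inv(3, 7), 7) == 1
```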
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
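To make the construction concrete, here is a minimal sketch of arithmetic in GF(p)[X]/(P), with elements stored as coefficient lists (lowest degree first) and products reduced modulo a chosen monic irreducible polynomial P; the representation and helper names are illustrative only.

```python
# Elements of GF(p^n) as lists of n coefficients in GF(p), low degree first.
# P is the monic irreducible polynomial of degree n, given by its n + 1
# coefficients (also low degree first, leading coefficient 1).

def poly_add(a, b, p):
    return [(x + y) % p for x, y in zip(a, b)]

def poly_mul_mod(a, b, P, p):
    n = len(P) - 1
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # Reduce modulo P: replace X^deg (deg >= n) using X^n = -(P[0] + ... + P[n-1] X^(n-1)).
    for deg in range(len(prod) - 1, n - 1, -1):
        c = prod[deg]
        if c:
            prod[deg] = 0
            for k in range(n):
                prod[deg - n + k] = (prod[deg - n + k] - c * P[k]) % p
    return prod[:n]

# Example: GF(9) = GF(3)[X]/(X^2 + 1); alpha = [0, 1] satisfies alpha^2 = -1 = 2.
P = [1, 0, 1]                       # X^2 + 1 over GF(3)
alpha = [0, 1]
assert poly_mul_mod(alpha, alpha, P, 3) == [2, 0]
```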
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which make the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
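The operation tables themselves are small enough to generate mechanically from the relation α² = 1 + α; the short sketch below does exactly that, representing each element as a pair of GF(2) coefficients (a, b) for a + bα (the encoding and names are chosen purely for illustration).

```python
# Elements of GF(4) as pairs (a, b) meaning a + b*alpha, with alpha^2 = 1 + alpha.
elements = [(0, 0), (1, 0), (0, 1), (1, 1)]            # 0, 1, alpha, 1 + alpha
names = {(0, 0): "0", (1, 0): "1", (0, 1): "a", (1, 1): "1+a"}

def add(x, y):
    # Addition is coefficient-wise over GF(2), i.e. XOR of the coefficients.
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2
    #                            = (ac + bd) + (ad + bc + bd)*alpha
    a, b = x
    c, d = y
    return ((a & c) ^ (b & d), (a & d) ^ (b & c) ^ (b & d))

for symbol, op in (("+", add), ("*", mul)):
    print("\nTable for", symbol)
    for x in elements:
        print(" ".join(names[op(x, y)].rjust(3) for y in elements))
```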
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
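A direct transcription of these formulas into code, with elements of GF(p²) stored as pairs (a, b) for a + bα and r a fixed quadratic non-residue, might look as follows; this is a sketch, and the choice r = p − 1 (that is, −1) for p ≡ 3 mod 4 is only an example.

```python
# Elements of GF(p^2) as pairs (a, b) meaning a + b*alpha, with alpha^2 = r,
# where r is a quadratic non-residue modulo the odd prime p.

def gf_p2_add(x, y, p):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def gf_p2_mul(x, y, r, p):
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def gf_p2_inv(x, r, p):
    a, b = x
    norm_inv = pow(a * a - r * b * b, -1, p)   # (a^2 - r b^2)^(-1) mod p (Python 3.8+)
    return (a * norm_inv % p, (-b) * norm_inv % p)

# Example: p = 7 (7 = 3 mod 4), so r = -1 = 6 is a non-residue and alpha^2 = -1.
p, r = 7, 6
x = (3, 2)                                     # 3 + 2*alpha
assert gf_p2_mul(x, gf_p2_inv(x, r, p), r, p) == (1, 0)
```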
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
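Because the coefficients are bits, an element of GF(16) can also be packed into a 4-bit integer, and multiplication becomes a carry-less (XOR-based) product followed by reduction using α⁴ = α + 1; the small sketch below takes that route (names are illustrative).

```python
def gf16_mul(x: int, y: int) -> int:
    """Multiply two elements of GF(16), each encoded as a 4-bit integer whose
    bit i is the coefficient of alpha^i, reduced with alpha^4 = alpha + 1."""
    result = 0
    for i in range(4):
        if (y >> i) & 1:
            result ^= x << i          # carry-less (XOR) partial products
    # Reduce bits 6..4 using alpha^4 = alpha + 1,
    # i.e. alpha^k -> alpha^(k-3) + alpha^(k-4) for k >= 4.
    for bit in range(6, 3, -1):
        if (result >> bit) & 1:
            result ^= (1 << bit) | (1 << (bit - 3)) | (1 << (bit - 4))
    return result

# alpha is 0b0010; alpha^4 should equal alpha + 1 = 0b0011.
a = 0b0010
assert gf16_mul(gf16_mul(gf16_mul(a, a), a), a) == 0b0011
```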
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements ofGF(q){\displaystyle \mathrm {GF} (q)}is cyclic, so there exists an elementa{\displaystyle a}such that theq−1{\displaystyle q-1}non-zero elements area,a2,…,aq−2,aq−1=1{\displaystyle a,a^{2},\ldots ,a^{q-2},a^{q-1}=1}.
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inF{\displaystyle F}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
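For concreteness, a minimal square-and-multiply sketch for computing a^n in a prime field is shown below; Python's built-in pow(a, n, p) does the same job, and the explicit loop is only to illustrate the technique.

```python
def power_mod(a: int, n: int, p: int) -> int:
    """Compute a^n modulo p by square-and-multiply, scanning the bits of n."""
    result = 1
    base = a % p
    while n > 0:
        if n & 1:                     # multiply in the current power of a
            result = (result * base) % p
        base = (base * base) % p      # square for the next bit of n
        n >>= 1
    return result

# Fermat's little theorem: 3^100 = 1 modulo the prime 101.
assert power_mod(3, 100, 101) == pow(3, 100, 101) == 1
```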
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
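A small sketch of this log-table approach for a prime field is given below: it picks a primitive element, tabulates discrete logarithms and Zech's logarithms, and then multiplies and adds nonzero elements purely through table lookups. The field GF(13) with generator 2 is only an example.

```python
# Discrete-log and Zech-log tables for GF(13) with primitive element g = 2.
p, g = 13, 2
log, exp = {}, {}
x = 1
for n in range(p - 1):
    exp[n] = x                      # exp[n] = g^n
    log[x] = n                      # log[x] = n  with  g^n = x
    x = (x * g) % p

# Zech's logarithm: zech[n] = log(g^n + 1), or None when g^n + 1 == 0.
zech = {n: log.get((exp[n] + 1) % p) for n in range(p - 1)}

def mul(a: int, b: int) -> int:     # multiplication via log addition
    return exp[(log[a] + log[b]) % (p - 1)]

def add(a: int, b: int) -> int:     # addition of two nonzero elements via Zech logs
    m, n = log[a], log[b]
    z = zech[(m - n) % (p - 1)]     # a + b = g^n * (g^(m-n) + 1)
    return 0 if z is None else exp[(n + z) % (p - 1)]

assert mul(5, 7) == (5 * 7) % p
assert add(5, 7) == (5 + 7) % p
```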
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(2^2) = GF(4), GF(2^3) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2).
The union of GF(4) and GF(8) has thus 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 − X over GF(2).
The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that X^6 + X + 1 is an irreducible factor of the 63rd cyclotomic polynomial, so its roots are primitive elements of GF(64).
This shows that a convenient choice for constructingGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X^6 + X + 1). Indeed, the element corresponding to X is then a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
InGF(q){\displaystyle \mathrm {GF} (q)}, the identity (x + y)^p = x^p + y^p implies that the mapφ:x↦xp{\displaystyle \varphi :x\mapsto x^{p}}is aGF(p){\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismofGF(q){\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfieldGF(p){\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting by φ^k thecompositionof φ with itself k times, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as, otherwise, the polynomial X^{p^k} − X would have more than p^k roots.
There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means that GF(p^n) is aGalois extensionof GF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, if q = p^n, then X^q − X is the product of all the monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q − X, its degree divides n, as itssplitting fieldis contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n); all roots of P then belong to GF(p^n) and are roots of X^q − X, so P divides X^q − X. As X^q − X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
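The formula is easy to evaluate directly; the short sketch below implements it with a naive Möbius function and checks the count of 9 monic irreducible polynomials of degree 6 over GF(2) mentioned earlier (function names are illustrative).

```python
def mobius(d: int) -> int:
    """Naive Moebius function: 0 if d has a squared prime factor,
    otherwise (-1) raised to the number of prime factors of d."""
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:
                return 0              # squared prime factor
            result = -result
        k += 1
    return -result if d > 1 else result

def count_irreducible(q: int, n: int) -> int:
    """Number N(q, n) of monic irreducible polynomials of degree n over GF(q)."""
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

assert count_irreducible(2, 6) == 9   # matches the GF(64) count above
assert count_irreducible(2, 1) == 2   # X and X + 1
```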
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is thePDF417bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of thecarry-less product.
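As one concrete illustration of byte-oriented arithmetic, the sketch below multiplies two bytes as elements of GF(2^8) with the shift-and-XOR ("Russian peasant") method; the reducing polynomial x^8 + x^4 + x^3 + x + 1 used here is the one chosen by AES, and other codes may use a different one.

```python
def gf256_mul(a: int, b: int, reducer: int = 0x1B) -> int:
    """Multiply two bytes as elements of GF(2^8).

    `reducer` encodes the low 8 bits of the reducing polynomial; 0x1B
    corresponds to x^8 + x^4 + x^3 + x + 1 (the AES choice)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a               # carry-less "addition" is XOR
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= reducer              # reduce: x^8 -> x^4 + x^3 + x + 1
        b >>= 1
    return result

# In AES's field, 0x53 and 0xCA are multiplicative inverses of each other.
assert gf256_mul(0x53, 0xCA) == 0x01
```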
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime numberp, letF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}be an algebraic closure ofFp.{\displaystyle \mathbb {F} _{p}.}It is not only uniqueup toan isomorphism, as are all algebraic closures, but, unlike the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristicp.
This property results mainly from the fact that the elements ofFpn{\displaystyle \mathbb {F} _{p^{n}}}are exactly the roots ofxpn−x,{\displaystyle x^{p^{n}}-x,}and this defines an inclusionFpn⊂Fpnm{\displaystyle \mathbb {\mathbb {F} } _{p^{n}}\subset \mathbb {F} _{p^{nm}}}form>1.{\displaystyle m>1.}These inclusions allow writing informallyF¯p=⋃n≥1Fpn.{\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.}The formal validation of this notation results from the fact that the above field inclusions form adirected setof fields; Itsdirect limitisF¯p,{\displaystyle {\overline {\mathbb {F} }}_{p},}which may thus be considered as "directed union".
Given aprimitive elementgmn{\displaystyle g_{mn}}ofFqmn,{\displaystyle \mathbb {F} _{q^{mn}},}the elementgmn(qmn−1)/(qn−1){\displaystyle g_{mn}^{(q^{mn}-1)/(q^{n}-1)}}is a primitive element ofFqn.{\displaystyle \mathbb {F} _{q^{n}}.}
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive elementgn{\displaystyle g_{n}}ofFqn{\displaystyle \mathbb {F} _{q^{n}}}in order that, whenevern=mh,{\displaystyle n=mh,}one hasgm=gnh,{\displaystyle g_{m}=g_{n}^{h},}wheregm{\displaystyle g_{m}}is the primitive element already chosen forFqm.{\displaystyle \mathbb {F} _{q^{m}}.}
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
https://en.wikipedia.org/wiki/Galois_field#Finite_fields
Data integrityis the maintenance of, and the assurance of, data accuracy and consistency over its entirelife-cycle.[1]It is a critical aspect to the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context even under the same general umbrella ofcomputing. It is at times used as a proxy term fordata quality,[2]whiledata validationis a prerequisite for data integrity.[3]
Data integrity is the opposite ofdata corruption.[4]The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon laterretrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused withdata security, the discipline of protecting data from unauthorized parties.
Any unintended change to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, andhuman error, is a failure of data integrity. If the change is the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, the consequences could range from something as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, or even to catastrophic loss of human life in alife-critical system.
Physical integrity deals with challenges which are associated with correctly storing and fetching the data itself. Challenges with physical integrity may includeelectromechanicalfaults, design flaws, materialfatigue,corrosion,power outages, natural disasters, and other special environmental hazards such asionizing radiation, extreme temperatures, pressures andg-forces. Ensuring physical integrity includes methods such asredundanthardware, anuninterruptible power supply, certain types ofRAIDarrays,radiation hardenedchips,error-correcting memory, use of aclustered file system, using file systems that employ block levelchecksumssuch asZFS, storage arrays that compute parity calculations such asexclusive oror use acryptographic hash functionand even having awatchdog timeron critical subsystems.
Physical integrity often makes extensive use of error detecting algorithms known aserror-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as theDamm algorithmorLuhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected throughhash functions.
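As an example of such a transcription check, here is a minimal sketch of the Luhn algorithm used to validate credit-card-style digit strings; it is purely illustrative, and real card validation involves more than this checksum.

```python
def luhn_is_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    # Walk the digits right to left, doubling every second one.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9            # same as summing the two digits of d
        total += d
    return total % 10 == 0

assert luhn_is_valid("79927398713")       # classic Luhn test number
assert not luhn_is_valid("79927398710")   # a single-digit transcription error
```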
In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computerfile systemmay be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and preventsilent data corruption. As another example, a database management system might be compliant with theACIDproperties, but the RAID controller or hard disk drive's internal write cache might not be.
This type of integrity is concerned with thecorrectnessorrationalityof a piece of data, given a particular context. This includes topics such asreferential integrityandentity integrityin arelational databaseor correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges includesoftware bugs, design flaws, and human errors. Common methods of ensuring logical integrity include things such ascheck constraints,foreign key constraints, programassertions, and other run-time sanity checks.
Physical and logical integrity often share many challenges such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own.
If a data sector only has a logical error, it can be reused by overwriting it with new data. In case of a physical error, the affected data sector is permanently unusable.
Data integrity includes guidelines fordata retention, specifying or guaranteeing the length of time data can be retained in a particular database (typically arelational database). To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry) causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and saves time that would otherwise be spent troubleshooting and tracing erroneous data and the errors it causes in algorithms.
Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as aCustomerrecord being allowed to link to purchasedProducts, but not to unrelated data such asCorporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixedschemaor a predefined set of rules. An example being textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. It also specifies the conditions on how the data value could be re-derived.
Data integrity is normally enforced in adatabase systemby a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of therelational data model: entity integrity, referential integrity and domain integrity.
If a database supports these features, it is the responsibility of the database to ensure data integrity as well as theconsistency modelfor the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports theconsistency modelfor the data storage and retrieval.
Having a single, well-controlled, and well-defined data-integrity system increases:
Moderndatabasessupport these features (seeComparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity. Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases.
An example of a data-integrity mechanism is the parent-and-child relationship of related records. If a parent record owns one or more related child records all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data so that no child record can exist without a parent (also called being orphaned) and that no parent loses their child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application.
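A minimal sketch of this parent-and-child enforcement, using Python's built-in sqlite3 module (with foreign-key enforcement switched on, since SQLite leaves it off by default); the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")     # SQLite disables FK checks by default
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
                  id INTEGER PRIMARY KEY,
                  parent_id INTEGER NOT NULL REFERENCES parent(id))""")

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (10, 1)")      # fine

try:
    conn.execute("INSERT INTO child (id, parent_id) VALUES (11, 2)")  # no such parent
except sqlite3.IntegrityError as exc:
    print("orphan child rejected:", exc)

try:
    conn.execute("DELETE FROM parent WHERE id = 1")   # parent still owns child 10
except sqlite3.IntegrityError as exc:
    print("parent with children protected:", exc)
```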
Various research results show that neither widespreadfilesystems(includingUFS,Ext,XFS,JFSandNTFS) norhardware RAIDsolutions provide sufficient protection against data integrity problems.[5][6][7][8][9]
Some filesystems (includingBtrfsandZFS) provide internal data andmetadatachecksumming that is used for detectingsilent data corruptionand improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[10]This approach allows improved data integrity protection covering the entire data paths, which is usually known asend-to-end data protection.[11]
https://en.wikipedia.org/wiki/Data_integrity
Cryptography, orcryptology(fromAncient Greek:κρυπτός,romanized:kryptós"hidden, secret"; andγράφεινgraphein, "to write", or-λογία-logia, "study", respectively[1]), is the practice and study of techniques forsecure communicationin the presence ofadversarialbehavior.[2]More generally, cryptography is about constructing and analyzingprotocolsthat prevent third parties or the public from reading private messages.[3]Modern cryptography exists at the intersection of the disciplines of mathematics,computer science,information security,electrical engineering,digital signal processing, physics, and others.[4]Core concepts related toinformation security(data confidentiality,data integrity,authentication, andnon-repudiation) are also central to cryptography.[5]Practical applications of cryptography includeelectronic commerce,chip-based payment cards,digital currencies,computer passwords, andmilitary communications.
Cryptography prior to the modern age was effectively synonymous withencryption, converting readable information (plaintext) to unintelligiblenonsensetext (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literatureoften uses the names"Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for theeavesdroppingadversary.[6]Since the development ofrotor cipher machinesinWorld War Iand the advent of computers inWorld War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based onmathematical theoryand computer science practice; cryptographicalgorithmsare designed aroundcomputational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements ininteger factorizationalgorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted.Information-theoretically secureschemes that provably cannot be broken even with unlimited computing power, such as theone-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raiseda number of legal issuesin theInformation Age. Cryptography's potential for use as a tool for espionage andseditionhas led many governments to classify it as a weapon and to limit or even prohibit its use and export.[7]In some jurisdictions where the use of cryptography is legal, laws permit investigators tocompel the disclosureofencryption keysfor documents relevant to an investigation.[8][9]Cryptography also plays a major role indigital rights managementandcopyright infringementdisputes with regard todigital media.[10]
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story byEdgar Allan Poe.[11][12]
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (calledplaintext) into an unintelligible form (calledciphertext).[13]Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. Acipher(or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such asauthenticationor integrity checks.
There are two main types of cryptosystems:symmetricandasymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key.[14]Examples of asymmetric systems includeDiffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), andPost-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard).[15]Insecure symmetric algorithms include children's language tangling schemes such asPig Latinor othercant, and all historical cryptographic schemes, however seriously intended, prior to the invention of theone-time padearly in the 20th century.
Incolloquialuse, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with acode word(for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysisis the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English,[16]while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis.[17][18]English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above.RFC2828advises thatsteganographyis sometimes included in cryptology.[19]
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is calledcryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.[20][21]
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion ofmessagesfrom a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors oreavesdropperswithout secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensuresecrecyin communications, such as those ofspies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication,digital signatures,interactive proofsandsecure computation, among others.
The main classical cipher types aretransposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), andsubstitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in theLatin alphabet).[22]Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was theCaesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet.[23]Suetoniusreports thatJulius Caesarused it with a shift of three to communicate with his generals.Atbashis an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone inEgypt(c.1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
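As a quick illustration, a Caesar shift over the 26-letter Latin alphabet can be written in a few lines; the sketch below handles ASCII letters only.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each ASCII letter by `shift` positions, leaving other characters alone."""
    out = []
    for ch in text:
        if ch.isalpha() and ch.isascii():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)
    return "".join(out)

assert caesar("attack at dawn", 3) == "dwwdfn dw gdzq"
assert caesar(caesar("attack at dawn", 3), -3) == "attack at dawn"
```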
TheGreeks of Classical timesare said to have known of ciphers (e.g., thescytaletransposition cipher claimed to have been used by theSpartanmilitary).[24]Steganography(i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, fromHerodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[13]Other steganography methods involve 'hiding in plain sight,' such as using amusic cipherto disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use ofinvisible ink,microdots, anddigital watermarksto conceal information.
In India, the 2000-year-oldKama SutraofVātsyāyanaspeaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.[13]
InSassanid Persia, there were two secret scripts, according to the Muslim authorIbn al-Nadim: thešāh-dabīrīya(literally "King's script") which was used for official correspondence, and therāz-saharīyawhich was used to communicate secret messages with other countries.[25]
David Kahnnotes inThe Codebreakersthat modern cryptology originated among theArabs, the first people to systematically document cryptanalytic methods.[26]Al-Khalil(717–786) wrote theBook of Cryptographic Messages, which contains the first use ofpermutations and combinationsto list all possible Arabic words with and without vowels.[27]
Ciphertexts produced by aclassical cipher(and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery offrequency analysis, nearly all such ciphers could be broken by an informed attacker.[28]Such classical ciphers still enjoy popularity today, though mostly aspuzzles(seecryptogram). TheArab mathematicianandpolymathAl-Kindi wrote a book on cryptography entitledRisalah fi Istikhraj al-Mu'amma(Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.[29][30]
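The core of frequency analysis is nothing more than counting letters; a minimal sketch:

```python
from collections import Counter

def letter_frequencies(ciphertext):
    """Relative letter frequencies of a ciphertext, most common first."""
    letters = [ch for ch in ciphertext.lower() if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return [(ch, n / total) for ch, n in counts.most_common()]

# In a long monoalphabetically enciphered English text, the most frequent
# ciphertext letter very likely stands for plaintext 'e'.
print(letter_frequencies("dwwdfn dw gdzq")[:3])
```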
Language letter frequencies may offer little help for some extended historical encryption techniques such ashomophonic cipherthat tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of thepolyalphabetic cipher, most clearly byLeon Battista Albertiaround the year 1467, though there is some indication that it was already known to Al-Kindi.[30]Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automaticcipher device, a wheel that implemented a partial realization of his invention. In theVigenère cipher, apolyalphabetic cipher, encryption uses akey word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th centuryCharles Babbageshowed that the Vigenère cipher was vulnerable toKasiski examination, but this was first published about ten years later byFriedrich Kasiski.[31]
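A sketch of Vigenère encryption, in which the key word selects the shift applied to each successive letter (ASCII letters only; the key is assumed to be alphabetic):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Encrypt (or decrypt) ASCII letters with the Vigenère cipher; the key word
    supplies a repeating sequence of Caesar shifts."""
    out, k = [], 0
    for ch in text:
        if ch.isalpha() and ch.isascii():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[k % len(key)].lower()) - ord('a')
            if decrypt:
                shift = -shift
            out.append(chr(base + (ord(ch) - base + shift) % 26))
            k += 1
        else:
            out.append(ch)
    return "".join(out)

ct = vigenere("attack at dawn", "lemon")
assert vigenere(ct, "lemon", decrypt=True) == "attack at dawn"
```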
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as thecipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's owncipher disk,Johannes Trithemius'tabula rectascheme, andThomas Jefferson'swheel cypher(not publicly known, and reinvented independently byBazeriesaround 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among themrotor machines—famously including theEnigma machineused by the German government and military from the late 1920s and duringWorld War II.[32]The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[33]
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970sIBMpersonnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States.[34]In 1976Whitfield DiffieandMartin Hellmanpublished the Diffie–Hellman key exchange algorithm.[35]In 1977 theRSA algorithmwas published inMartin Gardner'sScientific Americancolumn.[36]Since then, cryptography has become a widely used tool in communications,computer networks, andcomputer securitygenerally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA, and some other systems, are secure, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible; however, these are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.[37]
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so the key lengths considered adequate are similarly advancing.[38] The potential effects of quantum computing are already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.[5]
Claude Shannon's two papers, his1948 paperoninformation theory, and especially his1949 paperon cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography.[39][40]His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis",[41]and as having turned cryptography from an "art to a science".[42]As a result of his contributions and work, he has been described as the "founding father of modern cryptography".[43]
Prior to the early 20th century, cryptography was mainly concerned withlinguisticandlexicographicpatterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory,computational complexity, statistics,combinatorics,abstract algebra,number theory, andfinite mathematics.[44]Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems andquantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation onbinarybitsequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[35]
Symmetric key ciphers are implemented as eitherblock ciphersorstream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
TheData Encryption Standard(DES) and theAdvanced Encryption Standard(AES) are block cipher designs that have been designatedcryptography standardsby the US government (though DES's designation was finally withdrawn after the AES was adopted).[45]Despite its deprecation as an official standard, DES (especially its still-approved and much more securetriple-DESvariant) remains quite popular; it is used across a wide range of applications, from ATM encryption[46]toe-mail privacy[47]andsecure remote access.[48]Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such asFEAL.[5][49]
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like theone-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material.RC4is a widely used stream cipher.[5]Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of aPseudorandom number generator) and applying anXORoperation to each bit of the plaintext with each bit of the keystream.[50]
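A minimal sketch of the keystream idea just described, in which a keystream is XORed with the plaintext; here the keystream generator is a hash-based stand-in used purely for illustration, not a secure or standard construction such as RC4 or a real block cipher mode.

    import hashlib

    def keystream(key, nonce, length):
        # Illustrative keystream: hash the key, nonce, and a running counter to
        # produce successive blocks of pseudorandom bytes (a stand-in for a real cipher).
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xor_stream(data, key, nonce):
        # XORing with the keystream both encrypts and decrypts, since XOR is its own inverse.
        return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

    key, nonce = b"demo key", b"demo nonce"
    ciphertext = xor_stream(b"attack at dawn", key, nonce)
    print(xor_stream(ciphertext, key, nonce))   # b'attack at dawn'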
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[5][51] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions, discussed in their own section below, are a third type of cryptographic algorithm: they take a message of any length as input and output a short, fixed-length hash, which can be used in (for example) a digital signature, and for a good hash function an attacker cannot find two messages that produce the same hash.
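As an illustration of the MAC idea, using the HMAC construction from Python's standard library (the key, message, and hash choice are illustrative):

    import hashlib
    import hmac

    key = b"shared secret key"            # known only to sender and receiver
    message = b"transfer 100 to alice"

    # Sender computes a tag over the message with the shared key.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Receiver recomputes the tag and compares in constant time; any change to the
    # message (or use of the wrong key) makes the comparison fail.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))   # True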
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is thekey managementnecessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as thesquareof the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
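The quadratic growth follows from counting distinct pairs of parties: n participants need n(n-1)/2 pairwise keys. A quick illustration:

    def pairwise_keys(n):
        # Each of the n participants shares one key with each of the other n - 1,
        # and each key is counted once per pair.
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, "parties need", pairwise_keys(n), "symmetric keys")
    # 10 -> 45, 100 -> 4950, 1000 -> 499500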
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion ofpublic-key(also, more generally, calledasymmetric key) cryptography in which two different but mathematically related keys are used—apublickey and aprivatekey.[54]A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[55]The historianDavid Kahndescribed public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[56]
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, thepublic keyis used for encryption, while theprivateorsecret keyis used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting theDiffie–Hellman key exchangeprotocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on ashared encryption key.[35]TheX.509standard defines the most commonly used format forpublic key certificates.[57]
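The exchange can be sketched with deliberately tiny, insecure parameters; real deployments use primes thousands of bits long (or elliptic-curve groups) chosen according to published standards.

    import secrets

    # Toy public parameters: a small prime p and generator g (insecure, illustration only).
    p, g = 23, 5

    a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

    A = pow(g, a, p)                      # Alice sends A over the open channel
    B = pow(g, b, p)                      # Bob sends B over the open channel

    # Each party combines the other's public value with its own secret exponent.
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    print(shared_alice == shared_bob)     # True -- both derive g^(a*b) mod p

An eavesdropper sees p, g, A, and B, but recovering the shared secret from these is the discrete logarithm problem mentioned later in this article.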
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 byRonald Rivest,Adi Shamir, andLen Adleman, whose solution has since become known as theRSA algorithm.[58]
TheDiffie–HellmanandRSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Otherasymmetric-key algorithmsinclude theCramer–Shoup cryptosystem,ElGamal encryption, and variouselliptic curve techniques.
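A toy sketch of the RSA mechanism with very small primes, wholly insecure and omitting the padding schemes used in practice:

    # Toy RSA with tiny primes, for illustration only (requires Python 3.8+ for pow(e, -1, phi)).
    p, q = 61, 53
    n = p * q                       # 3233, the public modulus
    phi = (p - 1) * (q - 1)         # 3120
    e = 17                          # public exponent, coprime to phi
    d = pow(e, -1, phi)             # private exponent: modular inverse of e (2753)

    message = 65
    ciphertext = pow(message, e, n)      # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
    print(ciphertext, recovered)         # 2790 65

Recovering d from the public values (n, e) requires factoring n, which is easy for a four-digit modulus but infeasible for the moduli of hundreds of digits used in practice.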
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[59]Reportedly, around 1970,James H. Ellishad conceived the principles of asymmetric key cryptography. In 1973,Clifford Cocksinvented a solution that was very similar in design rationale to RSA.[59][60]In 1974,Malcolm J. Williamsonis claimed to have developed the Diffie–Hellman key exchange.[61]
Public-key cryptography is also used for implementingdigital signatureschemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else toforge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one forsigning, in which a secret key is used to process the message (or a hash of the message, or both), and one forverification, in which the matching public key is used with the message to check the validity of the signature. RSA andDSAare two of the most popular digital signature schemes. Digital signatures are central to the operation ofpublic key infrastructuresand many network security schemes (e.g.,SSL/TLS, manyVPNs, etc.).[49]
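The sign-then-verify flow can be sketched with the toy RSA numbers from the earlier example; real schemes hash the full message and apply a padding scheme such as PSS, whereas here the hash is shrunk to fit the tiny modulus purely for illustration.

    import hashlib

    n, e, d = 3233, 17, 2753                # toy RSA key from the earlier sketch

    message = b"I agree to the contract"
    # Hash the message, then shrink it to fit the tiny toy modulus (illustration only).
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

    signature = pow(h, d, n)                # signing: apply the private exponent to the hash
    print(pow(signature, e, n) == h)        # verification with the public key -> True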
Public-key algorithms are most often based on thecomputational complexityof "hard" problems, often fromnumber theory. For example, the hardness of RSA is related to theinteger factorizationproblem, while Diffie–Hellman and DSA are related to thediscrete logarithmproblem. The security ofelliptic curve cryptographyis based on number theoretic problems involvingelliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such asmodularmultiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonlyhybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[5]
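A schematic sketch of the hybrid pattern, with stand-ins for the real components: the hash-derived XOR 'cipher' and the toy RSA numbers are placeholders for an actual symmetric cipher (e.g., AES) and a real public-key encryption scheme with padding.

    import hashlib
    import secrets

    # Toy RSA public/private key from the earlier sketch; insecure, illustration only.
    n, e, d = 3233, 17, 2753

    message = b"meet at the usual place"

    # 1. Encrypt the message with a fast symmetric scheme (here a hash-derived XOR stand-in).
    session_key = secrets.randbelow(n)
    pad = (hashlib.sha256(str(session_key).encode()).digest() * 2)[:len(message)]
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))

    # 2. Encrypt only the short session key with the slower public-key algorithm.
    wrapped_key = pow(session_key, e, n)

    # Receiver: recover the session key with the private exponent, then decrypt the message.
    recovered_key = pow(wrapped_key, d, n)
    pad2 = (hashlib.sha256(str(recovered_key).encode()).digest() * 2)[:len(message)]
    print(bytes(c ^ p for c, p in zip(ciphertext, pad2)))   # b'meet at the usual place'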
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1 but shares its general design, and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[52] Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.[53] Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
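For illustration, the fixed-length, one-way behaviour described above can be seen with a standard-library hash:

    import hashlib

    for msg in (b"fly at once", b"fly at once!"):
        print(msg, hashlib.sha256(msg).hexdigest())
    # Any input length yields a 256-bit (64 hex character) digest; a one-character
    # change produces an unrelated-looking value, and recovering the input from
    # the digest is designed to be computationally infeasible.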
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work atBell Labs,Claude Shannonproved that theone-time padcipher is unbreakable, provided the key material is trulyrandom, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[62]Mostciphers, apart from the one-time pad, can be broken with enough computational effort bybrute force attack, but the amount of effort needed may beexponentiallydependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
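A sketch of the one-time pad's mechanics; the security argument requires the key to be truly random, as long as the message, kept secret, and never reused, which is exactly what makes the scheme impractical for most uses.

    import secrets

    message = b"attack at dawn"
    pad = secrets.token_bytes(len(message))   # truly random key, exactly as long as the message

    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
    print(recovered)                          # b'attack at dawn'
    # Without the pad, every plaintext of this length is equally consistent with the
    # ciphertext, which is the substance of Shannon's proof of perfect secrecy.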
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts.[5] Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient.[63] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations.[64] This is a considerable improvement over brute force attacks.
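The brute-force idea can be illustrated against the toy Caesar-style cipher from earlier in this article: with only 26 possible keys, exhaustive search is trivial, whereas DES's 2^56 keys, and the 2^128 or more of modern ciphers, put the same approach out of reach.

    import string

    def caesar(text, shift):
        alphabet = string.ascii_lowercase
        shifted = alphabet[shift % 26:] + alphabet[:shift % 26]
        return text.translate(str.maketrans(alphabet, shifted))

    ciphertext = "gmz bu podf"
    for shift in range(26):
        candidate = caesar(ciphertext, -shift)    # try undoing each possible shift
        if "fly" in candidate:                    # a crude plaintext-recognition test
            print(shift, candidate)               # 1 fly at once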
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their proposal in the mid-1980s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis[65] and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too-short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are often employed because they are far more cost-effective and feasible to carry out in a reasonable amount of time than pure cryptanalysis.
Much of the theoretical work in cryptography concerns cryptographic primitives (algorithms with basic cryptographic properties) and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, orcryptosystem. Cryptosystems (e.g.,El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g.,chosen-plaintext attack (CPA)security in therandom oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protectedbackupdata). Such cryptosystems are sometimes calledcryptographic protocols.
Some widely known cryptosystems include RSA,Schnorr signature,ElGamal encryption, andPretty Good Privacy(PGP). More complex cryptosystems includeelectronic cash[66]systems,signcryptionsystems, etc. Some more 'theoretical'[clarification needed]cryptosystems includeinteractive proof systems,[67](likezero-knowledge proofs)[68]and systems forsecret sharing.[69][70]
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited to such environments, which impose strict constraints on power consumption and processing power while still requiring adequate security.[71] Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to achieve the standard set by the National Institute of Standards and Technology.[72]
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.[73] Some tools, such as BitLocker and VeraCrypt, are generally not based on public-private key cryptography: VeraCrypt, for example, uses a password hash to generate the single private key, although it can also be configured to operate with public-private key pairs. The open-source encryption library OpenSSL provides free and open-source encryption software and tools. The most commonly used encryption cipher is AES,[74] as it has hardware acceleration on all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher-based construction commonly used on mobile devices, which are typically ARM-based and do not feature the AES-NI instruction set extension.
Cryptography can be used to secure communications by encrypting them. Websites use encryption viaHTTPS.[75]"End-to-end" encryption, where only sender and receiver can read messages, is implemented for email inPretty Good Privacyand for secure messaging in general inWhatsApp,SignalandTelegram.[75]
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker.[75]Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.[75]
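A minimal sketch of the store-hash-then-compare flow just described; real systems use a dedicated, deliberately slow password-hashing function such as bcrypt, scrypt, or Argon2 rather than a single fast hash, so the salted SHA-256 here only illustrates the flow.

    import hashlib
    import hmac
    import os

    def store_password(password):
        # Store only a random salt and the salted hash, never the plaintext password.
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, digest = store_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))   # True
    print(verify_password("wrong guess", salt, digest))                    # False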
Encryption is sometimes applied to an entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.[75]
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which underpin cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash functions, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP).
Cryptography has long been of interest to intelligence gathering andlaw enforcement agencies.[9]Secret communications may be criminal or eventreasonous.[citation needed]Because of its facilitation ofprivacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. InChinaandIran, a license is still required to use cryptography.[7]Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws inBelarus,Kazakhstan,Mongolia,Pakistan, Singapore,Tunisia, andVietnam.[76]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography.[9]One particularly important issue has been theexport of cryptographyand cryptographic software and hardware. Probably because of the importance of cryptanalysis inWorld War IIand an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on theUnited States Munitions List.[77]Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
In the 1990s, there were several challenges to US export regulation of cryptography. After thesource codeforPhilip Zimmermann'sPretty Good Privacy(PGP) encryption program found its way onto the Internet in June 1991, a complaint byRSA Security(then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and theFBI, though no charges were ever filed.[78][79]Daniel J. Bernstein, then a graduate student atUC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based onfree speechgrounds. The 1995 caseBernstein v. United Statesultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected asfree speechby the United States Constitution.[80]
In 1996, thirty-nine countries signed theWassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[81]Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[82]there are no longer very many restrictions on key sizes in US-exportedmass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourcedweb browserssuch asFirefoxorInternet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., viaTransport Layer Security). TheMozilla ThunderbirdandMicrosoft OutlookE-mail clientprograms similarly can transmit and receive emails via TLS, and can send and receive email encrypted withS/MIME. Many Internet users do not realize that their basic application software contains such extensivecryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.[citation needed]
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy.[9] The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[83] DES was designed to be resistant to differential cryptanalysis,[84] a powerful and general cryptanalytic technique known to the NSA and IBM that became publicly known only when Biham and Shamir rediscovered and announced it in the late 1980s.[85] According to Steven Levy, IBM had discovered differential cryptanalysis,[79] but kept the technique secret at the NSA's request. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993Clipper chipaffair, an encryption microchip intended to be part of theCapstonecryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (calledSkipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation ofKerckhoffs's Principle, as the scheme included a specialescrow keyheld by the government for use by law enforcement (i.e.wiretapping).[79]
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use ofcopyrightedmaterial, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. PresidentBill Clintonsigned theDigital Millennium Copyright Act(DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[86]This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in theEU Copyright Directive. Similar restrictions are called for by treaties signed byWorld Intellectual Property Organizationmember-states.
TheUnited States Department of JusticeandFBIhave not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one.Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into anIntelsecurity design for fear of prosecution under the DMCA.[87]CryptologistBruce Schneierhas argued that the DMCA encouragesvendor lock-in, while inhibiting actual measures toward cyber-security.[88]BothAlan Cox(longtimeLinux kerneldeveloper) andEdward Felten(and some of his students at Princeton) have encountered problems related to the Act.Dmitry Sklyarovwas arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible forBlu-rayandHD DVDcontent scrambling werediscovered and released onto the Internet. In both cases, theMotion Picture Association of Americasent out numerous DMCA takedown notices, and there was a massive Internet backlash[10]triggered by the perceived impact of such notices onfair useandfree speech.
In the United Kingdom, theRegulation of Investigatory Powers Actgives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[8]Successful prosecutions have occurred under the Act; the first, in 2009,[89]resulted in a term of 13 months' imprisonment.[90]Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case ofUnited States v. Fricosuaddressed whether a search warrant can compel a person to reveal anencryptionpassphraseor password.[91]TheElectronic Frontier Foundation(EFF) argued that this is a violation of the protection from self-incrimination given by theFifth Amendment.[92]In 2012, the court ruled that under theAll Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[93]
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016FBI–Apple encryption disputeconcerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (such as that of a drive which has been securely wiped).
https://en.wikipedia.org/wiki/Cryptography#The_fundamental_trilogy